Asked by: Replication preventing drop of linked server not used in replication.

An hour ago I manually created a linked server of type SQL Server for temporary admin use. The linked server exists on a server that does have replication subscriptions and publications, which were created several days ago. I just tried to remove the linked server and could not:

EXEC master.dbo.sp_dropserver @server = N'myserver\myinstance', @droplogins = 'droplogins'

Msg 20582, Level 16, State 1, Procedure sp_MSrepl_check_server, Line 31 [Batch Start Line 39]
Cannot drop server 'myserver\myinstance' because it is used as a Publisher in replication.

No, it isn't:

select is_publisher, is_subscriber, is_distributor
from sys.servers
where name = 'myserver\myinstance'

is_publisher is_subscriber is_distributor
------------ ------------- --------------
0            0             0

Bug? How do I get rid of this now?

All replies

Digging into the cause of the error, the check being done by sp_MSrepl_check_server is as follows:

-- Check to see if the server is a dist publisher
if object_id('msdb.dbo.MSdistpublishers') is not null
begin
    if exists (select * from msdb.dbo.MSdistpublishers
               where UPPER(name) = UPPER(@srvname) collate database_default)
    begin
        raiserror(20582, 16, -1, @srvname)
        return (1)
    end
end

So as a hack solution (since nothing else worked) I simply deleted the row from MSdistpublishers. No doubt this will leave things in some kind of fractured state, but clearly that was already the case, and at least this gets me past the blocking error. If anyone knows what the solution is "meant" to be, I'm still interested.

- Hi allmhuran,
>> If anyone knows what the solution is "meant" to be, I'm still interested.
Your solution should be fine. Based on my research, it seems that in most cases the issue is related to an improper server rename, which could have happened years before you encounter this error. Is that the case? If you have any other questions, please let me know.

- This worked for me as well. I was able to drop the linked server after deleting the row from the MSdistpublishers table in the msdb database. Thank you very much for the solution.
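The logic of the check quoted above can be modeled in plain Python for clarity. This is only an illustrative simulation of the case-insensitive name match sp_MSrepl_check_server performs against msdb.dbo.MSdistpublishers; the function name and sample data are invented for the example.

```python
# Illustrative pure-Python model of the sp_MSrepl_check_server check: a server
# can only be dropped if its name has no case-insensitive match among the rows
# of msdb.dbo.MSdistpublishers. The data here is made up for the example.

def blocks_drop(server_name, distpublisher_names):
    """Return True if the stale-metadata check would raise error 20582."""
    return any(server_name.upper() == name.upper() for name in distpublisher_names)

# A stale row left behind (e.g. by an old server rename) blocks the drop:
stale_rows = ["MYSERVER\\MYINSTANCE"]
print(blocks_drop("myserver\\myinstance", stale_rows))   # True: sp_dropserver fails

# After deleting the stale row, the same drop succeeds:
print(blocks_drop("myserver\\myinstance", []))           # False
```

This also shows why the sys.servers flags in the question can all be 0 while the drop still fails: the procedure never consults sys.servers, only the MSdistpublishers metadata table.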
https://social.msdn.microsoft.com/Forums/en-US/78df662a-81d2-454e-9ee9-0aa1068577cc/replication-preventing-drop-of-linked-server-not-used-in-replication?forum=sqlreplication
#1 Members - Reputation: 122 Posted 06 October 2000 - 07:44 PM

#2 Members - Reputation: 122 Posted 07 October 2000 - 05:21 PM

SGI has recommended to the ARB that Magician be adopted as the OpenGL standard for Java bindings, so that is one mark in its favor. You mentioned that you were interested in working through the NeHe tutorials. The NeHe tutorials have been ported to Java by Hodglimm (linked below), but he used the GL4Java API. I would imagine this is a mark in favor of using GL4Java. I am not aware of NeHe ports that use Magician. The NeHe tutorials and the Java port are linked from the FAQ. There is also an article with information on Magician, but it's somewhat outdated (from back when Magician was to be cancelled). I am not as familiar with GL4Java, so I can't give you a personal impression. Magician has worked well for me. The performance is good and overall it's very stable (though I did have some problems with Magician and earlier versions of my video card drivers). Magician does a really good job of leveraging Java's OO capabilities while not straying too far from the standard OpenGL approach (at least in my opinion).

#3 Members - Reputation: 122 Posted 07 October 2000 - 06:17 PM

#4 Members - Reputation: 200 Posted 08 October 2000 - 03:24 PM

1 - Acquire the OpenGL Red Book
2 - Find a demo that came with Magician such as openGL/logo.java
3 - Erase all code that is applet only ( init(), start()... )
4 - Erase all code that makes the letters
5 - Erase all the comments at the top because they annoy me
6 - Keep the rest

Basically what is left is what you would have if you were using GLUT with C/C++. You will have to erase a bunch of code, like the lighting and material stuff in the initialize() method, that the old app used, but you should be able to see the basics of OpenGL, especially if you follow step one. The book will make it a lot easier.
I am seriously thinking about setting up a small site for Magician demos, maybe some NeHe ports, but I might not have a place for it next semester and I know little HTML. If you wait a week I MAY have some more help for you. I looked to see if I have some of the early Magician stuff I did, but I wrote over them and they are a little too messy to be useful. I will probably clean the stuff up and put up the first NeHe tut or one from the Red Book tomorrow. At the very least I will post the framework for the basic Magician app.

I wanrned you! Didn't I warn you?! That colored chalk was forged by Lucifer himself!

#5 Members - Reputation: 122 Posted 09 October 2000 - 03:27 AM

Magician's implementation of OpenGL is not substantially different from standard OpenGL. In a couple of cases there are some minor variations, such as Magician using overloaded methods to handle various data type lengths rather than multiple methods with slightly different names. Any reference on OpenGL should do fine for you. The OpenGL SuperBible is another great starting place. It is going to be more difficult to learn OpenGL in Java than it would be to do so with C++. I am not aware of any books that teach OpenGL in Java - and I have looked for several months now. It's kind of depressing in a way. I know of several resources for learning OpenGL in Delphi or Visual Basic, but few for Java.

#6 Members - Reputation: 122 Posted 09 October 2000 - 10:47 AM

I really appreciate them, GKW. If you could tell me when you have any Magician related material, that would be great.

#7 Members - Reputation: 200 Posted 09 October 2000 - 05:52 PM

Jerry - Have you ever tried to do a JApplet with Magician? I get a "can't find GLEventListener class" error. AWT applets work fine. Not a big deal, but I would like to know why it does not work.

/* MagicianTemplate.java
   This app just draws a triangle and a rectangle, but you can erase the
   drawing code and put your own in and you should have a working application.
   This application uses Swing, so get JRE 1.3. Magician uses an AWT canvas,
   which is incompatible with the lightweight Swing components, but
   workarounds exist. I hope this will be fixed in JRE 1.4.
   GKW */

import javax.swing.*;
import java.awt.*;
import java.awt.event.*;
import com.hermetica.magician.*;

public class MagicianTemplate extends JFrame implements GLEventListener, WindowListener {

    // Components of the Magician OpenGL state machine
    GL gl = null;
    CoreGL coregl = new CoreGL();
    GLU glu = null;
    CoreGLU coreglu = new CoreGLU();

    // The GLComponent is the canvas where the graphics will be rendered
    GLComponent glc = null;

    // Creates new form MagicianTemplate
    public MagicianTemplate() {
        // Sets up the OpenGL state machine
        gl = coregl;
        glu = coreglu;
        setTitle( "Magician Template" );
    }

    public static void main( String args[] ) {
        MagicianTemplate mt = new MagicianTemplate();
        // Initializes the GLComponent for drawing
        mt.setupGL();
    }

    // Called once when the GLComponent is initialized
    public void initialize( final com.hermetica.magician.GLDrawable p1 ) {
        /* Sets the color that the color buffer is filled with during the
           glClear call in display() */
        gl.glClearColor( 0.0f, 0.0f, 0.0f, 0.0f );
        /* The depth buffer is now enabled. It is not needed in this
           application, but it gives you an idea of what can be put in
           initialize() */
        gl.glEnable( GL.GL_DEPTH_TEST );
    }

    // Handles the drawing
    public void display( final com.hermetica.magician.GLDrawable p1 ) {
        /* Clears the color buffer and the depth buffer. If you don't clear
           them, then the values from the last time display() was called will
           still be there and cause unintended results. */
        gl.glClear( GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT );
        /* Loads the identity matrix onto the stack. In this case it is the
           MODELVIEW matrix, which was set in reshape(). */
        gl.glLoadIdentity();
        /* Moves all geometry after this call 2.0 units into the screen. */
        gl.glTranslated( 0.0d, 0.0d, -2.0d );
        /* Saves the current matrix onto the stack to isolate the translation
           for the triangle from the quad. I did push the last translation
           onto the stack because I want both the triangle and the rectangle
           to be moved away from the viewer. */
        gl.glPushMatrix();
        /* Moves the origin -2.5 units along the x-axis */
        gl.glTranslated( -2.5d, 0.0d, 0.0d );
        /* Draws a triangle. Don't forget the glEnd() or things will get funky. */
        gl.glBegin( GL.GL_TRIANGLES );
        gl.glVertex3d( 0.0d, 1.0d, 0.0d );
        gl.glVertex3d( -1.0d, 0.0d, 0.0d );
        gl.glVertex3d( 1.0d, 0.0d, 0.0d );
        gl.glEnd();
        /* Destroys the current matrix and replaces it with the last matrix
           pushed onto the stack. The origin is now restored to the earlier
           matrix, when the origin was at ( 0.0, 0.0, -2.0 ). */
        gl.glPopMatrix();
        /* Since this application is so simple I don't really need to save
           the current matrix again, but I am. */
        gl.glPushMatrix();
        /* Move the origin 2.5 along the x-axis */
        gl.glTranslated( 2.5d, 0.0d, 0.0d );
        /* Draws a rectangle */
        gl.glBegin( GL.GL_QUADS );
        gl.glVertex3d( -1.0d, 1.0d, 0.0d );
        gl.glVertex3d( -1.0d, 0.0d, 0.0d );
        gl.glVertex3d( 1.0d, 0.0d, 0.0d );
        gl.glVertex3d( 1.0d, 1.0d, 0.0d );
        gl.glEnd();
        /* Don't really need this either for this application */
        gl.glPopMatrix();
        /* Kicks the machine into gear if it isn't already. */
        gl.glFlush();
    }

    /* Called at the creation of the rendering canvas and whenever the canvas
       is resized in the GUI. */
    public void reshape( GLDrawable component, int x, int y, int width, int height ) {
        /* Sets up the viewport */
        gl.glViewport( component, x, y, width, height );
        /* Makes the projection matrix the current matrix */
        gl.glMatrixMode( GL.GL_PROJECTION );
        /* Load the identity matrix into the projection matrix just to make
           sure it is clean. */
        gl.glLoadIdentity();
        /* Sets up the view frustum. Make sure the near and far clipping
           planes are positive. */
        gl.glFrustum( -2.0, 2.0, -2.0, 2.0, 1.0, 5.0 );
        /* Changes the matrix to the Modelview matrix. This is the matrix
           that moves and rotates your geometry. */
        gl.glMatrixMode( GL.GL_MODELVIEW );
        /* Load the identity matrix. Make sure the matrix is clean. */
        gl.glLoadIdentity();
    }

    /* Returns the state machine */
    public GL getGL() {
        return( gl );
    }

    public void windowDeactivated( final java.awt.event.WindowEvent p1 ) { }
    public void windowClosed( final java.awt.event.WindowEvent p1 ) { }
    public void windowDeiconified( final java.awt.event.WindowEvent p1 ) { }
    public void windowOpened( final java.awt.event.WindowEvent p1 ) { }
    public void windowIconified( final java.awt.event.WindowEvent p1 ) { }

    public void windowClosing( final java.awt.event.WindowEvent p1 ) {
        if( glc != null ) {
            /* This has to be called to release the memory associated with
               the rendering canvas. On my machine it is about 200k. */
            glc.destroy();
        }
        setVisible( false );
        dispose();
        System.exit( 1 );
    }

    public void windowActivated( final java.awt.event.WindowEvent p1 ) { }

    /* Sets up the rendering canvas. */
    public void setupGL() {
        /* GLDrawableFactory is the call you want to make to create a
           rendering canvas. There is a class called GLComponentFactory, but
           that was used in an earlier version of Magician. This call sets
           the size of the canvas to 400 x 400. */
        glc = (GLComponent)GLDrawableFactory.createGLComponent( 400, 400 );
        /* Adds the rendering canvas to the JFrame. */
        getContentPane().add( glc, BorderLayout.CENTER );
        /* You have to make the rendering canvas visible before you call
           GLComponent.initialize() or an exception will be thrown. So do a
           setVisible( true ) or a pack(). */
        pack();
        show();
        /* Gets the capabilities of the canvas. */
        GLCapabilities caps = glc.getContext().getCapabilities();
        /* Don't need to set the depth bits for this app, but here it is */
        caps.setDepthBits( 8 );
        /* Sets to RGBA pixels instead of color-indexed */
        caps.setPixelType( GLCapabilities.RGBA );
        /* Stores 32 bits per pixel. Does not change hardware settings. JRE
           1.4 is supposed to support fullscreen Java apps, so it may change
           hardware settings then. */
        caps.setColourBits( 32 );
        /* Sets up double buffering for faster rendering. Does not matter in
           this case. */
        caps.setDoubleBuffered( GLCapabilities.DOUBLEBUFFER );
        /* Sets up the listeners for the app. If you don't add the
           GLEventListener then the rendering canvas won't do anything. */
        addWindowListener( this );
        glc.addGLEventListener( this );
        /* Initializes the canvas. Make sure the GLComponent is visible on
           the screen or an exception will be thrown. */
        glc.initialize();
    }
}

#8 Members - Reputation: 122 Posted 10 October 2000 - 05:26 AM

I can't see why subclassing a JApplet would cause the problem you mentioned. JApplet is a heavyweight container, isn't it (I will need to look that up to be sure)? Nothing that I know of about subclassing the JApplet should interfere with implementing the GLEventListener interface... Like I said, I am not familiar with using Magician in applets, but if you post some code I could take a look and at least brainstorm a little on what might be the problem.

#9 Members - Reputation: 200 Posted 10 October 2000 - 07:46 AM

#10 Members - Reputation: 122 Posted 11 October 2000 - 08:16 AM

It would sure help my sorry ass out.

#11 Members - Reputation: 200 Posted 12 October 2000 - 06:01 PM

java.lang.NoClassDefFoundError: com/hermetica/magician/GLEventListener"
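The glPushMatrix()/glPopMatrix() discipline used in the template's display() method, pushing before the triangle's -2.5 shift so the quad is unaffected by it, can be modeled in plain Python. This is an illustrative sketch, not Magician or OpenGL code, and it tracks only a translation vector rather than full 4x4 matrices:

```python
# Pure-Python model of the modelview matrix-stack pattern in display() above:
# push saves the current transform, pop restores it, so the triangle's local
# -2.5 x-shift never leaks into the quad's transform. Only translation is
# tracked here, which is all the template's display() uses.

class MatrixStack:
    def __init__(self):
        self.current = (0.0, 0.0, 0.0)   # "load identity"
        self.stack = []

    def translate(self, dx, dy, dz):
        x, y, z = self.current
        self.current = (x + dx, y + dy, z + dz)

    def push(self):
        self.stack.append(self.current)

    def pop(self):
        self.current = self.stack.pop()

m = MatrixStack()
m.translate(0.0, 0.0, -2.0)    # shared move away from the viewer
m.push()
m.translate(-2.5, 0.0, 0.0)    # triangle's local shift
triangle_origin = m.current    # (-2.5, 0.0, -2.0)
m.pop()                        # quad is NOT affected by the -2.5 shift
m.push()
m.translate(2.5, 0.0, 0.0)     # quad's local shift
quad_origin = m.current        # (2.5, 0.0, -2.0)
m.pop()
print(triangle_origin, quad_origin)
```

Because the -2.0 z-translation happens before the first push, both shapes end up moved into the screen, which is exactly the behavior the comments in display() describe.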
#12 Members - Reputation: 122 Posted 13 October 2000 - 06:32 PM

I'm sure you'll help a lot of people with your information regarding Magician. Some Magician-specific information would be great to start off, so as to get those of us starting out, like myself, well versed in it before the plunge into OpenGL. Have you talked to GameDev regarding hosting the site?

#13 Members - Reputation: 200 Posted 15 October 2000 - 01:06 PM

#14 Members - Reputation: 122 Posted 15 October 2000 - 04:05 PM

But I wanna get into anything that is graphics related right away.

#15 Members - Reputation: 200 Posted 16 October 2000 - 08:31 AM
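As an aside on the reshape() discussion earlier in the thread: the matrix that glFrustum(l, r, b, t, n, f) builds is fixed by the OpenGL specification, so it can be computed in plain Python to see what the template's call produces. This is an illustrative sketch, not Magician code:

```python
# Pure-Python computation of the matrix glFrustum(l, r, b, t, n, f) multiplies
# onto the projection stack (row-major 4x4, per the OpenGL specification).
# Near and far must be positive, which is why the template passes 1.0 and 5.0.

def frustum(l, r, b, t, n, f):
    return [
        [2*n/(r-l), 0.0,        (r+l)/(r-l),  0.0],
        [0.0,       2*n/(t-b),  (t+b)/(t-b),  0.0],
        [0.0,       0.0,       -(f+n)/(f-n), -2*f*n/(f-n)],
        [0.0,       0.0,       -1.0,          0.0],
    ]

m = frustum(-2.0, 2.0, -2.0, 2.0, 1.0, 5.0)   # the call used in reshape()
print(m[0][0], m[1][1], m[2][2], m[2][3])     # 0.5 0.5 -1.5 -2.5
```

The symmetric left/right and bottom/top values make the off-diagonal terms in the first two rows zero, i.e. an ordinary centered perspective projection.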
http://www.gamedev.net/topic/27590-magician-or-gl4java/
plugin crashing

Hi Robert,

Your plugin crashed the first time I used it! Here is a screenshot of the error message: This is Gimp 2.8.4 running in Ubuntu 13.04. Hope this is useful! Really looking forward to getting this going!

Gimp:
gimp 2.8.4-1ubuntu1 gimp-data 2.8.4-1ubuntu1 gimp-data-extras 1:2.0.1-3 gimp-dcraw 1.31-1.1 gimp-dds 2.0.9-3ubuntu1 gimp-dimage-color 1.1.0-3.1ubuntu1 gimp-flegita 0.6.2-1.1ubuntu3 gimp-gap 2.6.0+dfsg-3ubuntu1 gimp-gmic 1.5.1.6+dfsg-4build1 gimp-gutenprint 5.2.9-1ubuntu1 gimp-help-common 2.6.1-1 gimp-help-en 2.6.1-1 gimp-lensfun 0.2.1-1ubuntu1 gimp-plugin-registry 5.20120621ubuntu2 gimp-resynthesizer 0.16-3 gimp-texturize 2.1-2 gtkam-gimp 0.1.18-1 libgimp2.0 2.8.4-1ubuntu1 libgimp2.0-doc 2.8.4-1ubuntu1

Python:
python2.7 2.7.4-2ubuntu3.2 python2.7-dev 2.7.4-2ubuntu3.2 python2.7-minimal 2.7.4-2ubuntu3.2 python3 3.3.1-0ubuntu1 python3-apport/raring-updates uptodate 2.9.2-0ubuntu8.3 python3-apt 0.8.8ubuntu6 python3-aptdaemon 1.0-0ubuntu9 python3-aptdaemon.gtk3widgets 1.0-0ubuntu9 python3-aptdaemon.pkcompat 1.0-0ubuntu9 python3-commandnotfound/raring-updates uptodate 0.3ubuntu7.1 python3-dbus 1.1.1-1ubuntu3 python3-defer 1.0.6-2 python3-distupgrade/raring-updates uptodate 1:0.192.13 python3-gdbm 3.3.1-0ubuntu2 python3-gi 3.8.0-2 python3-minimal 3.3.1-0ubuntu1 python3-pkg-resources 0.6.34-0ubuntu1 python3-problem-report/raring-updates uptodate 2.9.2-0ubuntu8.3 python3-software-properties 0.92.17.3 python3-uno 1:4.0.2-0ubuntu1 python3-update-manager/raring-updates uptodate 1:0.186.2 python3-xkit 0.5.0ubuntu1 python3.3 3.3.1-1ubuntu5.2 python3.3-minimal 3.3.1-1ubuntu5.2 libboost-python1.49.0 1.49.0-3.2ubuntu1 libpython-dev 2.7.4-0ubuntu1 libpython-stdlib 2.7.4-0ubuntu1 libpython2.7 2.7.4-2ubuntu3.2 libpython2.7-dev 2.7.4-2ubuntu3.2 libpython2.7-minimal 2.7.4-2ubuntu3.2 libpython2.7-stdlib 2.7.4-2ubuntu3.2 libpython3-stdlib 3.3.1-0ubuntu1 libpython3.3 3.3.1-1ubuntu5.2 libpython3.3-minimal 3.3.1-1ubuntu5.2 libpython3.3-stdlib 3.3.1-1ubuntu5.2
python 2.7.4-0ubuntu1 python-appindicator 12.10.1daily13.04.15-0ubuntu1 python-apt 0.8.8ubuntu6 python-apt-common 0.8.8ubuntu6 python-aptdaemon 1.0-0ubuntu9 python-aptdaemon.gtk3widgets 1.0-0ubuntu9 python-avahi 0.6.31-1ubuntu3 python-cairo 1.8.8-1ubuntu4 python-central 0.6.17ubuntu2 python-chardet 2.0.1-2build1 python-cups 1.9.62-0ubuntu3 python-cupshelpers 1.3.12+20130308-0ubuntu4 python-daemon 1.5.5-1ubuntu1 python-dbus 1.1.1-1ubuntu3 python-dbus-dev 1.1.1-1ubuntu3 python-debian 0.1.21+nmu2ubuntu1 python-defer 1.0.6-2 python-dev 2.7.4-0ubuntu1 python-eggtrayicon 2.25.3-12 python-gconf 2.28.1+dfsg-1build1 python-gdbm 2.7.4-0ubuntu1 python-gi 3.8.0-2 python-glade2 2.24.0-3ubuntu1 python-gnome2 2.28.1+dfsg-1build1 python-gnomekeyring 2.32.0+dfsg-2ubuntu1 python-gobject 3.8.0-2 python-gobject-2 2.28.6-11 python-gst0.10 0.10.22-3ubuntu1 python-gtk-vnc 0.5.1-1ubuntu2 python-gtk2 2.24.0-3ubuntu1 python-gtksourceview2 2.10.1-2build1 python-gtkspell 2.25.3-12 python-ibus 1.4.2-0ubuntu2 python-libuser 1:0.56.9.dfsg.1-1.2ubuntu2 python-libvirt 1.0.2-0ubuntu11.13.04.4 python-libxml2 2.9.0+dfsg1-4ubuntu4.3 python-lockfile 1:0.8-2ubuntu1 python-minimal 2.7.4-0ubuntu1 python-mpd 0.3.0-4 python-notify 0.1.1-3ubuntu1 python-pkg-resources 0.6.34-0ubuntu1 python-pycurl 7.19.0-5ubuntu8 python-pyorbit 2.24.0-6ubuntu3 python-pysqlite2 2.6.3-3 python-qt4 4.10-0ubuntu3 python-qt4-dbus 4.10-0ubuntu3 python-requests 1.1.0-1 python-sip 4.14.5-0ubuntu1 python-six 1.2.0-1 python-smbc 1.0.13-0ubuntu4 python-software-properties 0.92.17.3 python-support 1.0.15 python-tagpy 0.94.8-4 python-urlgrabber 3.9.1-4ubuntu2 python-urllib3 1.5-0ubuntu1 python-vte 1:0.28.2-5ubuntu1 python-xdg 0.25-2 python-xklavier 0.4-4

Re plugin crashing

Hi rdav,

It's nice to have some news about that plug-in. Since you have many installed Python packages, my first try at a solution is to specify which 'pygtk' to import.
So open 'autosave_a.py' with an editor, change line 31 and insert the 'pygtk.require('2.0')' line after it. Like below:

29 ...
30 import gtk, shelve, pango
31 import gettext, pygtk
->    pygtk.require('2.0')
32 from gobject import timeout_add_seconds
33 ...

If this doesn't work, keep it (it will not break anything); I will then need a more specific error message. To obtain it, start GIMP in a terminal.

Hi Robert,

Thanks for the rapid reply! The edit you suggested didn't resolve the error. The autosave_a.py file was edited thus:

import gettext, pygtk
pygtk.require('2.0')

Terminal output with unedited plugin:

$ gimp
(gimp:24806): Gimp-Widgets-CRITICAL **: gimp_device_info_set_device: assertion `(info->device == NULL && GDK_IS_DEVICE (device)) || (GDK_IS_DEVICE (info->device) && device == NULL)' failed
Traceback (most recent call last):
  File "/home/rd/.gimp-2.8/plug-ins/autosave_a.py", line 171, in
    shelf_fl = Save_recall()
  File "/home/rd/.gimp-2.8/plug-ins/autosave_a.py", line 63, in __init__
    if os.path.isfile(sys_file(self.file_shelf)): self.manage_folder_exist()
  File "/home/rd/.gimp-2.8/plug-ins/autosave_a.py", line 88, in manage_folder_exist
    if not os.path.exists(sys_folder):
  File "/usr/lib/python2.7/genericpath.py", line 18, in exists
    os.stat(path)
TypeError: coercing to Unicode: need string or buffer, NoneType found

After editing the plugin, the menu item "Plugins-Python" disappeared.
Terminal output from edited plugin:

$ gimp
(gimp:24939): Gimp-Widgets-CRITICAL **: gimp_device_info_set_device: assertion `(info->device == NULL && GDK_IS_DEVICE (device)) || (GDK_IS_DEVICE (info->device) && device == NULL)' failed
Traceback (most recent call last):
  File "/home/rd/.gimp-2.8/plug-ins/autosave_a.py", line 172, in
    shelf_fl = Save_recall()
  File "/home/rd/.gimp-2.8/plug-ins/autosave_a.py", line 64, in __init__
    if os.path.isfile(sys_file(self.file_shelf)): self.manage_folder_exist()
  File "/home/rd/.gimp-2.8/plug-ins/autosave_a.py", line 89, in manage_folder_exist
    if not os.path.exists(sys_folder):
  File "/usr/lib/python2.7/genericpath.py", line 18, in exists
    os.stat(path)
TypeError: coercing to Unicode: need string or buffer, NoneType found
(gimp:24939): LibGimpBase-WARNING **: gimp: gimp_wire_read(): error

For completeness, terminal output with no plugin:

$ gimp
(gimp:24743): Gimp-Widgets-CRITICAL **: gimp_device_info_set_device: assertion `(info->device == NULL && GDK_IS_DEVICE (device)) || (GDK_IS_DEVICE (info->device) && device == NULL)' failed

cheers

Re1 plugin crashing

Hi rdav,

Thank you, a very complete observation. From that, it seems adding the line 'if sys_folder == None: continue' after line 86 is prudent. Like below:

85 ...
86 sys_folder = sys_file(f[keyV]['dir_BU'])
->    if sys_folder == None: continue
87 if keyV[:6] == 'recall':
88 ...

Hope it gets you to the next problem!

Regards

Re2 plugin crashing

Hi rdav,

A possible explanation for the plug-in behavior is that the current folder when you called it was not local (possibly in the 'cloud'). The plug-in was constructed under the assumption of local folders only, and the protection against non-local ones is missing! Thanks for bringing it to my attention. If the current active folder was not local, change it to a local one. Then it is possible to test that explanation by deleting 'autosave1.cfg', if there, in sub-folder '.../plug-ins/autosave_a' before starting GIMP.
Regards

Re3 plugin crashing

Hi rdav,

A nice setup. As for the GIMP devs, this plug-in is not their first responsibility. In my Linux GIMP Python console:

>>> import gtk
>>> temp = gtk.FileChooserDialog()
>>> print("the current folder: "+temp.get_current_folder())
the current folder: /home/robert

In your case, I think from your feedback, the above 'print...' line will generate a 'Traceback', which means that gtk.FileChooserDialog() is unable to comply, so it returns 'None'. gtk.FileChooserDialog() is used in many places by the plug-in to select the backup folder. To ascertain that 'os.path' is not as confused by your file system as 'gtk.FileChooserDialog()', please verify that the "/home/rd/.gimp-2.8/plug-ins/autosave_a" folder (not file) is there and was not there before using 'autosave_a.py'. If yes, it may be worth updating the plug-in, but I am not sure, because it seems due to a lack of support by Ubuntu 13.04 for legacy GTK2.

Regards

Hi Robert,

The Gimp folder is already local, in the home directory as set up with a standard Linux install. The terminal output I shared with you has the directory, as in:

File "/home/rd/.gimp-2.8/plug-ins/autosave_a.py", line 171,

/home/rd is the local directory and "rd" is my user name! I don't run fancy stuff like folders in the cloud, other networked machines or external hard drives. The hard drive is partitioned like this: /, /home, swap. The only thing really different from a standard Linux install is that /home and swap are encrypted. Also, as / and /home share 125GB on an SSD hard drive on a laptop, the swap is a 12GB SD card to free up a bit of space. Linux runs this lappie pretty well, I only wish it could cool itself a lot better! Thanks for your help and diligence in making this available; your efforts toward this are impressive in the light of the fact that not even the GIMP devs have figured it out!
cheers

Hi Robert,

Yep, that solved the error dialog, but there is still no config dialog. Terminal output:

$ gimp
(gimp:6141): Gimp-Widgets-CRITICAL **: gimp_device_info_set_device: assertion `(info->device == NULL && GDK_IS_DEVICE (device)) || (GDK_IS_DEVICE (info->device) && device == NULL)' failed
Traceback (most recent call last):
  File "/usr/lib/gimp/2.0/python/gimpfu.py", line 807, in _run
    res = apply(func, params[1:])
  File "/home/rd/.gimp-2.8/plug-ins/autosave_a.py", line 908, in autosave_a
    co_au = Control_Autosave(img)
  File "/home/rd/.gimp-2.8/plug-ins/autosave_a.py", line 452, in __init__
    self.set_config()
  File "/home/rd/.gimp-2.8/plug-ins/autosave_a.py", line 474, in set_config
    self.label0.set_text(mess)
TypeError: Gtk.Label.set_text() argument 1 must be string, not None

cheers

Autosave Changes as Frames

I just noticed this script, which I didn't try yet; maybe the configuration file can already be used to get what I wish:

- Save a copy of the image at each change, where "change" may be the application of a filter, a stroke with a brush tool, or anything similar that modifies the visible image.
- "Save as" using the original file name as base (or asking for a name) + a postfix starting from 000001 (if the name chosen is "Cool Image", the output would be Cool Image_000001.xcf, Cool Image_000002.xcf, and so on).

A similar option would make it easier to record speed-paintings as animations, and to create and record as video the evolution from WIP to polished work, for tutorials and so on. Do you think it possible to include a similar option?

Re: Autosave Changes as Frames

Hi PhotoComiX,

Looking at your two wishes:

1) 'Save a copy for each change ...': This, in my opinion, would need a new structure for the plug-in, if I interpret correctly, and this is not planned. Now the plug-in saves on the basis of a regular time interval, not on internal image change.
Under that provision it is possible to save only open files which have their dirty flag set (GIMP's window title starting with *) at that time, via the existing configuration choice 'All changed'. Note that the dirty flag is not reset by the plug-in after such a backup.

2) '... using the original file name as base ...': Here, after naming your image "Cool Image" in GIMP and opening the plug-in, the configuration choice 'Launching one' will nearly do it (backup files: 'BU-ID1-Cool Image-1.xcf', 'BU-ID1-Cool Image-2.xcf', and so on).

I use GIMP for photo retouching and have no experience in speed-painting and little in animation, so that influences me. Perhaps the maximum of 'nr kept' is not enough, the minimum time interval too large, or the file extension choice insufficient?

For wish 2) it seems very possible.
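The two mechanisms discussed in this thread, guarding a possibly-None backup folder before calling os.path (the source of the "NoneType found" TypeError in the tracebacks) and the 'Launching one' naming scheme, can be sketched together in plain Python. This is illustrative only, not the plug-in's actual code; the function names are invented and the name format is inferred from the reply above:

```python
import os

# Hypothetical sketch, not autosave_a.py's real code: skip unset (None)
# folders before touching os.path -- os.path.exists(None) raises exactly the
# "coercing to Unicode ... NoneType found" TypeError seen in the tracebacks --
# and build backup names in the 'BU-ID...' style described above.

def usable_folder(folder):
    """True only for a set, existing folder (guards the None case)."""
    return folder is not None and os.path.exists(folder)

def backup_name(image_id, base_name, counter, ext="xcf"):
    """E.g. 'BU-ID1-Cool Image-1.xcf', 'BU-ID1-Cool Image-2.xcf', ..."""
    return "BU-ID{0}-{1}-{2}.{3}".format(image_id, base_name, counter, ext)

print(usable_folder(None))                 # False, instead of a TypeError
print(usable_folder(os.getcwd()))          # True
print(backup_name(1, "Cool Image", 1))     # BU-ID1-Cool Image-1.xcf
```

The `folder is not None` check short-circuits before os.path.exists is ever called with None, which is the same effect as the 'if sys_folder == None: continue' line suggested earlier in the thread.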
http://registry.gimp.org/node/26115/delete
EDITORIAL

Kashmir is not Islamic war or jihad
Men, Matter, Memories, By M L Kotru
No matter what the outcome of Foreign Secretary Chokila Iyer's meeting in Colombo last week with her Pakistani ...more

Communist contortions of nationalism
Yours Randomly, By Dr R L Bhat
Indian nationalism, which had been an incipient undercurrent of the sub-continental psyche, received its highest ...more

Militancy and children in J&K
By Daya Sagar
As regards the State of J&K, the Armed Conflicts need to be seen in reference to the ongoing insurgency and militancy. ...more

Implications of downgrading India's sovereign ratings
By S. V. Vaidyanathan
The downgrading of India's sovereign rating by Standard and Poor's followed ...more

It is a common belief that Dr Guru, the one-time high priest of 'separatism', would not have died had he been under security protection. As for the other leaders, they have been able to function only because of the exertions of the security agencies. Of course, all deaths have not been, could not be, prevented, but the fact cannot be denied that but for their presence many more prominences would have been wiped out by the same marauders. And the free expression they are enjoying is because of these very exertions. Do the militants allow any free expression to their own 'leaders', much less the fractious leaders who deal in subterfuges? Many may say that it is providence protecting them, though at the same time they would never hesitate to ask for more and more security cover. They would do well to keep account of the personnel who have been killed in the militant attacks upon them. That way they may at least pay some of their due debts to these men, who have sacrificed their lives to protect them. Of course, their debts to the Indian nation will never be paid, because they are simply overwhelming. They can, however, stop adding to these debts by being less political and more sincere in their assertions against India.
There is a school of thought that holds that the 'separatist leaders' would never dream of seceding from India. They are fully aware how free, how equitable, how caring the Indian dispensation is. They are also cognizant of how chilling the Pak situation is, with its dictatorships, general mayhem and restricted freedoms. The separatist rhetoric, explains the school, has two objectives. First and foremost is to gain prominence, support and 'constituencies', both within and outside the State. The second objective is that by constantly raising the bogey they coerce the centre to yield higher quantums of assistance on the one hand, and on the other escape any proper reckoning of their actions and performances. Of course, that is when they assume power, and every outfit comes to do that in due course. That is why, says this analysis, they never go the whole hog, would never go the whole hog. They can never gain prominence and power in Pakistan that easily. And few can be sure that they would not sooner be hanged from the Lahore Fort, as Farooq Abdullah says. Indeed, much of the talk of 'regional and local aspirations' in the whole of India is a camouflage for personal promotion as much as a contrivance to achieve those ends. Whole parties have exploited the people and the nation thus to rise to power. When these same 'aspirations' find their support base growing, they invariably become 'national parties'. From NTR to Mulayam Singh there is one long line of politicians who have thus deluded the people and done the nation much harm in the process. In this State this short cut to power and prominence has taken really dangerous dimensions, with the leaders ending up fanning whole disaffections, because the innocent people, who do not see through the 'assertions' of the clever leaders, take them at their word and end up with a destabilised psyche.
When the right time comes, the 'leaders' without any qualms take the reins of power, because they 'know' what they have been fighting for. But the people get badly dispirited. And the ground is ripe for the 'next' crusader to step in, to exploit the people, the State and the nation. Without any qualms, of course. Instead, they have been fed to a war that serves only Pak interest and pampers its ego of having somehow 'taught India a lesson' or, as Musharraf put it, 'taken revenge upon India'. As the recent report details, thirteen thousand of them have been killed since 1990. Five hundred have died in just the inter-gang rivalries, which also claimed about two hundred of their relatives. But when the mad-hatters die, they never die alone. The thirteen thousand took another ten thousand civilian lives with them, and three thousand security men. Twenty-six thousand deaths from a people who would not, earlier, go near a gun may be an achievement for Pakistan, but what does it bring to Kashmiris? Once, in the early nineties, a Kashmiri young man asked his militant peer how come the same gun that he had been given to fight a jihad with was in the hands of an Assamese fighting against Bangladesh immigrants and in that of a Naga fighting for a tribalism. The answer was typical: it is not for you to ask questions. None asked questions, and arms, huge piles of them, were slipped into the Valley. Twenty-two thousand AK-type guns have so far been recovered, ten thousand pistols, one thousand machine guns. The general impression is that this is just the tip of the proverbial iceberg, that lots more remain dumped there. Indeed, what was once an immaculate paradise replete with sagacious men and svelte women is a powder keg ready to explode, literally: so much explosive power is hidden there. So far 23 tonnes of explosives and 52 quintals of RDX have been recovered. How much more lies strewn around is anybody's guess. The supplies have been massive; the inventory has been equally varied.
Everything from cordex wire to rocket boosters and pilotless craft has been seized in colossal quantities. All that has shown its effects in the deathly destruction that has been there. A quarter lakh deaths because a mullah somewhere wants to gain access to 'heaven', a general there wants to gain power and keep it, and a people there has gone deluded enough not to think of, nor approve of, any other way to direct their national effort and energies. Pakistan has been doing what the composite delusions there demand, but do the Kashmiris need to be willing tools for carrying them out? Over the past ten years Pakistan has purchased arms worth 4.4 billion dollars from the open markets and sent them across the border to the willing agents of its egoistical, hallucinatory binges. And given them death and destruction in their wake. Of course, the situation in Pakistan itself is no better. They have themselves recovered three lakh rifles during the past year or so. That State is riven with conflicts, strewn with guns and gang-wars, but why did the peaceful Kashmiris need to import that culture? To taste death at its foulest?

Kashmir is not Islamic war or jihad
Men, Matter, Memories
By M L Kotru

No matter what the outcome of Foreign Secretary Chokila Iyer's meeting in Colombo last week with her Pakistani counterpart, Prime Minister Atal Behari Vajpayee did well earlier to set the record straight so far as his agenda in any future talks with the Pakistani military ruler Gen Pervez Musharraf goes. It was vintage Vajpayee that afternoon in Parliament when, days after the inquisitors had had their say, accusing him of everything short of a sell-out, he spelt out his version of Agra in clear-cut tones. In the process he explained what he made of Musharraf's naiveté, what his own expectations had been and how he felt let down in the end.
If some thought that it was the usual soft-pedalling of what was perhaps a diplomatic disaster at Agra, Vajpayee was very clear in his mind that the reality was far from that perception. Predictably he did not shut the door on resumption of talks with Musharraf, but he made no bones about what the agenda would be whenever they meet next. It could not be Kashmir, Kashmir, Kashmir. A future dialogue on Indo-Pak relations could not be held hostage to Musharraf's whims. There is much more at stake in Kashmir than merely winning or losing media wars. Kashmir is not an Islamic war or jihad. It has a direct bearing on the future of the sub-continental polity. It is for Pakistan to decide whether it wants to use Kashmir as a bridge or continue to make it a bone of contention. Either way, Vajpayee appeared to suggest, there would be no solution to the problem on terms sought to be laid down by Musharraf and his men.

The problem with Musharraf is that he speaks with two voices. One voice is obviously intended to earn brownie points internationally, the one that seeks to project him as a man of peace, willing to go the extra mile along the path of reconciliation with an inimical India. The other voice is fine-tuned to remain in step with the mullahs and radicals within his country, which forces him to take the line that there is no terrorism in Kashmir and that Pakistan has nothing to do with cross-border terror. For the benefit of the international community he has kept up the charade of local elections as a lead-up to higher levels nationally. The local elections may have been held on a non-party basis, but to his consternation most grassroots political parties are back in place.
That certainly will not prevent Musharraf from going ahead with raising an Ayub Khan-like electoral college to finally gain domestic legitimacy for himself, or he may even do a Zia by going ahead with a non-party election at the top layer as well and then handpick a Prime Minister who (like Zia's Junejo) could be kicked out any time. Pakistani military dictators have not been known to accept court orders, nor even the pledges they themselves make. Musharraf, for the record, has another year to usher in democracy in his country. The question is what kind of democracy is it going to be. If he is to stick to his earlier Ataturkian posturing he must keep the mullahs out of the reckoning. Yes, that will mean continued reliance on the military in matters of State, but that's the only way he can do an Ataturk. The mullahs and the clergy would have to be sidelined from the electoral process in that event. Can he afford to do it? He can, if his commanders stay loyal and he shows the necessary ruthlessness to put down the mullahs. But given the Kashmir backdrop it seems unlikely he can get the mullahs and their jihadi groups off his back. He needs them if he has to keep the Kashmir pot boiling.

Only the other day one of the better known Pakistani jihadi leaders was hectoring India on Kashmir on the American CNN. As a counter to ''Indian barbarity'' in Kashmir, said the beard on the idiot box, his own and other jihadi groups would strike in the Indian heartland and target top Indian political and military leaders. And he said it with the straightest of faces. For the rest you have to listen to the rantings of the Lahore-based Dawarul Irshad and its armed wing, Lashkar-e-Toiba, or to the top leaders of Harkatul Mujahideen and the Jamaat-e-Islami. They give you the impression that Pakistan is about to launch a massive terrorist assault all over India.
One of the jihadi outfits has welcomed the emergence in India of fanatic fringe organisations like the Bajrang Dal, the Shiv Sena and the VHP, promising to one day pit the might of Islamic forces against the Hindu chauvinists. One doesn't have to really refute the absurd position of these Hindu ''organisations''; the fact is that their noise level is inversely proportional to their marginalised position in the overall Indian picture. In Musharraf's Pakistan the jihadis have spread their tentacles all over the land and, what's more, they seem to be prospering. And, more disturbingly for us, they seem to be pushing in as many mercenaries into Jammu and Kashmir as they can. The jihadi rhetoric at the moment appears to have gained much ascendancy across the border, and its ramifications are becoming visible in Jammu and Kashmir, with terrorists throwing acid on innocent women students and insisting on men sporting beards and wearing salwar kameez a la Taliban. How absurd must all this appear to young Kashmiri men and women when they see, courtesy PTV, their Pakistani counterparts (lumpen apart) in Lahore and Karachi sporting trendy clothes, doing song and dance routines, walking down the ramps.

And this finally brings me to the ''bold'' and ''dramatic'' (''draconian'', if you will) decisions taken in New Delhi the other day about how to tackle terrorist activity. If merely by declaring certain parts of the State a disturbed area the problem of terrorism could be solved, then the problem in the Kashmir province should have ended many years ago, when district after district was declared a disturbed area. By extending the ''disturbed'' net to Jammu province, I am afraid, the problem will not be solved. It will only give an incompetent State Government further tools to entrench itself in power. Under a similar earlier dispensation in Kashmir a unified command comprising the Army, Border Security Force, CRPF etc has already been in existence for long.
Top commanders of the security forces form the core of the command centre with the Chief Minister at its head. What we have seen of the functioning of the command has been hardly encouraging. For one thing, a peripatetic Farooq Abdullah is usually unable to chair the unified command meetings; second, inter-service rivalries, dictated more by rank than substance, rarely permit cohesive action. I don't see the situation changing in Jammu. If the Army is to be at the heart of counter-terrorist activity, it follows that it should have the major say in policy, though of course not to the exclusion of the views of other para-military or civilian forces engaged in the operation. The Chief Minister should come in only when the political aspects of a particular operation are involved. For the rest, once the task has been identified, the execution should be left to the field commanders. The security forces should in no case be converted into handmaidens of the discredited political leadership in the State.

We have to remember that Pervez Musharraf, the commando, has given no indication of his willingness to rein in the terrorists. His drum-beaters have been going round the world once again plugging the line that terrorism in Kashmir is in reality a freedom movement and that no Pakistanis are engaged in it. Were it really an indigenous problem, we would not have been bombarded day after day with those fiery jihadi invocations from Pakistani soil, aired (with nauseating fudging of video tapes) by PTV with unfailing regularity. If the terrorist threat in Jammu and Kashmir is to be countered effectively, the unified command will have to become a reality. Military decisions should be left to the men from the military. The State government should concentrate on addressing the problems faced by the common people, which it unfortunately has failed to do so far.
I have a gut feeling that the raising of additional village defence committees, hopefully better armed too, is not going to solve the problem. It will become another vehicle for politicians to promote their vested interests.

Communist contortions of nationalism
Yours Randomly, By Dr R L Bhat

Indian nationalism, which had been an incipient undercurrent of the sub-continental psyche, received its highest assertion during the first half of the twentieth century. Most probably, the Muslims would have been an inalienable part of it. Indeed, the Urdu Press till 1920 comes through as a strong supporter of the nationalist urge. By then it had probably caught up with the high priests of religious nationalism, Sir Syed et al., and by the forties, when freedom looked imminent, it became a rabid communalist organ for the propagation of whatever Jinnah proclaimed from the Muslim League platforms. All this ran counter to the nationalist fervor that was forcing imperialism out of the country. Its clear aim was to stem the rising nationalism. Though the realization of the nationalist urges was too deep to be drowned by this effort, it did succeed in denting the edge. A section of the people was finally persuaded not to be a part of the nationalism. Of course, the greatest beneficiaries of this fracture were the English rulers. They had been fostering, if not actually orchestrating, this effort from the days of the Aligarh movement.

The other movement that came to subvert the idea of Indian nationalism during the forties and the following decades was communism. This was a movement that was actually suppressed by the British in its formative years. But by the forties a realignment of the forces in Europe had taken place. Stalin, who at one time had been an ally of Hitler, became a partner in the Anglo-American effort against the Axis powers after Germany invaded Russia. Indian communism received orders to support the empire, and the faithful followers shifted their allegiance.
Actions of the communists during this time against the freedom struggle have been well documented. But the contortions this movement imposed upon Indian nationalism after independence have not received equal notice. Over the past half a century they have been undermining the idea of Indian nationalism, ridiculing it, loosening every brick in its foundation as if in a grand conspiracy. It is said that they have mostly been well-meaning souls with the good of the Indian people at heart, but then, as the well known axiom says, more calamities have been brought about in this world by sincere intentions than by wicked ones. They did it all from the high pedestal of an ideology. Their motivation was promotion of the ideology that they thought good, rather than wholesome, unprejudiced pursuit of knowledge. That was an age when the communist ideology was synonymous with liberation, humanism, emancipation... indeed, all that has been good in human thought. The oppression of the centuries weighed heavy on the minds of the people. The poor came to be easily perceived as an entity, a class. Generations of thinkers have slid down this slope to land in a milieu that rejected society, its norms and conventions, and nationalism, its premises and parameters, in favour of an ideology that was supposed to be universal. When it was not clear communism, it was socialism or a loose leftist leaning. People tended to distinguish the three, but they were only phases, stages along the steep climb up to the pure Marxist ideology. Only it was neither pure nor an ideology. Its purity became suspect when the Comintern, the international communist organization, became an organ for advancing Russian interests. Its ideology betrayed itself when it aligned with fascism, Nazism and then imperialism for short-term benefits. But its blithe adherents here, and in many other places too, saw none of these.
They didn't see the clear rejection of universalism when the Russian and Chinese influences (interests) carved the Indian communists into two mutually antagonistic groups. But the most glaring blindness was shown with respect to the nation here. When the communist movement failed to become a nation-wide force, it set about fanning regional and sectarian identities to gain pockets of influence. All the 'regional leaders' who have over the years come to represent local aspirations have been stranded there by the communist, socialist or leftist boats. They remain covert negators of Indian nationalism. On the other hand, thousands of other communist 'recruits', academicians, philosophers and social scientists embarked upon a vast project to belabor every symbol of national identity. Indian society, which had resisted the Islamic and Christian wedges and had fought the British efforts to split it for their imperial ends, was cleaved vertically by those soft-spoken seers who saw no good anywhere there. There were negative influences in the Indian society for sure, but it had a far greater positive content. It is this positive fund that has sustained this civilization over its long history and prevented its being effaced like its fellow travelers. It was also the reason why communism had not succeeded here. So its foot soldiers took to dismantling the society, bit by bit, part by part, all through its languages, cultures, social structure, literature and art, philosophy... everything that came handy. Indian philosophy was 'proved' to be 'nothing', the society 'exploitative', the mores were 'shown' to be 'wrong' and every achievement was soundly trashed. The truth that contemporaneous societies and polities in other parts of the world were nowhere near the Indian standards but were downright brutal was never told. The communist effort simply castigated the Indian society, culture and ethos. It could not but dampen the nationalism. It did so.
The nationalism that was so emphatic at the start of independence has almost receded into that incipient mode that had seen the nation divided into countless 'identities' and become easy prey for invaders to trample it under their feet. If today the nation is searching for elements to assert its idea, it is thanks to these denigrators who have caused that fervor to be dissipated, knowingly or in ignorance.

Militancy and children in J&K
By Daya Sagar

As regards the State of J&K, the armed conflicts need to be seen in reference to the ongoing insurgency and militancy, more particularly after 1989-90. The psychological, physical and social angles should be kept in view while looking at the child groups in J&K. They are from those parents who are migrants, like the majority of Kashmiri Pandits who have left the Kashmir Valley, Hindus who have left their homes in the districts of Doda, Udhampur and Rajouri and the upper reaches, as well as those, including even Muslims of the Kashmir Valley, who have left remote villages and come to the towns of the Kashmir Division of J&K. This category is under the influence of terror. They suffer from poor health. They are in many cases under psychological depression. The environment and memories of the times of their displacement have in some cases polluted their minds socially. Those who are still living back in the areas under regular armed attacks by the insurgents and militants, and actions by the security forces, too are adversely affected. They suffer from: (a) heavy stress, (b) depression, (c) poor health, (d) increased infant deaths, (e) mothers under heavy fear psychosis, thereby retarding the growth of babies both physically and mentally. Children who get injured or hit as a result of the ongoing armed conflicts and belong to poor parents from remote areas are the worst sufferers, since medical aid is scarcely available to them; hence the handicaps produce more effects on them, and they have to bear them for decades ahead.
The sick children fail to get immediate medical attention due to restrictions imposed on movement by the insurgency and combat operations, more particularly when located in far-flung and backward areas. Even a high fever could make a child lose his life or become physically or mentally disabled for life. And cases have surely been there. Children of poor parents are mostly the ones who suffer physically and educationally. The resourceful parents have mostly moved their children from the disturbed areas (even outside J&K) and have secured quality education for such children. But those from poor parents have stayed back in militancy affected areas, where schools remain closed and they do not have peace at home for the studies required in today's competitive world. The worst affected are (a) children of poor parents, (b) children from backward areas, (c) children from far-flung areas, (d) children from rural areas, (e) orphans, (f) those living among one particular religious community, and (g) those who see their near and dear ones killed simply on the basis of their community or religion. The worst damage to innocent minds (children) lies in... lifelong hatred in their minds against those who made them leave their homes, or killed their parents, or subjected them to physical stress, particularly when it turns towards those innocents who unfortunately belong to the religion or the community of the militants or insurgents, thereby damaging the social matrix of the society. And similarly the minds of the young of the other community get polluted when there is no one near them to protect them from the might and wrong information campaigns of the insurgents against the members of the society, as well as against the security forces, who may belong to the other community. They have been made handicapped physically, educationally, economically and in their capacity to build their future.
They have been socially pushed back; those who have lived through 12 years of armed conflict have very little capacity to take to good economic means of living, and there are all chances of exploitation of such grown-up children by the militants, criminals and religious fundamentalists. The problem has hence to be seen more among the backward, rural, poor, illiterate and innocent people who have been much exploited by their misfortunes and could still be exploited by those who have no regard for human values.

Implications of downgrading India's sovereign ratings
By S. V. Vaidyanathan

The downgrading of India's sovereign rating by Standard and Poor's, followed by a revision of the foreign currency rating outlook for corporate "daddies" such as Reliance Industries, Indian Oil Corporation, NTPC and Larsen & Toubro, is a warning signal to the mandarins of the North Block. What is new in S&P's rating? And in Moody's? Continued fiscal slippages at the Centre and in the States are not news, nor is the plunge in tax revenue collections in the first quarter this year. Who does not know that the Vajpayee government is too deeply caught in day-to-day fire-fighting to spare much time for further reforms in the economy? Nor is Moody's perturbation with the emergence of scams in the financial sector so much of a profoundly unknown dimension of the Indian economy. The reported reactions of the Finance Minister, Mr. Yashwant Sinha, and the Governor of the Reserve Bank of India, Dr. Bimal Jalan, and the not-so-coincidental "dismissive" mode of the stock and forex markets, are not mere manifestations of some bravado or sheer unwillingness to listen to the voices of the credit-rating agencies. There is no flaw in the perception that the fundamentals of the economy continue to be strong relative to the actualities of many other developing countries, including the erstwhile "tiger economies".
Forex reserves of $43.6 billion are a "big deal" for a country that faced the ignominy of an external debt default only about ten years ago! A GDP growth rate that may cross 6 per cent this year and an inflation rate of just around 6 per cent in terms of the Consumer Price Index are not precisely the pointers to a meltdown! With the data system in the economy becoming much more transparent than in the pre-1991 period, credit-rating itself, perhaps, is becoming a "sunset profession." The policymakers may not be far wrong in treating S&P's, Moody's and company and their qualitative assessments of the economy as being premature or even tendentious. Yet, it would be foolhardy for the policy establishment not to see in these "downgrading messages" a growing apprehension among international investment analysts that India is rapidly forfeiting its credentials as an emerging "favourite" among investment destinations in the developing world. The Enron tangle is enough to proclaim India a classic example of the "collapse of a reform agenda," regardless of who, between Enron and the Maharashtra Electricity Board, is "more wrong". If the truth is that the FDI scenario for India is quite bleak, we do not need S&P's to "break it" for us!

Leave alone the FDI component in the external sector; the tidings from the export front are not too comforting. When the U.S. economy catches a cold, the world is set on a bout of sneezing, or so goes the new wisdom on globalisation. India's exports to the U.S. account for hardly 2 per cent of the total imports of the U.S. Yet, to the extent that for India these amount to 20 per cent of its global exports, any shrinkage presages a disproportionate order of dislocation to its own economy. The slowdown in the U.S. economy and its reverberations across the developed world are obviously bound to impact adversely on the Asian economies as well.
Going by the first quarter data covering April-June 2001 for South Korea, Thailand, Singapore, Malaysia and the Philippines, Japan constitutes a mega-crisis by itself, with the government largely benumbed by the economic slowdown, not knowing where to draw the line between a cheap money policy and an expansionist fiscal stance. The first two months of the current fiscal year, April and May, showed India's exports growing at a deflated rate of around 5 per cent per annum, in marked contrast to 30 per cent in the corresponding period in 2000-01. The provisional data for June, now available, show that there was an actual decline of 4.6 per cent, so much so that exports during the first quarter have seen a dismal slide of around 1.8 per cent. The Centre for Monitoring Indian Economy (CMIE), Mumbai, has now projected total exports for 2001-02 at $47.5 billion, a growth of 7 per cent as against the target of 18 per cent set by the Union Ministry of Commerce. While the sceptics have a point in forecasting a meltdown in exports in view of a perceived recession in the U.S. and in Europe, it is not implausible that the lacklustre export performance of India during April-June has had more to do with domestic economic uncertainties than with the U.S. economy's hiccups. Although preliminary indications are that the slowdown in Indian exports has covered the whole gamut of exports (agricultural commodities, engineering goods, textiles, handicrafts, and gems and jewellery), a closer examination of the phenomenon by the export promotion councils and the commodity boards could perhaps yield meaningful clues as to how the reversal of a year of buoyant growth at 21 per cent has occurred, and that too so abruptly. In a dynamic global trade situation, aberrations rather than deep-seated disequilibria can easily confuse the exporting community and the policymakers.
Given the fact that there has been a lot of vacuous politicking during the last three months and little pragmatic implementation of the proposals set out in Mr. Sinha's budget for 2001-02, the economy itself would appear to be in a deep freeze, with industrial production continuing to chug along at a "3 per cent minus" rate and consumer demand remaining subdued. It is an altogether depressing scenario, with manufacturing enterprises lying low and the investment outlook continuing to dampen the entrepreneurs. That the export horizons have been clouded owing to the overall inertia in the economy and the dominant mood of "wait and see" needs to be underscored. The perennial question of export competitiveness, involving transaction costs, the exchange rate of the rupee and the range of issues relating to the infrastructure, particularly the turnaround performance of ports, continues to call for coordinated action. Nothing much has occurred in the matter of special economic zones (SEZs), which the Union Minister for Commerce, Mr. Murasoli Maran, rightly advocates as the thrust area of export policy for the new decade. The fact is that not many State governments are alive to export growth as a powerful stimulus to the regional economy. With all its political "fire-fighting" missions, the Vajpayee Government is perilously close to achieving the dubious distinction of a non-performing government!
http://www.dailyexcelsior.com/01aug18/edit.htm
A Subtle Case Sensitivity Gotcha with Regular Expressions

Some people, when confronted with a problem, think “I know, I’ll use regular expressions.” Now they have two problems. - Jamie Zawinski

Other people, when confronted with writing a blog post about regular expressions, think “I know, I’ll quote that Jamie Zawinski quote!” It’s the go-to quote about regular expressions, but it’s probably no surprise that it’s often taken out of context. Back in 2006, Jeffrey Friedl tracked down the original context of this statement in a fine piece of “pointless” detective work. The original point, as you might guess, is a warning against trying to shoehorn regular expressions into solving problems they’re not appropriate for. As XKCD noted, regular expressions used in the right context can save the day! If Jeffrey Friedl’s name sounds familiar to you, it’s probably because he’s the author of the definitive book on regular expressions, Mastering Regular Expressions. After reading this book, I felt like the hero in the XKCD comic, ready to save the day with regular expressions.

The Setup

This particular post is about a situation where Jamie’s regular expressions prophecy came true. In using regular expressions, I discovered a subtle unexpected behavior that could have led to a security vulnerability. To set the stage, I was working on a regular expression to test whether potential GitHub usernames are valid. A GitHub username may only consist of alphanumeric characters. (The actual task I was doing was a bit more complicated than what I’m presenting here, but for the purposes of the point I’m making, this simplification will do.) For example, here’s my first take at it: ^[a-z0-9]+$. Let’s test this expression against the username shiftkey (a fine co-worker of mine). Note, these examples assume you import the System.Text.RegularExpressions namespace like so: using System.Text.RegularExpressions; in C#.
You can run these examples online using CSharpPad, just be sure to output the statement to the console. Or you can use RegexStorm.net to test out the .NET regular expression engine.

Regex.IsMatch("shiftkey", "^[a-z0-9]+$"); // true

Great! As expected, shiftkey is a valid username. You might be wondering why GitHub restricts usernames to the latin alphabet a-z. I wasn’t around for the initial decision, but my guess is that it protects against confusing lookalikes. For example, someone could use a character that looks like an i and make me think they are shiftkey when in fact they are shıftkey. Depending on the font, or whether someone is in a hurry, the two could be easily confused. So let’s test this out.

Regex.IsMatch("shıftkey", "^[a-z0-9]+$"); // false

Ah good! Our regular expression correctly identifies that as an invalid username. We’re golden. But no, we have another problem! Usernames on GitHub are case insensitive!

Regex.IsMatch("ShiftKey", "^[a-z0-9]+$"); // false, but this should be valid

Ok, that’s easy enough to fix. We can simply supply an option to make the regular expression case insensitive.

Regex.IsMatch("ShiftKey", "^[a-z0-9]+$", RegexOptions.IgnoreCase); // true

Ahhh, now harmony is restored and everything is back in order. Or is it?

The Subtle Unexpected Behavior Strikes

Suppose our resident shiftkey imposter returns again.

Regex.IsMatch("ShİftKey", "^[a-z0-9]+$", RegexOptions.IgnoreCase); // true, DOH!

Foiled! Well that was entirely unexpected! What is going on here? It’s the Turkish İ problem all over again, but in a unique form. I wrote about this problem in 2012 in the post The Turkish İ Problem and Why You Should Care. That post focused on issues with the Turkish İ and string comparisons. The tl;dr summary is that the uppercase for i in English is I (note the lack of a dot), but in Turkish it’s dotted, İ. So while we have two i’s (upper and lower), they have four. This feels like a bug to me, but I’m not entirely sure.
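As a quick side check (this snippet is an editorial illustration, not from the original post), the four i's are easy to see with JavaScript's default, root-locale Unicode case mappings:

```javascript
// The four "i"s: ASCII i/I, plus Turkish dotless ı (U+0131) and dotted İ (U+0130).
// toUpperCase/toLowerCase use the Unicode Default Case Conversion.
const dotlessUpper = 'ı'.toUpperCase(); // U+0131 uppercases to plain ASCII 'I'
const dottedLower = 'İ'.toLowerCase();  // U+0130 lowercases to 'i' plus a combining dot above

console.log(dotlessUpper === 'I'); // true
console.log(dottedLower === 'i');  // false: it is two code points, U+0069 U+0307
```

So a naive "uppercase both sides and compare" check would happily equate shıftkey with shiftkey, which is exactly the lookalike problem the username rule is guarding against.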
It’s definitely a surprising and unexpected behavior that could lead to subtle security vulnerabilities. I tried this with a few other languages to see what would happen. Maybe this is totally normal behavior. Here’s the regular expression literal I’m using for each of these test cases: /^[a-z0-9]+$/i The key thing to note is that the /i at the end is a regular expression option that specifies a case insensitive match. /^[a-z0-9]+$/i.test('ShİftKey'); // false The same with Ruby. Note that the double negation is to force this method to return true or false rather than nil or a MatchData instance. !!/^[a-z0-9]+$/i.match("ShİftKey") # false And just for kicks, let’s try Zawinski’s favorite language, Perl. if ("ShİftKey" =~ /^[a-z0-9]+$/i) { print "true"; } else { print "false"; # <--- Ends up here } As I expected, these did not match ShİftKey but did match ShIftKey, contrary to the C# behavior. I also tried these tests with my machine set to the Turkish culture just in case something else weird is going on. It seems like .NET is the only one that behaves in this unexpected manner. Though to be fair, I didn’t conduct an exhaustive experiment of popular languages. The Fix Fortunately, in the .NET case, there’s two simple ways to fix this. Regex.IsMatch("ShİftKey", "^[a-zA-Z0-9]+$"); // false Regex.IsMatch("ShİftKey", "^[a-z0-9]+$", RegexOptions.IgnoreCase | RegexOptions.CultureInvariant); // false In the first case, we just explicitly specify capital A through Z and remove the IgnoreCase option. In the second case, we use the CultureInvariant regular expression option. Per the documentation, By default, when the regular expression engine performs case-insensitive comparisons, it uses the casing conventions of the current culture to determine equivalent uppercase and lowercase characters. The documentation even notes the Turkish I problem.. It may be that the other regular expression engines are culturally invariant by default when ignoring case. 
That seems like the correct default to me.

While writing this post, I used several helpful online utilities to test the regular expressions in multiple languages.

Useful online tools

- provides a REPL for multiple languages such as Ruby, JavaScript, C#, Python, Go, and LOLCODE, among many others.
- is a Perl REPL, since that last site did not include Perl.
- is a regular expression tester that uses the .NET regex engine.
- allows testing regular expressions using PHP, JavaScript, and Python engines.
- allows testing using the Ruby regular expression engine.
http://haacked.com/archive/2016/02/29/regex-turkish-i/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+haacked+%28you%27ve+been+HAACKED%29
Take the following code:

#include <stdlib.h>

int main(int argc, char **argv, char **envp)
{
    int sum;
    atoi(&argv[1]); // statement with no effect; passing argument 1 of 'atoi' from incompatible pointer type
    atoi(&argv[2]); // statement with no effect; passing argument 1 of 'atoi' from incompatible pointer type
    sum = argv[1] + argv[2]; // error: invalid operands to binary +
    return sum;
}

The lines with atoi(&argv[integer]) give me two warnings each: "statement with no effect" and "passing argument 1 of 'atoi' from incompatible pointer type". The line where I try to place the two argv members into a single integer by using addition gives me the error "invalid operands to binary". Can anyone please explain to me why the compiler gives me this error and those four warnings? Thank-you.
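For contrast, the behavior the code above is reaching for, converting two command-line arguments to integers and summing them, looks like this in Python. This is just an illustration of the intent, not an answer from the thread:

```python
import sys

def sum_args(argv):
    # argv[1] and argv[2] are strings; convert each one to an int before adding
    return int(argv[1]) + int(argv[2])

if __name__ == '__main__':
    # a hand-built argv for illustration; normally you would pass sys.argv
    print(sum_args(['prog', '3', '4']))  # 7
```

The key point carried over from C: the arguments arrive as strings, and each string (not a pointer to it) must be converted before arithmetic makes sense.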
https://www.daniweb.com/programming/software-development/threads/130587/problem-using-atoi-for-my-argv-argument
Converts a string made up of UTF-8 characters to upper case.

#include "slapi-plugin.h"
unsigned char *slapi_UTF8STRTOUPPER(char *s);

This function takes the following parameter:

s: A NULL-terminated UTF-8 string to be converted to upper case.

This function returns a NULL-terminated UTF-8 string whose characters are converted to upper case. Characters which are not lower case are copied as is. If the string is not considered to be a UTF-8 string, this function returns NULL. The output string is allocated in this function, and needs to be released when it is no longer used.
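For comparison only (this is not the SLAPI implementation), the same operation on a UTF-8 byte string can be sketched in Python, including the documented behavior of returning NULL for invalid UTF-8:

```python
def utf8_str_to_upper(s: bytes):
    """Upper-case a UTF-8 byte string. Characters that are not lower
    case are copied as is; returns None (the NULL analogue) if the
    input is not valid UTF-8."""
    try:
        text = s.decode('utf-8')
    except UnicodeDecodeError:
        return None
    return text.upper().encode('utf-8')

print(utf8_str_to_upper('über'.encode('utf-8')))  # b'\xc3\x9cBER' ('ÜBER')
print(utf8_str_to_upper(b'\xff'))                 # None: not valid UTF-8
```

Note that, unlike the C function, Python handles the allocation and release of the result automatically.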
https://docs.oracle.com/cd/E19693-01/819-0996/aaiog/index.html
I just started to get a grasp of classes. I can't find my answer anywhere, and I hope someone on here can answer it. I am trying to simulate a battle USING CLASSES. Everywhere I look on the net, people in C++ don't use classes to simulate battles (you could write the code in C), so I can't find any good examples! I want to pass two DIFFERENT objects in one function. Note: the objects are under the same class. If I make no sense, just ignore me :-x

Edited by dadam88: n/a

#include <cstdlib>
#include <iostream>
using namespace std;

class player
{
private:
    char name[50];
public:
    int hp, atk;
    int playernum;

    void getinfo()
    {
        cout << "Please enter your username:\t";
        cin >> name;
    }

    void spitinfo()
    {
        cout << "Player " << playernum << " is:\t" << name << endl;
        cin.get();
    }

    void getstats()
    {
        cout << "Player " << playernum << "'s stats are:";
        cout << "HP : " << hp << " ATK : " << atk << endl;
    }

    // I would like to run player 1 and player 2 so I can do p1atk p2atk p1hp p2hp
    void fight(int p1, int p2)
    {
        cout << "Player " << playernum << " is attacking player " << playernum << endl;
        cout << "Player " << playernum << " dealt " << atk << " damage!" << endl;
    }
};

int main()
{
    player one;
    player two;

    one.playernum = 1;
    two.playernum = 2;
    one.hp = 50;
    one.atk = 10;
    two.hp = 100;
    two.atk = 5;

    one.getinfo();
    two.getinfo();
    one.spitinfo();
    two.spitinfo();
    one.getstats();
    two.getstats();
    // To pass them into a function: I want to pass player one's and player two's data
    // through this function and then convert their data into other variables to make
    // the code clearer and easier
    one.fight(1, 2);

    cin.get();
    return EXIT_SUCCESS;
}

I know it probably looks retarded, but hey, I have to start somewhere. Not sure about inheritance... new to classes :-D Thanks!

Edited by dadam88: n/a

In this case, if the two objects are of the same class, then you can fairly easily just pass one object to a method called on the other and act on both objects in that method. In code:

class player
{
    // .. all the stuff you posted ..
public:
    void fight(player& opponent)
    {
        hp -= opponent.atk;
        opponent.hp -= atk;
    }
};

int main()
{
    // .. all the stuff you posted ..
    one.fight(two);
    // ..
}

I assumed that atk means attack (power) and hp means health points. On that, I would recommend that you try and use more complete names rather than acronyms for variable names; this will help you in the long run for debugging and understanding your own code (and for us to understand better too).

>> not sure about inheritance... new to classes :-D

Well, if you don't feel ready to tackle inheritance just yet, that's understandable. Just know that you shouldn't try to program anything too big in object-oriented programming before reaching the point of being able to use inheritance and all the advantages of it (it's essential in OOP). It's OK to start step by step, but you might want to get into inheritance sooner rather than later.

Anyway, putting inheritance aside, one thing you could do in this case, which is typical of games or simulators, is to use a managing class. In your battle scenario, I would suggest you make a class called "battle" whose data members are the players and other data such as whose turn it is, etc. This way your main function just creates a battle object, adds two players to it and then calls fight(). I know it looks like it doesn't make much of a difference, and you're right, but when adding inheritance and the polymorphism that comes with it, it makes all the difference in the world.

I wrote out your code a bit cleaner and included a constructor to remove some of the clutter in main(). I also added a few comments saying briefly what is going on, but for the most part I kept what you had.
#include <iostream>
#include <string>
using namespace std;

class Player
{
    string name;
    int hp, dmg, playerNum;
public:
    Player() {}              // default constructor
    Player( int, int, int ); // a constructor so you can assign health, dmg and playerNum right away
    void SetName();
    void GetName();
    void GetStats();
    void Fight( Player& );   // take in a reference to the Player so you can modify their hp
};

Player::Player( int health, int damage, int num )
{
    hp = health;
    dmg = damage;
    playerNum = num;
}

void Player::SetName()
{
    cout << "Please enter Player " << playerNum << "'s username:\t";
    cin >> name;
    cout << endl;
}

void Player::GetName()
{
    cout << "Player " << playerNum << " is:\t" << name << endl;
    cin.get();
}

void Player::GetStats()
{
    cout << "Player " << playerNum << "'s stats are:" << endl;
    cout << "HP: " << hp << " DMG: " << dmg << endl << endl;
}

void Player::Fight( Player &opp )
{
    opp.hp -= dmg;
    cout << "Player " << playerNum << " is attacking player " << opp.playerNum << endl;
    cout << "Player " << playerNum << " dealt " << dmg << " damage!" << endl << endl;
}

int main()
{
    Player one(50, 10, 1), two(100, 5, 2); // hp, dmg, playerNum

    one.SetName();
    two.SetName();
    one.GetName();
    two.GetName();
    one.GetStats();
    two.GetStats();

    one.Fight(two);
    two.GetStats(); // display health change

    cin.get();
    return 0;
}

If you have any questions about this feel free to ask and I'll try my best to explain.

Thank you so much guys!

void fight(player& opponent) // Can you explain this bit of code? I am just able to create a new player on the spot called opponent and link it to two's address?
{
    hp -= opponent.atk;
    opponent.hp -= atk;
}

void Player::Fight( Player &opp ) // <---- This calls two into the function and it's at the address of opp?
{
    opp.hp -= dmg;
    cout << "Player " << playerNum << " is attacking player " << opp.playerNum << endl;
    cout << "Player " << playerNum << " dealt " << dmg << " damage!" << endl << endl;
}

@stfuo I should stick with constructors whenever I deal with classes, I take it; looks a lot cleaner. I highlighted in red what I'm a little bit confused with; everything else is really clear! Thanks!

Pretty much what happens is that two's reference gets stored into opp when it gets passed into the function Fight(), and when you modify opp it directly changes two, because opp points to the place in memory where two is located. If you were to not pass by reference (pass by value), then instead of storing the reference of two it just stores the value, and whatever you do to that value will not affect the original variable (in this case two). In the case we have, opp pretty much becomes two and anything you do to it will change two directly. I know I typed a bunch of stuff that could probably be summarized in a few lines, but I hope this clears it up a bit.

I understand, but not fully YET. Sure I will as a little time goes on. Thanks for everything guys (I think both of y'all are guys from your avatars!). I should be posting something maybe once a week, like a code. I posted my first real app called DiceGame; it's lame but hey, applying my newbie programming skills. THIS THREAD IS SOLVED ...
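For readers coming from other languages: the same pass-one-object-into-a-method-on-the-other pattern looks like this in Python. This is a sketch for contrast only; Python passes object references automatically, so no explicit & is needed:

```python
class Player:
    def __init__(self, hp, atk):
        self.hp = hp
        self.atk = atk

    def fight(self, opponent):
        # both players lose hp equal to the other's attack power;
        # 'opponent' refers to the same object the caller passed in
        self.hp -= opponent.atk
        opponent.hp -= self.atk

one = Player(hp=50, atk=10)
two = Player(hp=100, atk=5)
one.fight(two)
print(one.hp, two.hp)  # 45 90
```

Mutating `opponent` inside `fight()` changes the caller's object directly, which is exactly the pass-by-reference behavior the thread is discussing.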
https://www.daniweb.com/programming/software-development/threads/302387/running-different-clases-in-one-function
Exercises

- Type in and run the five programs presented in this chapter. Compare the output produced by each program with the output presented after each program in the text.

- Which of the following are invalid variable names? Why?

  Int  char  6_05  Calloc  Xx  alpha_beta_routine  floating  _1312  z  ReInitialize  _  A$

- Which of the following are invalid constants? Why?

  123.456  0x10.5  0X0G1  0001  0xFFFF  123L  0Xab05  0L  -597.25  123.5e2  .0001  +12  98.6F  98.7U  17777s  0996  -12E-12  07777  1234uL  1.2Fe-7  15,000  1.234L  197u  100U  0XABCDEFL  0xabcu  +123

- Write a program that converts 27° from degrees Fahrenheit (F) to degrees Celsius (C) using the following formula:

  C = (F - 32) / 1.8

- What output would you expect from the following program?

  #include <stdio.h>

  int main (void)
  {
      char c, d;

      c = 'd';
      d = c;

      printf ("d = %c\n", d);

      return 0;
  }

- Write a program to evaluate the polynomial shown here for x = 2.55:

  3x^3 - 5x^2 + 6

- Write a program that evaluates the following expression and displays the result (remember to use exponential format to display the result):

  (3.31 x 10^-8 x 2.01 x 10^-7) / (7.16 x 10^-6 + 2.01 x 10^-8)

- To round off an integer i to the next largest even multiple of another integer j, the following formula can be used:

  Next_multiple = i + j - i % j

  For example, to round off 256 days to the next largest number of days evenly divisible by a week, values of i = 256 and j = 7 can be substituted into the preceding formula as follows:

  Next_multiple = 256 + 7 - 256 % 7
                = 256 + 7 - 4
                = 259

  Write a program to find the next largest even multiple for the following values of i and j:
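As a quick sanity check on the rounding formula in the last exercise, here is a sketch in Python (not part of the book's exercises, which ask for C programs):

```python
def next_multiple(i, j):
    """Round i up to the next largest multiple of j: i + j - i % j."""
    return i + j - i % j

# reproduce the worked example: 256 % 7 = 4, so 256 + 7 - 4 = 259
print(next_multiple(256, 7))  # 259
```

Note the formula's quirk: if i is already an exact multiple of j, then i % j is 0 and the result is i + j, i.e. the *next* multiple rather than i itself.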
http://www.informit.com/articles/article.aspx?p=2246402&seqNum=6
Odoo Help

This community is for beginners and experts willing to share their Odoo knowledge. It's not a forum to discuss ideas, but a knowledge base of questions and their answers.

How to override onchange_product_tmpl_id with the new API?

Hi, I'm new to Odoo, and I need to override an existing method. What I need is to insert new lines into the Bill of Materials tree when the product template id changes. There is a method on the mrp_bom class called onchange_product_tmpl_id, so as I understand it, I need to inherit the mrp_bom class in a new class and override the onchange_product_tmpl_id method with a new one. How can I override the method with the new API (the old super() in OpenERP), and fire the onchange method when I change the product template id? Something like this:

class bom_template(models.Model):
    _inherit = 'mrp.bom'

    @api.onchange
    def _populate_bom_lines(self):
        lines = []
        for val in self.bom_template:  # value on a BOM template that contains products
            line_item = {
                'attr1': val.name,
                'attr2': val.a1,
                ...
            }
            lines += [line_item]
        self.update({'line': lines})

Hello my friend; here are some useful links: and here is an example:

Python:

def on_change_prix_id(self, cr, uid, ids, list_price, prix_vente_minimum, context=None):
    if list_price < prix_vente_minimum:
        # French: "Warning!! Your sale price is below the minimum sale price"
        My_error_Msg = 'Attention!! Votre Prix de vente est inférieur au prix de vente minimum'
        raise osv.except_osv(_("Error!"), _(My_error_Msg))
    else:
        return True

XML:

<field name="list_price" on_change="on_change_prix_id(list_price, prix_vente_minimum)"/>
https://www.odoo.com/forum/help-1/question/how-to-override-the-onchange-product-tmpl-id-with-new-api-73893
On Sun, Feb 22, 2009 at 05:51:58PM -0500, Alex Converse wrote:
>
> Thanks,
> Alex Converse

[...]

> @@ -871,24 +875,45 @@ static int decode_spectrum_and_dequant(AACContext * ac, float coef[1024], GetBit
> }
>
> static av_always_inline float flt16_round(float pf) {
> +#ifdef HAVE_IEEE754_PUN
> +    union float754 tmp;
> +    tmp.f = pf;
> +    tmp.i = (tmp.i + 0x00008000U) & 0xFFFF0000U;
> +    return tmp.f;
> +#else
>     int exp;
>     pf = frexpf(pf, &exp);
>     pf = ldexpf(roundf(ldexpf(pf, 8)), exp-8);
>     return pf;
> +#endif
> }

what is the point of the silly redundancy? I would replace the old code by the new, not keep both with #ifdefs; that's not good for readability. Of course that's unless you do find a system where the frexpf()/ldexpf() variant is faster, but I doubt such a system exists.

[...]
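For illustration, the integer-punning trick in the patch can be mimicked in Python with the struct module. This is a sketch of the idea, not FFmpeg code, and it assumes IEEE-754 single precision:

```python
import struct

def flt16_round(pf):
    """Round a float so that only the top 16 bits of its IEEE-754
    single-precision bit pattern survive, as in the patched code."""
    # reinterpret the float's bits as a 32-bit unsigned integer
    (bits,) = struct.unpack('<I', struct.pack('<f', pf))
    # add half of the low 16-bit range, then truncate the low bits
    bits = (bits + 0x00008000) & 0xFFFF0000
    # reinterpret the masked bits as a float again
    (rounded,) = struct.unpack('<f', struct.pack('<I', bits))
    return rounded

print(flt16_round(1.00001))  # 1.0: the tiny mantissa bits round away
```

Values whose low mantissa bits are already zero, such as 1.0 or 1.5, pass through unchanged, which is why the frexpf()/ldexpf() fallback in the patch computes the same result by arithmetic means.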
http://ffmpeg.org/pipermail/ffmpeg-devel/2009-February/065885.html
From: David Abrahams (dave_at_[hidden])
Date: 2007-06-24 15:14:21

on Sat Jun 23 2007, "Gennadiy Rozental" <gennadiy.rozental-AT-thomson.com> wrote:

> "David Abrahams" <dave_at_[hidden]> wrote in message news:87vede48j6.fsf_at_grogan.peloton...
>>
>
> Will do.
>
>>> Also I believe in many other places I am using type& within unnamed
>>> namespace. This seems to be valid usage, right?
>>
>> You've left out too much detail for me to make a determination.
>
> This is essentially boost/test/trivial_singleton.hpp:
>
> #define BOOST_TEST_SINGLETON_INST( inst ) \
> namespace { BOOST_JOIN( inst, _t)& inst = BOOST_JOIN( inst, _t)::instance(); }
>
> Here inst ## _t is the type of the singleton and inst is a reference to the instance.

If your question is whether you are allowed to use references inside unnamed namespaces, the answer is yes.

I note that you are still using dynamic initialization above. Using brace-initialization is important to avoid order-of-initialization problems.

--
Dave Abrahams
Boost Consulting

The Astoria Seminar ==>

Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2007/06/123835.php
When I was deploying my new social shopping site Wantbox a couple of months ago, I discovered that I couldn't use the Django print command in my live Dreamhost (Passenger WSGI) environment like this:

print "Hello world!"

As a new site, I was still debugging a few persistent nits and really needed to print out messages when certain areas of the code were accessed. A few sites recommended that I try the Django debug toolbar, but I was having trouble getting that to work reliably in my Dreamhost setup. I searched around a bit more and, after consuming many web pages, finally discovered a dead simple solution using the built-in Python logging module.

Python Logging in Django, Step-by-Step:

- Open your "settings.py" file and add this code to the bottom:

  # setup logger
  import logging
  PROJECT_DIR = os.path.dirname(__file__)
  PARENT_DIR = os.path.dirname(PROJECT_DIR)
  logging.basicConfig(level=logging.DEBUG,
                      format='%(asctime)s %(levelname)s %(message)s',
                      filename=os.path.join(PARENT_DIR, 'django.log'),
                      filemode='a+')

- Create a file called "django.log" at the same level as your project; my setup looks like:

  .
  ..
  .htaccess
  django.log
  passenger_wsgi.py
  passenger_wsgi.pyc
  public
  tmp
  wantbox (my django project)

- In the view where you need to "print", add the following:

  import logging
  logging.debug("hello world!")
  logging.info("this is some interesting info!")
  logging.error("this is an error!")

Now you can do...

tail -f django.log

...and see every logging message that you (and your third-party apps) write.

So easy, yet it still took this Django rookie a good day of Google searching to figure out. Here's hoping I've saved at least one of you from my fate.

Helpful Resources:

- StackOverflow: Debugging Django/Python on Dreamhost
- Simon Willison's Weblog: Debugging Django
- Python logging documentation
- Django on Dreamhost

UPDATE! Check out the suggestions from the Django pros in the comments for some nice improvements to this simple logging example!
15 Responses to "Basic Python Logging in Django 1.2: Debugging Made Simple"

When you use %(name)s in your logging format config, the name of the logger is shown in every log item. Example:

# project/package/file.py
import logging
logger = logging.getLogger(__name__)
logger.debug('Foo')

would result in something like this in your logfile:

TIMESTAMP - project.package.file - DEBUG: Foo

How about storing logs in the following directory structure?

logs/date/info.log
logs/date/error.log
logs/date/message.log

- - Always Remain Technically Updated.

When the application is growing, it is much better to use the logging.conf option, and define the different loggers not in the code but in the logging.conf config file. So through the application, different modules can have different loggers, logging levels etc. Have a look at this:

Django has much better and more reliable logging support (essentially based on standard Python logging).

For a person who's never used Python logging, and loves Django, this is GREAT! Thank you for posting.

Or you could try using Sentry:

I have just recently taken one step further than the basic logging setup you describe. That's exactly what I used until recently, when I spent a little time reading the Python logging module docs. The coolest thing I added was setting up an "SMTPHandler" to add to my basic file logger. I use this for "critical" error logging that I want to be notified about. You can add it to the default logger as another handler, and set its level to error or critical to only email you serious issues. So your call to logging.critical(...) will then log to the file, as normal, and also send you an email. Also check out the logging.exception() call. Very handy.

Great tips Chris, your comment is a nice addition to the post. One nice feature is to "Easily trace an entire request, even on a busy site where multiple requests are logging concurrently"; see Django request logging.
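The SMTPHandler setup described in the comment above can be sketched like this. The mailhost and addresses are placeholders, not details from the comment, and no mail is sent until something is actually logged at CRITICAL:

```python
import logging
import logging.handlers

logger = logging.getLogger('myapp')

# file handler for the normal log (delay=True postpones opening the file
# until the first record is emitted)
file_handler = logging.FileHandler('django.log', delay=True)
file_handler.setLevel(logging.DEBUG)
logger.addHandler(file_handler)

# email handler that only fires for critical records
smtp_handler = logging.handlers.SMTPHandler(
    mailhost='localhost',
    fromaddr='app@example.com',
    toaddrs=['admin@example.com'],
    subject='Critical error on the site',
)
smtp_handler.setLevel(logging.CRITICAL)
logger.addHandler(smtp_handler)

# logger.critical("...") would now log to the file AND send the email;
# logger.error("...") would only reach the file handler
```

The SMTP connection is opened lazily when a record is emitted, so constructing the handler costs nothing until a critical error actually occurs.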
on April 21st, 2011 at 9:02 am

[…] Basic Python Logging in Django 1.2: Debugging Made Simple | Mitch Fournier (tags: django logging) […]

This is seriously the best resource on debugging Django under Dreamhost. You should add it to the page under the 500 error handling with Passenger. Your solution works, and is so much cleaner. Thanks!

Very nice, straightforward and quick logging setup. Probably perfect for the current project I am on. Thanks for posting.

WOW just what I was looking for. Came here by searching for django
http://mitchfournier.com/2011/03/02/basic-python-logging-in-django-1-2-debugging-made-simple/
Debugging on mbed BLE

This article reviews a few of the debugging techniques that you can use when writing applications with the BLE API on mbed boards. We'll look at using LEDs, the interface chip and third-party sniffers to debug applications.

The quick method: LEDs

Most boards come with at least one LED that can be controlled using the standard mbed API. Turning the LED on or off, or flashing it, is a quick method of knowing that we've reached a certain state. For example:

- We can turn on an LED when the board starts up by making it the first action of main(). This helps us know that our board is alive and running our program. You'll find this function in some of the samples on our site.
- We can flash an LED when we enter an error handler. This tells us that we're in trouble.
- We can turn on an LED when running the background activity of the program in main(), for example with waitForEvent(). We'll turn it off whenever an interrupt handler pre-empts main(). If the LED never turns on again, it means that main() never got back control and we are trapped in the interrupt handler. For more information about handlers, see our discussion on event-driven programming.

LEDs require almost no coding and processing, giving them near-zero overhead. Here's an example of creating an LED object and turning it on and off:

#include <mbed.h>

DigitalOut led(LED1); // DigitalOut is part of the standard mbed API

... somewhere later ...

/* writing 1 to an LED usually turns it on,
 * but your board might want 0 rather than 1. */
led = 1;

... or perhaps in some other file ...

extern DigitalOut led;
led = 0;

The LED error() utility

The mbed SDK includes a utility called error(). It takes in printf()-style parameters, but its output is an LED pattern that is easily identified as an alert. This gives visual error indication without the need to write with printf() (as we do below). It is used for runtime errors, which are errors caused by:

- Code trying to perform an invalid operation.
- Hardware that cannot be accessed because it is malfunctioning.

For more information about error(), see the handbook.

Debugging with the mbed interface chip

Most mbed platforms come with an interface chip placed between the target microcontroller (in our case: the BLE microcontroller) and the development host (our computer). It is a USB bridge from the development host to the debugging capabilities available in ARM microcontrollers. This bridge functionality is encapsulated in a standard called CMSIS-DAP. Major toolchain vendors have started supporting this standard, so we expect it to grow in popularity and availability over time.

Note: some smaller boards reduce size and cost by not carrying an interface chip. If you're using one of those boards, you can skip to the next section.

By using the interface chip we can debug with:

- printf() and its associated capabilities.
- pyOCD.

The development host uses a USB connection with the interface chip to debug the microcontroller. Some of the terms in this image will be clarified later in the document.

Printf()

Programs typically use the printf() family to communicate something readable. This printf() traffic can be viewed with a terminal program running on the host.

Tip: The following examples use the CoolTerm serial port application to read the printf() output, but you can use any terminal program you want and expect similar results.

Tip: The UART protocol requires that the sender and receiver each maintain their own clocks and know the baud rate. mbed interface chips use the 9,600 baud rate, and your terminal program should be set to that baud rate to intercept the communication.

printf() doesn't come free; it exerts some costs on our program:

- An additional 5-10K of flash memory use. Do note, however, that this is the cost of the first use of printf() in a program; further uses cost almost no additional memory.
- Each call to printf() takes a significant time for processing and execution: about 100,000 instructions, or 10 milliseconds, depending on the clock speed. This is only a baseline: printf() with formatting will cost even more. If your clock runs slowly (as most microcontrollers' clocks do) and your computational power is therefore lower, printf() can sometimes cost so much it's actually used as a delay.

These two costs require that we use printf() judiciously. First, because there is limited code space on the microcontroller's internal flash. Second, because it delays the program so much. Be particularly careful about using it in an event handler, which we expect to terminate within a few microseconds.

Note: printf() doesn't require that you tell it beforehand how many parameters it should expect; it can receive any number you throw at it. To do this, you need to provide a format string with format specifiers, followed by a matching number of arguments. For example, printf("temp too high %d", temp): the format string is "temp too high %d", and the format specifier is %d. The last bit is the argument: temp. It matches the format specifier %d, which specifies an integer. You can learn more on Wikipedia.

Using printf() on mbed requires including the stdio header:

#include <stdio.h>

... some code ...

printf("debug value %x\r\n", value);

Here's a very basic example. In the UriBeacon program, we've added printf() in three places (this is too much for a real-life program):

- After setting DEVICE_NAME, we've added printf("Device name is %s\r\n", DEVICE_NAME);
- After startAdvertisingUriBeaconConfig(); we've added printf("started advertising\r\n");
- After ble.waitForEvent(); we've added printf("waiting\r\n");

This is the terminal output. Note that "waiting" is printed every time waitForEvent is triggered:

Printf() macros

There are some nifty tricks you can do with printf() using macro-replacement by the pre-processor.
The general form for a simple macro definition is:

#define MACRO_NAME value

This associates with the MACRO_NAME whatever value appears between the first space after the MACRO_NAME and the end of the line. The value constitutes the body of the macro.

printf()s are very useful for debugging when looking for an explanation to a problem. Otherwise, it is nice to be able to disable many of them. We can use the #define directive to create parameterized macros that extend the basic printf() functionality. For example, macros can expand to printf()s when needed, but to empty statements under other conditions.

The general form for defining a parameterized macro is:

#define MACRO_NAME(param1, param2, ...) {body-of-macro}

For example, it is often useful to categorise printf() statements by severity levels like 'DEBUG', 'WARNING' and 'ERROR'. For this, we define levels of severity. Then, each time we compile or run the program, we specify which level we'd like to use. The level we specified is used by our macros in an if condition. That condition can control the format of the information the macro will print, or whether or not it will print anything at all. This gives us full control of the debug information presented to us every run.

Remember that printf() can take as many parameters as you throw at it. Macros support this functionality: they can be defined with ... to mimic printf()'s behaviour. To learn more about using ... in your code, read about variadic macros on Wikipedia.

Here is an example (only a fragment of the original code block survived formatting):

-- within some is printed__); } }

Here's another example of macro-replacement that allows a formatted printf(). Set #define MODULE_NAME "<YourModuleName>" before including the code below, and enjoy colourised output. (This code block did not survive formatting.)

ASSERT() will use error() (a part of the mbed SDK that we reviewed earlier). error() not only flashes LEDs, it also puts the program into an infinite loop, preventing further operations. This will happen if the ASSERT() condition is evaluated as FALSE:

#define ASSERT(condition, ...) \
{ \
    if (!(condition)) { \
        error("Assert: " __VA_ARGS__); \
    } \
}

Fast circular log buffers based on printf()

When trying to capture logs from events that occur in rapid succession, using printf() may introduce unacceptable run-time latencies, which might alter the system's behaviour or destabilise it. But delays in printf() aren't because of the cost of generating the messages. The biggest cause of delay with printf() is actually pushing the logs to the UART. So the obvious solution is not to avoid printf(), but to avoid pushing the logs to the UART while the operation we're debugging is running.

To avoid pushing during the operation's run, we use sprintf() to write the log messages into a ring buffer (we'll explain what that is in the next paragraph). The buffer holds the debugging messages in memory until the system is idle. Only then will we perform the costly action of sending the information through the UART. In BLE, the system usually idles in main() while waiting for events, so we'll use main() to transmit.

sprintf() assumes a sequential buffer into which to write; it doesn't wrap strings around the end of the available memory. That means we have to prevent overflows ourselves. We can do this by deciding that we only append to the tail of the ring buffer if the buffer is at least half empty. In other words, so long as the information already held by the buffer doesn't exceed the half-way mark, we will add new information "behind" it. When we reach the half-way point, we wrap around the excess information to the beginning (rather than the tail) of the buffer, creating the "ring" of a ring buffer. Half is an arbitrary decision; you can decide to let the buffer get three-quarters full or only a tenth full.

Here is an example implementation of a ring buffer. We've created our own version of a wrapping printf() using a macro called xprintf().
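The scheme just described, append at the tail and wrap once the half-way mark is crossed, can be sketched in a few lines of Python. This is an illustration of the idea only; the names and buffer size are mine, not taken from the mbed code:

```python
BUFFER_SIZE = 512
HALF_BUFFER_SIZE = BUFFER_SIZE // 2

ring_buffer = bytearray(BUFFER_SIZE)
ring_buffer_tail = 0  # index where the next message will be written

def xprintf(fmt, *args):
    """Format a message into the ring buffer instead of pushing it
    straight to the UART; the idle loop drains the buffer later."""
    global ring_buffer_tail
    msg = (fmt % args).encode()
    # wrap to the start once the tail would cross the half-way mark,
    # instead of overflowing the buffer
    if ring_buffer_tail + len(msg) > HALF_BUFFER_SIZE:
        ring_buffer_tail = 0
    ring_buffer[ring_buffer_tail:ring_buffer_tail + len(msg)] = msg
    ring_buffer_tail += len(msg)

xprintf("event %d at t=%d\r\n", 1, 100)
xprintf("event %d at t=%d\r\n", 2, 250)
```

Formatting into memory is cheap; the expensive UART transmission is deferred until the system idles, which is the whole point of the technique.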
Debug messages accumulated using xprintf() can be read out circularly, starting from ringBufferTail and wrapping around (ringBufferTail + HALF_BUFFER_SIZE). The first message would most likely be garbled because of an overwrite by the most recently appended message. (The example code itself did not survive formatting.)

pyOCD-based debugging (GDB server)

Note: using GDB (or any other debugger) to connect to the GDB server is useful only if we have access to the program symbols and their addresses. This is currently not exported when building .hex files using the mbed online IDE. We therefore need to export our project to an offline toolchain to be able to generate either an .elf file that holds symbols alongside the program, or a .map file for symbols. In the following section, we're assuming an .elf file.

So far, we've connected the interface chip and the target microcontroller using UART. But there is another connection between the two: serial wire debug (SWD). This protocol offers debugging capabilities for stack trace analysis, register dumps and inspection of program execution (breakpoints, watchpoints etc.). When combined with a source-level debugger on the development host, such as the GNU Project Debugger (GDB), SWD offers a very rich debugging experience: much more powerful than printf().

Tip: GDB is often "too rich"; don't forget the fast efficiency of printf() and the LEDs.

The interface chip implements CMSIS-DAP. To drive the CMSIS-DAP interface chip over USB, you'll need to install the pyOCD Python library on the development host. To install pyOCD, follow the instructions to get the external USB libraries pyOCD relies on.

Notes:

- You'll need to run setup.py for both the USB libraries and pyOCD.
- You can follow HOW_TO_BUILD.md to see how to build pyOCD into a single executable GDB server program.
- A series of tests in the test sub-folder offers scripts that you may find useful as a foundation for developing custom interaction with the targets over CMSIS-DAP.
The GDB server can be launched by running gdb_server.py. This script should be able to detect any connected mbed boards. Here is an example of executing the script from the terminal while a Nordic mKIT is connected:

$ sudo python test/gdb_server.py
Welcome to the PyOCD GDB Server Beta Version
INFO:root:new board id detected: 107002001FE6E019E2190F91
id => usbinfo | boardname
0 => (0xd28, 0x204) [nrf51822]

At this point, the target microcontroller is waiting for interaction from a GDB server. This server is running at port 3333 on the development host. You can connect to it from a debugger such as GDB (the client). Here is an example of launching the GDB client:

~/play/demo-apps/BLE_Beacon/Build$ arm-none-eabi-gdb BLE_BEACON.elf
GNU gdb (GNU Tools for ARM Embedded Processors) 7.6.0.20140731
Reading symbols from /home/rgrover/play/demo-apps/BLE_Beacon/Build/BLE_BEACON.elf...
warning: Loadable section "RW_IRAM1" outside of ELF segments
(gdb)

Notice that we pass the .elf file as an argument. We could also have used the file command within GDB to load symbols from this .elf file after starting GDB. The command set offered by GDB to help with symbol management and debugging is outside the scope of this document, but you can find it in GDB's documentation.

Now, we connect to the GDB server (for ease of reading, we've added line breaks in the path):

(gdb) target remote localhost:3333
Remote debugging using localhost:3333
warning: Loadable section "RW_IRAM1" outside of ELF segments
HardFault_Handler () at /home/rgrover/play/mbed-src/libraries
    /mbed/targets/cmsis/TARGET_NORDIC/TARGET_MCU_NRF51822
    /TOOLCHAIN_ARM_STD/TARGET_MCU_NORDIC_16K
    /startup_nRF51822.s:115
115     B       .
(gdb)

Now we can perform normal debugging using the GDB command console (or a GUI, if our heart desires).

The UART service

BLE has a UART service that allows debugging over the BLE connection (by forwarding the output over BLE), rather than through the interface chip.
Note: you'll need an app that can receive the service's output (the logs). There are many of these; you could try Nordic's nRF UART.

To be able to use the UART service, your app needs:

#include "UARTService.h"
...
uart = new UARTService(ble);
... and somewhat later ...
uart->writeString("some updated\r\n");

Note that:
* We use writeString(), not printf(). writeString() is defined in the UARTService.h header and calculates the string's length for us.
* We have to prepare the output message.
* Currently you can only have one BLE connection to the device at any one time, and the UART app used for debugging takes up that connection. For example, if you're monitoring a heart rate device and receiving output over the nRF UART app, you cannot simultaneously connect to the heart rate device with a standard heart rate app.

Sniffers

Third-party sniffers can intercept the BLE communication itself and show us what's being sent (and how). For example, we could see if our connection parameters are being honoured. Sniffing radio activity can now be done with smart phone apps like Bluetooth HCI Logger (for Android). These generate logs that can be analysed with tools like Wireshark.

Tip: to learn about the Android Bluetooth HCI snoop log, start here.

If you want to use a separate BLE device (not your phone) to sniff the BLE traffic, you can try Nordic's nRF Sniffer on a Nordic BLE board.
https://docs.mbed.com/docs/ble-intros/en/master/Introduction/Debugging/
How to Extract Words From PDFs With Python

Extract just the text you need

As I mentioned in my previous article, I've been working with a client to help them parse through hundreds of PDF files to extract keywords in order to make them searchable. Part of solving the problem was figuring out how to extract textual data from all these PDF files. You might be surprised to learn that it's not that simple. You see, PDFs are a proprietary format by Adobe that come with their own little quirks when it comes to automating the process of extracting information from each file. Luckily, we have the right language for the job: Python.

Now, I've made my love for Python clear. It's easily readable and has a ton of awesome libraries that allow you to do basically anything. It's the perfect tool in your utility belt. As I've mentioned before, it makes you Batman. What follows is a tutorial on how you can parse through a PDF file and convert it into a list of keywords.

Setup

For this tutorial, I'll be using Python 3.6.3. You can use any version you like (as long as it supports the relevant libraries). You will require the following Python libraries in order to follow this tutorial:
- PyPDF2 (to convert simple, text-based PDF files into text readable by Python)
- textract (to convert non-trivial, scanned PDF files into text readable by Python)
- NLTK (to clean and convert phrases into keywords)

Each of these libraries can be installed with the following commands inside terminal (on macOS):

pip install PyPDF2
pip install textract
pip install nltk

This will download the libraries you require to parse PDF documents and extract keywords. Make sure your PDF file is stored within the folder where you're writing your script.

Start up your favorite editor and type:

Note: All lines starting with # are comments.
Step 1: Import all libraries

import PyPDF2
import textract
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords

Step 2: Read PDF file

#Write a for-loop to open many files (leave a comment if you'd like to learn how).
filename = 'enter the name of the file here'
#open allows you to read the file.

After reading each page of the PDF, you end up with a text variable that contains all the text derived from our PDF file. Type print(text) to see what it contains. It likely contains a lot of spaces, possibly junk such as '\n', etc.

#Now, we will clean our text variable and return it as a list of keywords.

Step 3: Convert text into keywords

#The word_tokenize() function will break our text phrases into individual words.
tokens = word_tokenize(text)

#We'll create a new list that contains punctuation we wish to clean.
punctuations = ['(',')',';',':','[',']',',']

#We initialize the stopwords variable, which is a list of words like "The," "I," "and," etc. that don't hold much value as keywords.
stop_words = stopwords.words('english')

#We create a list comprehension that only returns a list of words that are NOT IN stop_words and NOT IN punctuations.
keywords = [word for word in tokens if not word in stop_words and not word in punctuations]

Now you have keywords for your file stored as a list. You can do whatever you want with it. Store it in a spreadsheet if you want to make the PDF searchable, or parse a lot of files and conduct a cluster analysis. You can also use it to create a recommender system for resumes for jobs.

I hope you found this tutorial valuable! If you have any requests, would like some clarification, or find a bug, please let me know!
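As a quick addendum: the cleaning in Step 3 can be reproduced without NLTK, using only the standard library, which is handy for a sanity check before downloading the NLTK corpora. The stopword list below is a tiny illustrative subset of NLTK's English list, not the real thing:

```python
import string

# Tokenize, strip punctuation, and drop stopwords: a pure-Python stand-in
# for word_tokenize() + stopwords.words('english').
stop_words = {'the', 'i', 'and', 'a', 'of'}   # illustrative subset only

text = "The quick brown fox, the lazy dog; and a cat."
no_punct = text.lower().translate(str.maketrans('', '', string.punctuation))
tokens = no_punct.split()
keywords = [word for word in tokens if word not in stop_words]
print(keywords)  # ['quick', 'brown', 'fox', 'lazy', 'dog', 'cat']
```

Stripping punctuation up front with str.translate also removes the need for the separate punctuations list used above.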
https://medium.com/better-programming/how-to-convert-pdfs-into-searchable-key-words-with-python-85aab86c544f
(The full set of ParallelExtensionsExtras Tour posts is available here.)

Delegates in .NET may have one or more methods in their invocation list. When you invoke a delegate, such as through the Delegate.DynamicInvoke method, the net result is that all of the methods in the invocation list get invoked, one after the other. Of course, in a parallel world we might want to invoke all of the methods in the invocation list in parallel instead of sequentially. The DelegateExtensions.cs file in ParallelExtensionsExtras provides an extension method on Delegate that does just that.

public static object ParallelDynamicInvoke(
    this Delegate multicastDelegate, params object[] args)
{
    return multicastDelegate.GetInvocationList()
        .AsParallel().AsOrdered()
        .Select(d => d.DynamicInvoke(args))
        .Last();
}

This ParallelDynamicInvoke extension method accepts the target multicast delegate as well as the params array to provide to the delegate invocation, just as with Delegate.DynamicInvoke:

public object DynamicInvoke(params object[] args);

ParallelDynamicInvoke uses the delegate's GetInvocationList method to retrieve an array of all of the methods in the invocation list, each of which is represented as a Delegate. Then, PLINQ is used to invoke each individual delegate, returning the last delegate's result.

Would the normal serial invocation also return only the result of the last invocation? It seems odd to throw away all of the other results.

Hi Robert- Yes, and those were the semantics I was trying to mimic here for the parallel implementation. Of course, you wouldn't have to do that. You could, for example, replace the ".Last()" with a ".ToArray()" and you'll have all of the results returned, not just the last.

I like the idea, and I would *love* to use it with events. However, one standard event pattern includes passing the args from event handler to event handler, allowing each to potentially modify the args (for example, setting Cancel or Handled to true).
Doing this in parallel introduces problems, since the args are potentially being used concurrently. Introducing locking would remove the benefit of the parallelism unless you were smart about it. Just something to beware of, I suppose. 🙂

Thanks for the comment, Keith. That's definitely true: if there's a strict dependency between the delegates, they can't be invoked in parallel. In most cases with this kind of situation, though, are there typically such real dependencies? I've seen event handlers that set Cancel or Handled to true, but I've not seen one (at least not that I can remember) that reverts the decision of a previous handler to set it back to the default of false. This makes sense, since event handlers typically don't know the order in which they were registered and thus don't know in what order they'd be invoked. If the directionality of the bit flag being set is just from false to true, then theoretically you could still do these in parallel (especially if the decision of other delegates isn't exposed, such as through a SetHandled method rather than a get/set property).

Regarding Keith/toub: toub is correct in that event handlers are not guaranteed to execute in order, as there is no "order"… the only way to chain events together would be if A.event calls B.EventHandler which fires B.event which calls C.EventHandler… but this could be handled in parallel as any other event handler attached to A.event. Quite frankly, I would expect event handlers to be one of the first places to easily take advantage of the Parallels namespace.

On that note, I do think it'd make sense for the e.Cancel to somehow cause a Stop or Break in the parallel execution… not sure quite how such semantics would be worked out, but I'm sure given an hour of thinking, a solution would be obvious.

Thanks, Scott. And, yes, you should be able to end the parallel execution if the particular property you were interested in on the event arg got set. e.g.
public event HandledEventHandler MyEvent;

private void RaiseMyEvent()
{
    var eventValue = MyEvent;
    if (eventValue != null)
    {
        var handlers = (HandledEventHandler[])eventValue.GetInvocationList();
        var hea = new HandledEventArgs(defaultHandledValue: false);
        Parallel.ForEach(handlers, (handler, loop) =>
        {
            handler(null, hea);
            if (hea.Handled) loop.Stop();
        });
    }
}

Stephen, I also noticed that you have the AsOrdered. If you didn't depend on order, I am assuming this can be removed, correct?

@Rich Crane: This isn't about the order of the execution of the delegates, but rather about ensuring that their outputs are ordered… that's just so that Last does in fact provide the last one based on the original ordering.
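For readers outside .NET, the same idea can be sketched in Python with a thread pool. This is an analogue, not a port: ThreadPoolExecutor stands in for PLINQ, and, like AsOrdered, map() yields results in input order, so taking the final element really does give the last delegate's result:

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_dynamic_invoke(delegates, *args):
    # Invoke every callable in parallel; map() preserves input order,
    # so results[-1] mirrors the C# .Last() semantics.
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(lambda d: d(*args), delegates))
    return results[-1]

handlers = [lambda x: x + 1, lambda x: x * 2, lambda x: x - 3]
print(parallel_dynamic_invoke(handlers, 10))  # 7
```

Returning the full results list instead of results[-1] is the analogue of swapping .Last() for .ToArray(), as mentioned earlier in the comments.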
https://blogs.msdn.microsoft.com/pfxteam/2010/04/15/parallelextensionsextras-tour-11-paralleldynamicinvoke/
Have you ever wanted to play a Pokemon game in your terminal? In this workshop, we'll learn about file and exception handling, random sampling, and code organization!

Setting up

You can install Python locally on your computer or use repl.it, a free, online IDE, to write the code for this project. Start a new Python file here.

Once you have your Python file open, let's import three libraries!

import random as rand
import time
import sys

- random will allow us to sample random Pokemon names, tell us whether we caught a Pokemon or not, and give us randomized time between Pokemon appearances!
- time will help us delay the program between Pokemon spawns.
- sys will help us exit the program before all of the code executes.

Next, we need a list of all of the available Pokemon! Luckily, a GitHub user by the name of @cervoise has a list of 721 Pokemon names. You can download the .txt file here. In repl.it, you can copy and paste the contents into a new file.

We can open a file with the open() function. The first required argument is the file name; I have the .txt file stored with the name allpokemon.txt in the same directory as the Python file. The second required argument refers to the permissions that we open it with. Since we only need to read its contents, we can use the permission "r", signifying "read". Calling readlines() on a file object splits content on different lines as list elements.

file = open('allpokemon.txt', 'r')
all_pokemon = file.readlines()

Now, we'll create an owned Pokemon list which keeps track of the Pokemon we caught.

owned_pokemon = []

Pokemon Catching

Let's start having Pokemon appear! We can force the user to enter a valid response by asking them again (possibly forever) whenever they don't give us the input we're expecting. Since we want a Pokemon to spawn when the loop starts, we can have the program print this information.

while True:
    print('\nA pokemon has appeared!\n')

So, what Pokemon actually spawned?
We can randomly choose a Pokemon by sampling our list of Pokemon with random.choice(). Once we select the appearing Pokemon, we should also strip() the name. This is because readlines() retains escape characters, so it would leave a newline after its name, which we don't want. To provide emphasis, we can also convert its entire name to uppercase with .upper() (optional). Finally, we can tell the user which Pokemon has appeared before them.

current_pokemon = rand.choice(all_pokemon).strip().upper()
# \n is an escape character which moves on to the next line. This helps the output stay organized.
# Again, whether you want to include this is entirely up to you!
print("It's a", current_pokemon + '!\n\n')

Intuitively, the user might want to catch this Pokemon. So, we can set up another while True: loop to take care of that. We'll collect user input with input(). We'll also give the user three pokeballs to throw and a 33% chance to catch it. Notice that we make the input lowercase to avoid case-sensitivity.

catches_left = 3
while True:
    catch_state = input('Would you like to catch it? (y/n)')
    if catch_state.lower() == 'y' or catch_state.lower() == 'yes':
        pass
    elif catch_state.lower() == 'n' or catch_state.lower() == 'no':
        pass
    else:
        print('\nValid input is y, yes, n, or no')

Let's write some logic.
- If we don't have any more catches, we can break.
- We'll use random.randint() to get a random number. If it's less than or equal to 33 (33% chance), we can add it to our owned Pokemon and break. If we can't catch it, we can subtract one from our catches.
- If the user doesn't want to catch it, we can break.

while True:
    print('\nA pokemon has appeared!\n')
    current_pokemon = rand.choice(all_pokemon).strip().upper()
    # \n is an escape character which moves on to the next line. I feel like this helps the output stay organized.
    # Again, whether you want to include this is entirely up to you!
    print("It's a", current_pokemon + '!\n\n')

    catches_left = 3
    while True:
        catch_state = input('Would you like to catch it? (y/n)')
        if catch_state.lower() == 'y' or catch_state.lower() == 'yes':
            # no more catches left:
            if catches_left <= 0:
                print('You weren\'t able to catch the pokemon. ')
                break
            else:
                catch = rand.randint(1, 100)
                if catch <= 33:
                    # check uniqueness before appending, so a genuinely new
                    # Pokemon is announced (checking afterwards always finds it)
                    if current_pokemon not in owned_pokemon:
                        print('\n\nYou got a new Pokemon!\n\nAdding it to the pokedex...')
                    owned_pokemon.append(current_pokemon)
                    break
                else:
                    catches_left -= 1
                    print('\nIt jumped out, try again!')
        elif catch_state.lower() == 'n' or catch_state.lower() == 'no':
            print('Bye', current_pokemon + '!')
            break
        else:
            print('\nValid input is y, yes, n, or no')

Lastly, we should give some options to the user (you can always add your own!).
- View owned Pokemon
- View Pokedex (unique Pokemon)
- Save file
- Exit
- Quit Program

We can print out these options first,

print('\n\nWould you like to see owned pokemon or your pokedex?\n')
print('O = Owned Pokemon')
print('P = Pokedex')
print('U = Update both to a file!')
print('N = Exit')
print('S = Exit Program\n')

and pull out our trusty while loop for inputs. (sys.exit() exits the program)

while True:
    choice = input('Pick your choice!')
    if choice.lower() == 'o':
        pass
    if choice.lower() == 'p':
        pass
    if choice.lower() == 'u':
        pass
    if choice.lower() == 'n':
        break
    if choice.lower() == 's':
        sys.exit()

If the user wants to see their owned Pokemon, we can print out the owned_pokemon list:

if choice.lower() == 'o':
    print(owned_pokemon)

If the user wants to view their Pokedex (unique Pokemon), we can plug the owned_pokemon into a set (which doesn't allow duplicate values).

if choice.lower() == 'p':
    print(set(owned_pokemon))

Save File Functionality

If the user wants to write to a save file, we can create the new files owned_pokemon.txt and pokedex.txt and write the contents from our created lists to them. We'll use the set again for the Pokedex.
We can create a new file by using the permission "w"; this creates a new file if it doesn't exist already. If it does, it wipes its contents. Remember to close the file to save it!

if choice.lower() == 'u':
    print('Working on it!')
    # create file
    owned_pokemon_file = open('owned_pokemon.txt', 'w')
    # for every caught pokemon
    for i in owned_pokemon:
        # write it to a file and split pokemon with new lines
        owned_pokemon_file.write(i + '\n')
    # close to save
    owned_pokemon_file.close()
    # same for pokedex
    pokedex_file = open('pokedex.txt', 'w')
    for i in set(owned_pokemon):
        pokedex_file.write(i + '\n')
    pokedex_file.close()
    print('\nDone! View them in your explorer tab.')

At this point, we just have to set up a random wait period to spawn a new Pokemon. We can use the function time.sleep() with rand.randint() to delay the program for a varying amount of time.

print('\n\nWait for a new pokemon...')
time.sleep(rand.randint(5, 10))

Lastly, we can implement save file loading. Let's prompt the user for a save file before they start the game using a while loop. If they don't have one, then we can start them off with an empty inventory and break. If they do, we can ask them for owned_pokemon.txt, the file that we wrote in the "Save File" part of the program. We'll open the file with "r" (read) permissions and read every item into our list (stripping it to remove escape characters). Once we finish, we can close the file and break.

To avoid any exceptions, we can wrap the file opening with a try: except: to catch any errors. If the file doesn't exist, we encounter a FileNotFoundError, and we can tell the user to try again.

while True:
    oldsave = input('Do you have a save file? (y/n)')
    if oldsave.lower().strip() == 'n':
        print('\nWelcome!')
        break
    elif oldsave.lower().strip() == 'y':
        print('\nWelcome back! Add your save file path below! (owned pokemon)')
        own_file = input()
        try:
            own_file_start = open(own_file, 'r')
            owned_pokemon = [pokemon.strip() for pokemon in own_file_start.readlines()]
            own_file_start.close()
            print('\nAll set!')
            break
        # file doesn't exist
        except FileNotFoundError:
            print('We had a problem processing your files, try again or start new!')

Done! Let's try running our program. Here's what happens when I run without a save file. I encountered a Toxicroak and caught it on my second try. When I backed it up to a file, the Pokemon was transferred successfully!

You can view the source code on GitHub!

Ways that you can hack this workshop are:
- Encrypt the saved file
- Different Pokeballs

Happy Hacking!
https://workshops.hackclub.com/python_pokedex/
Python interface to Dundas rest api.

Project description

Manage sessions for Dundas.

Description

Dundas has a very complete REST API. With completeness comes complexity, and this module will help you use it in an easier way.

Why this module is useful

It currently does 3 things for you:
- If you use dundas.Session within a context manager, the context manager will log you in and out automagically, no matter what happens. You can use the session object as a normal object as well, as long as you do not forget to log in and out yourself.
- Each and every call to the API needs to have the same sessionId parameter. This module creates shortcuts for you for get, delete, to make your life easier. You do not need to repeat the host, api path prefix or sessionId every single time.
- Some API calls are ported and might have helper methods. I am updating the module based on what I need and use, so I do not expect to have everything ported on my own.

Installation

Simply with pip, from pypi:

python3 -m pip install pydundas

or, assuming you do not have permission to store the module globally:

python3 -m pip install --user pydundas

The module should be able to work with python2 as well, but it is untested, and as python2 will be end-of-life'd in a few months anyway I did not look into it.

Examples

You can see all the examples in one directory. All the examples below assume url, user and pwd variables.

Happy flow with context manager

with Session(user=user, pwd=pwd, url=url) as d:
    print(d.get('Server').text)

Output (example):

[{"name":"winterfell","serverGroupId":1,"lastSeenTime":"2019-03-29T09:33:38.880327Z","__classType":"dundas.configuration.ServerInfo"}]

When the variable d comes out of scope, so outside the with statement, you will be automagically logged out.
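The log-in-and-out-on-context behaviour can be illustrated with a toy class. The names and fields below are invented for illustration and are not the real pydundas API:

```python
class ToySession:
    """Minimal sketch of the pattern: __enter__ logs in, __exit__ logs out,
    even when the body raises."""
    def __init__(self, user, pwd, url):
        self.user, self.pwd, self.url = user, pwd, url
        self.logged_in = False

    def __enter__(self):
        self.logged_in = True    # a real session would call the logon API here
        return self

    def __exit__(self, exc_type, exc, tb):
        self.logged_in = False   # always runs, so the session id is released
        return False             # do not swallow exceptions from the body

with ToySession('arya', 'valar morghulis', 'winterfell.got') as s:
    assert s.logged_in
print(s.logged_in)  # False - logged out on exit
```

Because __exit__ runs even when the with-body raises, the logout happens "no matter what happens", which is exactly the guarantee described above.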
Read credentials from a yaml file

If you have a yaml file with user, pwd and url keys, then you can read it from pydundas:

user: arya
pwd: 'valar morghulis'
url: winterfell.got

from pydundas import creds_from_yaml
creds = creds_from_yaml('credentials.yaml')
with Session(**creds) as d:
    print(d.get('Server').text)

Exceptions within the context manager are properly handled

with Session(user=user, pwd=pwd, url=url) as d:
    d.get('you/know/nothing')

Output:

404 Client Error: Not Found for url:

API calls

Constant

Most constants can be used via their human-readable name.

from pydundas import Api, Session, creds_from_yaml
with Session(**creds_from_yaml('credentials.yaml')) as d:
    a = Api(d)
    c = a.constant()
    # returns ['STANDARD_EXCEL_EXPORT_PROVIDER_ID']
    print(c.getNamesById('679e6337-48aa-4aa3-ad3d-db30ce943dc9'))
    # returns '679e6337-48aa-4aa3-ad3d-db30ce943dc9'
    print(c.getIdByName('STANDARD_EXCEL_EXPORT_PROVIDER_ID'))

Cube

You can warehouse a cube, and get some information about it:

with Session(**creds) as d:
    api = Api(d)
    capi = api.cube()
    cube = capi.getByPath('DP', '/CustomReports/2daysent/1mailing sendouts')
    if cube is None:
        print("Gotcha, no cube named like that.")
        sys.exit(1)
    print(cube.json())
    print(cube.is_checked_out())
    cube.warehouse()
    print(cube.isWarehousing())
    cube.waitForWarehousingCompletion()

Health

You can run all checks, and fix the failing ones:

with Session(**creds, loglevel='warn') as d:
    api = Api(d)
    hapi = api.health()
    failings = hapi.check(allchecks=True)
    print(failings)
    for f in failings:
        hapi.check([f], fix=True)

Notification

You can get a notification by its name and then run it.
napi = api.notification()
notif = napi.getExactName(name='Awesome notification')
if len(notif) != 1:
    print("None or more than one notification with this name.")
    sys.exit(1)
napi.run(notif[0]['id'])

Project

For example, to find the ID of a project:

from pydundas import Api, Session, creds_from_yaml
with Session(**creds_from_yaml('credentials.yaml')) as d:
    api = Api(d)
    project = api.project()
    print(project.getProjectIdByName('DP'))

Develop

You can either use conda or virtualenv. Most relevant commands are in the Makefile. First edit the first line of the Makefile to choose if you want to use conda or virtualenv.

# Build an environment with all dependencies
make devinit

# Tests
make pep8
make unittest

# Build a package
make package

# Clean up everything
make purge
https://pypi.org/project/pydundas/
Tips, tricks, and guides for developing on modern Windows platforms

When you double-click an .mp3 file you expect your music player to play the song; when you double-click an .xls file you expect Excel to open the spreadsheet. Wouldn't it be cool to do this in your own Universal Windows Platform (UWP) apps? It's actually quite easy to do. Follow this tutorial to learn how to launch your app by double-clicking an associated file (and your app can do something with that file, of course). This tutorial is suitable for beginners with a little bit of Windows app development experience.

We need to do the following: create a new blank UWP project, then open Package.appxmanifest:
1. Select the Declarations tab.
2. Choose File Type Associations from the Available Declarations: drop-down list.
3. Click the Add button.

The error (cross in red circle) icons indicate that you have some information to fill in to complete the association. Fill in the relevant details (see for detailed explanations of all the fields):
1. Type your app's display name (this shows when the user is asked to choose which app to open a file with).
2. Add a thumbnail image for files associated with your app. You can click the browse button to find an image.
3. Add an info tip to show when the user hovers the mouse cursor over a file.
4. Give the file type a name. This must be in lowercase.
5. Select Open is safe.
6. Type text/plain in the Content type field.
7. Finally, enter the file extension you want to use to launch your app in the File type field (I used .fas - file access sample).

Save Package.appxmanifest (Ctrl-S). Now that the app is associated with .fas files, we need to tell it what to do when it is launched by clicking one of those files.
Open up App.xaml.cs, and add the following method: protected override void OnFileActivated(FileActivatedEventArgs args) { base.OnFileActivated(args); var rootFrame = new Frame(); rootFrame.Navigate(typeof(MainPage), args); Window.Current.Content = rootFrame; Window.Current.Activate(); } This method automatically executes when the application is launched as the result of double-clicking an associated filetype. FileActivatedEventArgs contains information about the file that was clicked. The method opens an instance of FileActivatedEventArgs to it. Now open Add the following OnNavigatedTo method: protected override async void OnNavigatedTo(NavigationEventArgs e) { base.OnNavigatedTo(e); var args = e.Parameter as Windows.ApplicationModel.Activation.IActivatedEventArgs; if (args != null) { if (args.Kind == Windows.ApplicationModel.Activation.ActivationKind.File) { var fileArgs = args as Windows.ApplicationModel.Activation.FileActivatedEventArgs; string strFilePath = fileArgs.Files[0].Path; var file = (StorageFile)fileArgs.Files[0]; await LoadFasFile(file); } } } Windows.Ui.Xaml.Navigationand System.Threading.Tasksnamespaces. await LoadFasFile(file)line will show an error – because we haven’t implemented this method yet. This code finds the file that was clicked to launch the app (if the app was launched that way), then sends it to the LoadFasFile method, which we’ll create next. To prove that we can do something with the clicked file, let’s load its contents into a TextBlock. Add a TextBlock to MainPage.xaml called “FileContents”. You can paste this line of XAML straight into the main Grid: <TextBlock x: Now add this method to private async Task LoadFasFile(StorageFile file) { var read = await FileIO.ReadTextAsync(file); FileContents.Text = read; } That little method just loads the contents of file into the FileContents TextBlock you added to MainPage earlier. The project is done! 
Create an ordinary text file and put any text you want in there (I always generate random text at), and save the file with the required extension (.fas if you're following along). Deploy the app to your local machine, then double-click on the .fas file you just created. Your app will launch, and the contents of the clicked file will be displayed in the window.

Thanks for this amazing tutorial. But I want to open an .mp4 file in a media element. Please help me.

This is a top tutorial, explained in all details - especially all the little steps: launch the file, the app, OnFileActivated, and navigating to MainPage.

Thanks a lot, it's really helpful for me for video 😉 For that I copied the file to LocalFolder and set the source on a MediaElement:

StorageFolder localFolder = ApplicationData.Current.LocalFolder;
await file.CopyAsync(localFolder, file.Name, NameCollisionOption.ReplaceExisting);
localSettings.Values["video_uri"] = file.Name;
contentFrame.Navigate(typeof(HomePage));

This is a lifesaver. Thank you for this very clear and to-the-point tutorial.

This works OK if the app is not open. However, if you click on a file when the app is already open it fails.

I've just tested it, and it works fine for me. I did the following:
1) Double-clicked a test file with extension .FAS (the extension I set up for the app to use).
2) App opened and displayed content from the file.
3) Left the app open and displaying the content of the file.
4) Double-clicked another file with the .FAS extension.
5) App opened the second file and displayed its contents.

How does it fail for you? What exactly are you doing? Have you changed the code from the blog post at all? Are you on a retail version of Windows 10 (i.e. not an insider preview)?

I get error dialog: The remote procedure call failed.
http://grogansoft.com/blog/?p=1197
Hey... thank you all for your help. I tried to fix my code. Just as a reminder, this was the question:

The following program segment asks the user for his or her first name in main(). Write a function that reverses the name in the array so that main() can print the reversed name. To make it easy, assume that the user name is always five characters long ("Jerry" or "Peter", for example).

This is my new code. How come it doesn't work? Not all the characters reverse.

Code:
#include <iostream>
using namespace std;

rev(char name[6]);

main()
{
    char name[6];
    cout << "What is your first name (5 letters, please)? ";
    cin >> name;
    rev(name);
    for(int a=0; a<5; a++)
    {
        cout<<name[a];
    }
    return 0;
}

//from here is what I added
rev(char name[6])
{
    char temp;
    temp = name[0];
    name[0] = name[4];
    name[4] = temp;
    temp = name[1];
    name[0] = name[3];
    name[3] = temp;
    return 0;
}

Thanks, and looking forward to the answer...
https://cboard.cprogramming.com/cplusplus-programming/67057-passing-values-question-continue.html
On Fri, 11 Mar 2005, john stultz wrote:

> +/* cyc2ns():
> + * Uses the timesource and ntp adjustment interval to
> + * convert cycle_ts to nanoseconds.
> + * If rem is not null, it stores the remainder of the
> + * calculation there.
> + */

This function is called in critical paths and it would be very important to optimize it further.

> +static inline nsec_t cyc2ns(struct timesource_t* ts, int ntp_adj, cycle_t cycles, cycle_t* rem)
> +{
> +	u64 ret;
> +	ret = (u64)cycles;
> +	ret *= (ts->mult + ntp_adj);

This only changes when ntp_adj changes. Maybe maintain the sum separately?

> +	if (unlikely(rem)) {
> +		/* XXX clean this up later!
> +		 * but for now relax, we only calc
> +		 * remainders at interrupt time
> +		 */
> +		u64 remainder = ret & ((1 << ts->shift) - 1);
> +		do_div(remainder, ts->mult);
> +		*rem = remainder;

IA64 does not do remainder processing (maybe I just do not understand this...) but this seems to be not necessary if one uses 64 bit values that are properly shifted?

> +	}
> +	ret >>= ts->shift;
> +	return (nsec_t)ret;
> +}

The whole function could simply be:

#define cyc2ns(cycles, ts) (cycles*ts->current_factor) >> ts->shift
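The multiply-then-shift conversion under discussion, ns = (cycles * mult) >> shift, can be checked numerically. The clock frequency and shift below are made-up illustrative values, not numbers from the patch:

```python
# Pick mult so that mult / 2**shift approximates nanoseconds-per-cycle;
# the hot path then needs only an integer multiply and a right shift.
freq_hz = 250_000_000                  # hypothetical 250 MHz timesource
shift = 22
mult = (10**9 << shift) // freq_hz     # fixed-point ns-per-cycle

def cyc2ns(cycles):
    # No division in the conversion itself, matching the kernel idiom.
    return (cycles * mult) >> shift

print(cyc2ns(freq_hz))  # one second's worth of cycles -> 1000000000 ns
```

At 250 MHz each cycle is exactly 4 ns, so mult comes out as an exact fixed-point value here; for frequencies that do not divide evenly, mult is rounded and the shift width bounds the error.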
http://lkml.org/lkml/2005/3/15/24
Image Shifting using NumPy from Scratch

Image shifting is simply shifting each pixel of the image to a new position. This is a method of pixel shift used in digital cameras to produce super-resolution images. We can think of a pixel as a point in the coordinate axis to be shifted in any direction. When we shift all the pixels of the image, we can say the image is shifted.

Credits of Cover Image - Photo by James Lewis on Unsplash

In this blog article, we will try to shift the image as we shift a point in the coordinate axis, completely using NumPy operations. The image is always considered as a 2D plane, hence we shall also consider a 2D coordinate axis having X as the horizontal axis and Y as the vertical axis. The coordinate axis is divided into 4 quadrants, namely -
- Q1 → Quadrant where both X and Y are positive.
- Q2 → Quadrant where X is negative and Y is positive.
- Q3 → Quadrant where both X and Y are negative.
- Q4 → Quadrant where X is positive and Y is negative.

We assume our original image to be at the origin, i.e., (0, 0). To visualize this, we can imagine something like the below -

Now, let's say we want to shift the image to coordinates (3, 4). Basically, the origin of the image has to be shifted from (0, 0) to (3, 4), something like the below -

Likewise, based on the coordinate points, we need to shift the image. Let's try to understand and implement this from scratch using the module NumPy, starting from a 2D matrix, because images are just large matrices.

Time to Code

The packages that we mainly use are:
- NumPy
- Matplotlib
- OpenCV → It is only used for reading the image (in this article).

Import the Packages

import numpy as np
import cv2
import json
from matplotlib import pyplot as plt

2D Matrix

We will be creating a 5 X 5 matrix having random numbers.
```python
import random

mat = [[random.randint(5, 100) for i in range(5)] for j in range(5)]
mat = np.matrix(mat)
print(mat)
```

```
[[ 46  13  68  54  12]
 [  7  68  32  46  26]
 [ 46  43  58  27 100]
 [ 64  59  76 100  41]
 [ 35  62  56  44   7]]
```

For instance, let us assume that we are shifting the image into Q1. The image has to move away from the Y axis along X and away from the X axis along Y, so the size of the canvas increases. Basically, we are padding the image on the left as per the x-coordinate depth and on the bottom as per the y-coordinate depth. The same has to be replicated when we shift the image into the other quadrants Q2, Q3 and Q4. In order to do so, we need to create a padding function using NumPy methods.

```python
def pad_vector(vector, how, depth, constant_value=0):
    vect_shape = vector.shape[:2]
    if (how == 'upper') or (how == 'top'):
        pp = np.full(shape=(depth, vect_shape[1]), fill_value=constant_value)
        pv = np.vstack(tup=(pp, vector))
    elif (how == 'lower') or (how == 'bottom'):
        pp = np.full(shape=(depth, vect_shape[1]), fill_value=constant_value)
        pv = np.vstack(tup=(vector, pp))
    elif (how == 'left'):
        pp = np.full(shape=(vect_shape[0], depth), fill_value=constant_value)
        pv = np.hstack(tup=(pp, vector))
    elif (how == 'right'):
        pp = np.full(shape=(vect_shape[0], depth), fill_value=constant_value)
        pv = np.hstack(tup=(vector, pp))
    else:
        return vector
    return pv
```

The above function pads the image. Its arguments are:

- `vector` → the matrix to be padded.
- `how` → one of four values that decide the side on which to pad:
    - `lower` or `bottom`
    - `upper` or `top`
    - `left`
    - `right`
- `depth` → the depth (thickness) of the padding.
- `constant_value` → the fill value; the default `0` signifies black.

Note - for padding left and right we use the method `hstack()`, and in the same way, for padding top and bottom we use the method `vstack()`. These two are NumPy methods:
- `hstack()` → horizontal stack
- `vstack()` → vertical stack

First, we create a padding matrix whose values are all zero, and based on the direction of the shift we stack it against the original. Let's test the above function.

```python
pmat = pad_vector(vector=mat, how='left', depth=3)
print(pmat)
```

```
[[  0   0   0  46  13  68  54  12]
 [  0   0   0   7  68  32  46  26]
 [  0   0   0  46  43  58  27 100]
 [  0   0   0  64  59  76 100  41]
 [  0   0   0  35  62  56  44   7]]
```

We can clearly see that the function padded the matrix on the left with depth 3. If we plot the same (convert the padded matrix into an image), we get -

```python
plt.axis("off")
plt.imshow(pmat, cmap='gray')
plt.show()
```

Whereas the original image is unpadded. With this, we can conclude the image is shifted along the X axis by the x-coordinate 3.

The same technique is applied to a real image. We shall have a function (`read_this`) to read the image in both grayscale and RGB format. Let's make another function called `shifter()` which shifts the image along the Y axis irrespective of the quadrant.

```python
def shifter(vect, y, y_):
    if (y > 0):
        image_trans = pad_vector(vector=vect, how='lower', depth=y_)
    elif (y < 0):
        image_trans = pad_vector(vector=vect, how='upper', depth=y_)
    else:
        image_trans = vect
    return image_trans
```

Now that we have the `shifter()` function, we will use it in another function that can shift anywhere in the coordinate plane. Here, we consider both the X and Y axes.

```python
def shift_image(image_src, at):
    x, y = at
    x_, y_ = abs(x), abs(y)
    if (x > 0):
        left_pad = pad_vector(vector=image_src, how='left', depth=x_)
        image_trans = shifter(vect=left_pad, y=y, y_=y_)
    elif (x < 0):
        right_pad = pad_vector(vector=image_src, how='right', depth=x_)
        image_trans = shifter(vect=right_pad, y=y, y_=y_)
    else:
        image_trans = shifter(vect=image_src, y=y, y_=y_)
    return image_trans
```

When the `x` and `y` coordinates are greater than 0, we pad the image on the left and the bottom.
When `x` is greater than 0 and `y` is less than 0, we pad the image on the left and the top.

When `x` is less than 0 and `y` is greater than 0, we pad the image on the right and the bottom.

When the `x` and `y` coordinates are less than 0, we pad the image on the right and the top.

When the `x` and `y` coordinates are exactly 0, we do not disturb the image.

There is one problem yet in translating or shifting the image. An image can be of two types - grayscale and colored. For grayscale, there won't be any problem. But for a colored image, we need to separate the R, G and B channels, apply the shift function to each, and finally combine them back. Hence the below function.

```python
def translate_this(image_file, at, with_plot=False, gray_scale=False):
    if len(at) != 2:
        return False
    image_src = read_this(image_file=image_file, gray_scale=gray_scale)
    if not gray_scale:
        r_image, g_image, b_image = image_src[:, :, 0], image_src[:, :, 1], image_src[:, :, 2]
        r_trans = shift_image(image_src=r_image, at=at)
        g_trans = shift_image(image_src=g_image, at=at)
        b_trans = shift_image(image_src=b_image, at=at)
        image_trans = np.dstack(tup=(r_trans, g_trans, b_trans))
    else:
        image_trans = shift_image(image_src=image_src, at=at)
    if with_plot:
        cmap_val = None if not gray_scale else 'gray'
        fig, (ax1, ax2) = plt.subplots(nrows=1, ncols=2, figsize=(10, 20))
        ax1.axis("off")
        ax1.set_title("Original")
        ax2.axis("off")
        ax2.set_title("Translated")
        ax1.imshow(image_src, cmap=cmap_val)
        ax2.imshow(image_trans, cmap=cmap_val)
        return True
    return image_trans
```

Now that it is all set, let's test the above function.

**For a color image**

```python
translate_this(
    image_file='lena_original.png',
    at=(60, 60),
    with_plot=True
)
```

Clearly, the image is shifted to the origin (60, 60), i.e., into the first quadrant (Q1).

**For a grayscale image**

```python
translate_this(
    image_file='lena_original.png',
    at=(-60, -60),
    with_plot=True,
    gray_scale=True
)
```

Clearly, the image is shifted to the origin (-60, -60), i.e., into the third quadrant (Q3).

Well, that's it for this article. From this, we tried to understand how the image-shifting process is done. Here I take leave. If you liked it, consider visiting this page to read more on image processing. And make sure to buy a coffee for me from here or just hit the button below.
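Two notes for anyone running this at home. First, the `read_this` helper called above is not shown anywhere in the text, so the version below is only our guess at it (assuming OpenCV for reading, since the post imports `cv2`, plus a BGR→RGB conversion for Matplotlib). Second, a tiny pure-NumPy sanity check of the shifting logic, with the helpers restated compactly so the snippet runs standalone:

```python
import numpy as np

def read_this(image_file, gray_scale=False):
    # Hypothetical sketch of the post's missing helper; assumes OpenCV.
    import cv2
    if gray_scale:
        return cv2.imread(image_file, cv2.IMREAD_GRAYSCALE)
    bgr = cv2.imread(image_file)
    return cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)  # matplotlib expects RGB order

# Compact restatement of pad_vector/shift_image from the article.
def pad_vector(vector, how, depth, constant_value=0):
    rows, cols = vector.shape[:2]
    if how in ('upper', 'top'):
        return np.vstack((np.full((depth, cols), constant_value), vector))
    if how in ('lower', 'bottom'):
        return np.vstack((vector, np.full((depth, cols), constant_value)))
    if how == 'left':
        return np.hstack((np.full((rows, depth), constant_value), vector))
    if how == 'right':
        return np.hstack((vector, np.full((rows, depth), constant_value)))
    return vector

def shift_image(image_src, at):
    x, y = at
    if x > 0:
        image_src = pad_vector(image_src, 'left', abs(x))
    elif x < 0:
        image_src = pad_vector(image_src, 'right', abs(x))
    if y > 0:
        image_src = pad_vector(image_src, 'lower', abs(y))
    elif y < 0:
        image_src = pad_vector(image_src, 'upper', abs(y))
    return image_src

# Shift a 2x2 array into Q1: one column of zeros appears on the left
# and one row of zeros on the bottom.
m = np.array([[1, 2], [3, 4]])
shifted = shift_image(m, at=(1, 1))
print(shifted)
```

The checks are cheap and catch the most common mistake here, which is padding on the wrong side for a given sign of the offset.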
https://msameeruddin.hashnode.dev/image-shifting-using-numpy-from-scratch
THE DELPHOS HERALD
50¢ daily, Saturday, September 1, 2012
Telling The Tri-County's Story Since 1869 — Delphos, Ohio
Wildcats win in offensive explosion, p6

The Delphos Herald will not publish on Labor Day, Sept. 3. An edition will be printed on Tuesday, Sept. 4.

UPFRONT

Choir selling mums
The Jefferson High School choir is selling mums Tuesday through Sept. 13. The nine-inch pots with 15- to 16-inch foliage come in red, white, yellow and purple. The cost is $10 each. Contact any choir member or Director Tammy Wirth at any of the school buildings or at twirth@dl.noacsc.org. Pick up will be from noon to 6 p.m. Sept. 21 at the high school.

Canal cleanup set Sept. 8
The Delphos Canal Commission and the Ohio Divisions of Canals have scheduled a canal cleanup from 8:30-11:30 a.m. on Sept. 8. Organizations and volunteers are asked to register at the Hanser Pavilion in Stadium Park to be assigned a designated area. Residents around the canal are asked to refrain from placing grass clippings and limbs along or inside the canal. All citizens are asked to spruce up the city for the upcoming Canal Days celebration.

Tickets on sale for hospice event
Community Health Professionals of Delphos will hold its annual Hospice "Beacon of Hope" Dinner/Auction at 6 p.m. on Sept. 26 at the Delphos Eagles. The evening features a meal, silent and live auctions and the honoring of families served by the local visiting nurses and hospice agency. Raffle drawings for $250, $100 and $50 are also part of the event. Raffle tickets are $1 each or six for $5. Dinner tickets are on sale now: $20 each, $100 for a table of six and $140 for a table of eight. Dinner and raffle tickets are available now at 602 E. Fifth St. Proceeds benefit CHP's patient care fund. As a nonprofit organization, CHP works with patients and families regardless of their financial situation; its patient care fund helps fill the gap for uncovered costs. For more information, call 419-695-1999 or visit comhealthpro.org.

FORECAST
Seventy percent chance of showers and storms tonight and 80 percent on Sunday. High in low 80s.

INDEX
Obituaries 2, State/Local 3, Politics 4, Community 5, Sports 6-7, TV 11, Summer Fun/World News 12

OTTOVILLE PARK CARNIVAL FUN
This year's park carnival started Friday night with an array of events. Ottoville second grader Hannah Brinkman, above, puts a pie in the face of High School principal John Thorbahn while the Rev. John Stites watches with delight. An adult tricycle race also took place; Kyle Bendele of the Millie's Cafe Team was one of the many to enjoy acting like a kid again. See page 12 for the schedule of events. (Photos by Dena Martz)

Rahrig retires from high school after 20 years
BY ALEX WOODRING

After getting a new school year started, Jefferson High School Secretary Jane Rahrig retired Friday after 20 years at the school. (Alex Woodring photo)

DELPHOS — For twenty years, Jane Rahrig has been the secretary at Jefferson High School. Friday marked the end of her long career at Delphos City Schools. However, the high school is losing more than just a secretary.

"She wore many hats," said Elaine Rode, co-secretary and Rahrig's replacement. "She is just wonderful and we are going to miss her so much. She does so much for this school."

Rode was not the only one to display affection for Rahrig. The office was abuzz with grateful visitors and co-workers. Guidance Counselor Martin Ross was one of many grateful to have worked with Rahrig. "Jane has been a blessing to me and this entire office and this entire school. She has been a strong support and cares so much about our students. She is the best," Ross stated.

As Rahrig was showered with grateful praise, principal John Edinger reflected on her impact. "She is wonderful," said Edinger. "Today we lose a part of Delphos Jefferson High School."

Everyone in the office mentioned her positive attitude and persistent smile. "She was great with angry parents and angry students while never complaining," said Edinger. "She also boosted my morale. She helped me out and made me look good."

A woman of few words on such an emotion-filled day, Rahrig left DJHS with very high regard for her coworkers. "The people here are what have made it so wonderful. Everyone from the students to the staff have made Delphos Jefferson such a great place to work," said Rahrig. As she gazed at the many flowers that hid her desk, she whispered with a smile, "This is a nice place."

Barclay leaves fair on high note
BY STACY TAFF
staff@delphosherald.com

LIMA — Eighteen-year-old Jordan Barclay has been showing pigs at the Allen County Fair for three years but this is only his second year showing chickens. He says he was taken by surprise when he won Grand Champion in the Meat Pen Chicken show.

"I was really surprised; it kind of caught me by storm but I was really happy," he said. "This year, I improved a lot on my feeding times and just how well I was taking care of them and keeping them clean. I managed to keep them at a good weight, too; none were over the eight-pound limit."

Barclay says the hardest part of showing is staying calm and making sure everything is ready beforehand. "The difficult part is just making sure the animals are fed, watered and clean and also making sure I'm wearing what I need to wear," he said. "Another hard part is keeping myself calm and being patient with the animals so they stay calm. If I'm calm and they're calm, we work better together."

Once everything is ready for the show, Barclay can enjoy himself. "My favorite part is definitely the show itself," he said. "I enjoy being able to show the judges what I have, what I can do and how well I've taken care of my animals. I like being able to show them what I can bring to the agricultural industry."

Since this is Barclay's last year at the fair, he's happy to leave on a high note. "I really don't think I could've done anything better this year because I did the best I could," he said. "I won't miss all of the work I had to do but I will definitely miss the atmosphere and getting a chance to see all of the other people showing. I'll miss just being at the fair."

Barclay will be a senior at Jefferson High School this year. He is the son of Sue and Randy Barclay of Delphos.

Delphos moving toward statewide radio system
BY MIKE FORD
mford@delphosherald.com

DELPHOS — Across the state, police and fire departments are transitioning from traditional analog radios to a digital system. Under the Multi-agency Radio Communication System, emergency management and first responders in Allen County will be able to communicate with colleagues in Cleveland, for example. The Ohio system will also be compatible with those of nearby states, so locals will be able to talk with first responders in Indiana if they need to.

Delphos has the radios but is one of the very few county entities yet to switch over because it will eventually cost nearly $10,000 per year. Under a federal mandate and (See RADIO page 2)

HIGH SCHOOL SCOREBOARD
Jefferson Paulding Ada Spencerville Elida Wapak 63 Carey 34 Bluffton 35 Coldwater 0 Hicksville 21 Col. Grove 14 Allen East 26 Ft. Recovery 30 0 Wayne - Goshen 27 44 Marion Local 44 0 West Jefferson 37 40 48 Versailles 26 Graham Local 7

Another year, another…

Another year has passed and another candle was added to my cake. I don't mind. I still enjoy birthdays and look forward to them. The alternative is just no good. Not reaching the next birthday, I mean.

I looked back on the year and have some regrets, some triumphs, good memories and a few things I wish never happened. It was the usual year.
NANCY SPENCER
On the Other Hand

I made my share of mistakes and hopefully learned from each and every one. My only goal is to grow as a person, as a human being, as a wife, as a mother, as a sister, as a daughter and as a friend. Some areas came out OK; others still need work. There'd be no point if we were already perfect when we got here. Where do you go from there? What would be the point?

I pretty much learn something new every day. It may not be earth-shattering or a mind-blowing revelation but I learn, nonetheless. I know the capacity for human beings to be kind and cruel never ceases to amaze me. We read about it every day. I just keep hoping the kindness outweighs the other. I know that if we don't strive to be nice to each other and look out for each other, somewhere down the line it's trouble for us all.

It's easy to get caught up in the rush of everyday life and responsibilities and let time get away and not stop and do the little things we need to, like call our parents or stop by a friend's home just to chat. We get busy and loved ones hear from us less than either of us would like. We say there's no time but we have more control over that than we either let ourselves believe or allow.

I do know that the years seem to go by pretty quickly and everyone who is older than I am says it only gets worse the more candles are on that cake. School has started and soon it will be Halloween. Then comes Thanksgiving, Christmas, the new year starts and so on and so forth. We all know how it goes; we just need to learn to slow it down.

Well, would you look at that. It's the day after my birthday already. Only 364 days to go. I'm sure it will be here before I know it. I hope I listen to my own advice and find that balance between the hustle of everyday life and those moments we spend with others that get us through the rest. I know I'm going to enjoy the weekend with my husband and friends and make some of those memories I was talking about.

RADIO
(Continued from page 1)

technological shifts, however, the transition can be prolonged but not avoided.

"We are moving toward it and are meeting next week to talk about it but the ongoing cost is the issue. We're concerned about that but, at the same time, interoperability is mandated by the Federal Communications Commission and the State of Ohio has already submitted its interoperability plan," Safety Service Director Greg Berquist said. "Allen County went in with some others and got an emergency management grant, so we got the radios for free but there is a cost-of-use agreement we have to pay. We have 40 radios and it will cost $20 per month for each radio and that will begin in either 2013 or 2014. For now, we only pay the monthly fee when we use them and most of our radios are dormant."

Berquist said the $9,600 will come from the general revenue fund and no new tax will be needed to raise money for the radios.

FOR THE RECORD

FUNERAL
MERICLE, Ronald D., 77, of Lima: funeral services will begin at 4 p.m. Sunday at Chiles-Laman Funeral and Cremation Services' Shawnee Chapel, the Rev. David Howell officiating. Burial of the cremated remains will be at a later date in Walnut Grove Cemetery. Friends may call from 4-8 p.m. today and 2-4 p.m. Sunday at the funeral home. Memorial contributions may be made to the American Heart Association.

LOTTERY
CLEVELAND (AP) — The winning numbers in Friday evening's drawing of the Ohio Lottery:
Pick 3: 8-7-0
Pick 4: 4-5-5-0
Pick 5: 4-2-4-9-0
Rolling Cash 5: 14-15-18-21-25

The Delphos Herald
Nancy Spencer, editor
Ray Geary, general manager
Don Hemple, advertising manager
Tiffany Brantley, circulation manager
Delphos Herald, Inc. — Vol. 143, No. 58

Armstrong mourned as humble hero
By DAN SEWELL
Associated Press

Cartel suspect extradited
SAN DIEGO (AP) —

SCHOOL MENUS
Week of Sept. 4-Sept. 7

Delphos St.
John's
Monday: No school
Tuesday: Corn dog, broccoli, romaine salad, applesauce, fresh fruit, milk
Wednesday: Tenderloin sandwich, creamed rice, romaine salad, pineapple, fresh fruit, milk
Thursday: Chicken and noodles with roll, sweet potatoes, romaine salad, pears, fresh fruit, milk
Friday: Tacos (soft or hard) with lettuce/tomato/cheese/onion, black beans, romaine salad, strawberries, fresh fruit, milk

Delphos City Schools
Monday: No school
Tuesday: Soft taco, lettuce and cheese, refried beans, carrot stix, Mandarin oranges, milk
Wednesday: Cheese pizza, tossed salad, fruit, milk
Thursday: Chicken fingers, bread and butter, green beans, pineapple tidbits, milk
Friday: Ham patty sandwich, broccoli with cheese, apple wedges, milk

Ottoville Local Schools
Monday: No school
Tuesday: WG pizza, romaine blend salad, applesauce, milk
Wednesday: Chicken patty on WG bun, tater tots, cooked carrots, pineapple, milk
Thursday: Chicken pot pie, breadstick, brownie, mixed fruit, milk
Friday: Hot dog or chili dog, WG mac and cheese, green beans, peaches, milk

Fort Jennings Local Schools
Monday: No school
Tuesday: Sloppy joe sandwich, corn, shape-up, fruit
Wednesday: Chicken strips, baked beans, dinner roll, fruit
Thursday: Chicken quesadilla, green beans, refried beans, fruit
Friday: Pizza burger, carrot and celery sticks, pretzels, fruit

Landeck Elementary
Monday: No school
Tuesday: Hot dog sandwich, potato rounds, fruit, milk
Wednesday: Breaded chicken nuggets, butter/peanut butter bread, corn, fruit, milk
Thursday: Pizza, peas, fruit, milk
Friday: Macaroni and cheese, butter/peanut butter bread, lettuce salad, fruit, milk

Spencerville Schools
Monday: No school
Tuesday: Hot dog sandwich, spiral fries, applesauce, milk
Wednesday: Doritos taco salad, lettuce and cheese, salsa and sour cream, cinnamon breadstick, peaches, milk
Thursday: Ham and cheese bagel, carrots and broccoli, veggie dip, mini muffin and banana, milk
Friday: Salisbury steak, mashed potatoes, gravy, roll, applesauce, milk

Elida
Monday: No school
Tuesday: Cheese bread sticks with sauce, fresh broccoli with dip, diced pears
Wednesday: Beef soft taco with lettuce, cheese and salsa, refried beans, cinnamon applesauce, fresh fruit, mini bread stick, milk
Thursday: Grilled chicken sandwich, curly fries, diced peaches, fresh fruit, milk
Friday: Popcorn chicken with dip, California blend veggies, applesauce, fresh fruit, brownie bar, milk

ODOT REPORT

Allen County
Phase 1 of a three-phase project will reconstruct Interstate 75 from the Auglaize County line to just north of Ohio 81, including the city of Lima. Work includes drainage work and paving on the ramps.

Bryn Mawr Road from Reservoir Road to Elm Street is also closed from May 1 until late fall. Traffic on Interstate 75 in the area of the bridge is maintained, two lanes in each direction, with occasional nighttime lane closures necessary at times.

The entrance ramp from Bluelick Road to southbound Interstate 75 will be closed for the day on September 14 for bridge inspection and guardrail repair work. Traffic will be detoured onto Interstate 75 northbound to Napoleon Road and back to Interstate 75 southbound.

Ohio 65 from Ohio 115 to Columbus Grove is restricted to one lane through the work zone for a pavement repair and resurfacing project which will continue through October.

U.S. 30 from Ohio 65 to Ohio 696 is restricted to one lane through the work zone for a pavement repair and resurfacing project which will continue through November. All lanes will be open for the Labor Day holiday.

Putnam County
Ohio 65 at the south edge of Ottawa will be restricted to one lane through the work zone for a project adding turn lanes at the Williamstown Road intersection. Work will continue through mid-November.

Ohio 634 between U.S. 224 and Ohio 613.

Van Wert County
U.S. 30 east of Van Wert will be restricted to one lane through the work zone for pavement and joint repair.

Answers to Friday's questions: Malta won its freedom from England in 1964.
Indonesia was the first nation to ever resign from the United Nations.

Today's questions: What was notable about Merlin's aging process? What New York political figure was Miss America in 1945? Answers in Tuesday's Herald.

Today's words:
Immiscible: things that cannot be mixed, like oil and water
Smaragdine: pertaining to emeralds

BRIEFS

Retired teachers meet Sept. 5
The Putnam County Retired Teachers Association will meet at 11:30 a.m. on Sept. 5 at Hillside Winery, 221 East Main Street in Gilboa. Reservations and payment need to be sent by Aug. 25 to Treasurer Charlotte Ellis at 127 East Laura Lane, Ottawa, OH 45875. A 50/50 raffle will be held and items for bingo prizes will be collected at the meeting.

Educating teens about organ, eye and tissue donation
Cathi Arends

Donate Life Ohio is proud to announce the Together We Can Save Lives resource kit, designed to help high school teachers educate students about organ, eye and tissue donation. When asked "Do you want to register as a donor?" by the Bureau of Motor Vehicles, teens can then make an informed decision. Donate Life Ohio is made up of the recovery organizations serving Ohio and is offering the kits to high schools statewide to enhance classroom learning about donation.

The Donate Life Ohio Educator Resource Kit includes a comprehensive educator's guide containing everything needed to educate students about donation, plus a video and resources DVD. "Legacy of Life - a Story of Teen Heroes" is a documentary-style video that tells the real-life story of one Ohio family's experience with donation. It also provides statistics, facts and medical straight talk about how donation really works.

The kit makes it easy to incorporate donation education into lesson plans and to meet school district educational objectives. Donation can easily be tied to a variety of subjects including biology, health, math, statistics, consumer sciences, government and language arts.

A handout card goes home with students so they can talk about donation with their families. Movies, TV and the internet often portray donation incorrectly. Because teens are a target audience for these media outlets, it is important for them to learn the facts about donation.

An individual 15 1/2 or older with a driver's license, permit or state ID can join the Ohio Donor Registry and give legal authorization for organ and tissue donation upon death. For minors between 15 1/2 and 18 years of age, parents can revoke or amend the donation at the time of death. Because parents play such a critical role in the donation process, it is vitally important that students discuss their decision with their parents.

Ohio high school educators are encouraged to contact the community education staff for Donate Life Ohio to schedule a visit to their school. Go to the Donate Life Ohio website to schedule a free, non-persuasive program for students.

Convoy readies for town festival

Convoy Community Days will be celebrated September 21-23 at the Convoy Edgewood Park. Many events are being planned for the weekend festival for all ages. Shuttle service will be offered on Saturday beginning at 3 p.m. at the back of the village parking lot at Sycamore Street, with pickup and drop-off every half hour at the Park Building until 5 p.m. Log onto villageofconvoy.com for a complete schedule of events.

Lions Club bingo: Friday and Saturday night, "Bingo in the Building," from 6:30-9 p.m.

Convoy fire and EMS pancake and sausage breakfast: Saturday from 7-9:30 a.m. at the fire station.

Coed and men's softball tournament: Men's will be held on Saturday and coed on Sunday at the Convoy Edgewood Park ball fields. Contact Tim Bolenbaugh at 419-749-2525 for more information.
Children's activities include: kiddy tractor pull in the parking lot at 10 a.m. Saturday for ages 3-8; First Aid for Little People at 10 a.m. in the tent; kids' games by Convoy Preschool beginning at 10 a.m. in the tent on the tennis court; Barrel Train Express rides; and pony rides 3:30-6:30 p.m. A children's wiffleball tournament for ages up to 15 years old runs all day on Saturday; contact Steve Richardson for more information at 419-513-1147.

"Crestview: Carrying on a winning tradition" will be the theme for the parade on Saturday at 2 p.m. The Crestview Lady Knights, winners of the 2012 state softball championship, will be the parade grand marshals. Lineup will be at the Crestview Schools at 1 p.m. Registration forms are available at villageofconvoy.com.

Mark's Ark Animal Show will take place Saturday at 4 p.m. on the tennis court. Mark is an expert animal handler; his critter knowledge matched with humor and wit makes for an exciting one-hour program. A few of the critters that will be visiting Convoy Community Days are snakes, lizards, tortoises, a hedgehog, some creepy-crawlies, a toad, birds and frogs.

Truck pull: Saturday at 6 p.m. Check out the web page for rules and regulations. Contact C.W. Harting at 419-203-2117 or Emmett Minnich at 419-749-4135.

Cornhole tournament: Saturday, with registration at 2:30 p.m. at the truck pull track at Edgewood Park. Entry fee is $20 per two-person team, double elimination, with two divisions ("under 18" and "open") and two awards per division. For questions contact Linda Clay at 419-203-3546.

5K run/walk: Sunday, with registration at 12:30 p.m. and the race at 1:30 p.m. It will be a 3.1-mile loop that starts and finishes at Edgewood Park. Trophies for the top two overall male and female runners; medals for the top two in each age group. Pre-registration is $12 with a T-shirt or $10 without. Call Cary Mathew at 419-749-2651.

A reverse raffle will be held on Sunday at the Edgewood Park Community Building beginning at 1 p.m., with a grand prize of $850. Only 200 tickets will be sold. You need not be present to win, except to win "double" if present at the drawing. Tickets are $15 each and may be purchased from the Tavern, Secret Garden Floral & Gifts, The SophistiCut, the Convoy Village Office, Thatcher Kulwicki Insurance Agency, Knight Pizza or Remdy Sports Bar & Grill, or call Amy Mengerink at 419-749-4007.

OTTOVILLE PARK CARNIVAL
Crowds at Friday night's festivities at the Ottoville Park Carnival bet on the "money wheel" under a tent amid temperate weather conditions. (Photo by Dena Martz)

STATE/LOCAL

Miami dean says he was out of job
OXFORD (AP) —

Judge restores 3 early voting days
COLUMBUS —

Ohio could spend $3.5M on animal facility
By MITCH STACY
Associated Press

1st death linked to new swine flu is Ohioan, 61
COLUMBUS —
POLITICS

"The most dangerous of all falsehoods is a slightly distorted truth." —Georg Christoph Lichtenberg, German scientist (1742-1799)

IT WAS NEWS THEN

One Year Ago
• The Van Wert County Fair opened Wednesday for the 155th time, keeping alive many of the familiar traditions and beginning a few new ones. This fair features a newer, shorter schedule. "We've shortened the fair by two days, taking the last two days off of it," explained Fair Board President Dave Evans.

25 Years Ago — 1987
• Salem Presbyterian Church of Venedocia will sponsor its annual gymanfa ganu Sept. 6. This year's festival of song will be directed by Bartle Jones from St. Louis, Mo., formerly of Venedocia. An active program of handbell ringing choirs was begun by Bartle when a set of 24 bells was given to the Venedocia church by Bess (Morgan) Wilson.
• Mr. and Mrs. Robert Bigelow Sr. of Fort Jennings attended the 69th national convention of the American Legion last week in San Antonio, Texas. Bob presently is the second division commander of the first district of Ohio and was selected as a delegate to represent Ohio at the convention.
• Spec. 4 Michelle L. Wagner, daughter of Sandra L. and David L. Wagner of Delphos, has been decorated with the Army Achievement Medal in South Korea. Wagner is a data telecommunications operator with the 275th Signal Company. She is a 1985 graduate of Jefferson High School.

50 Years Ago — 1962
• Norman Schulte of Fort Jennings was one of 11 young men recently graduated from the Ohio Highway Department technician training program. The 11 were assigned to highway department work Monday. Schulte and five others were appointed to Division One, Lima. The technician training program trains young men in algebra, trigonometry, engineering drawing, surveying, highway materials and professional ethics.
• Plans to keep the municipal swimming pool open on Saturdays and Sundays throughout the month of September were announced by Calvin Fox, recreation director. Pool patrons are asked to come to the pool dressed in their swimming garments because the dressing rooms will be in use by the local football teams.
• As a result of semi-final wins Thursday night, Tom & Lou will meet German's Shell for the championship of the third annual local invitational slo-pitch tournament. Tom & Lou blanked Ottawa Bert's Bar, 9-0, and German's Shell topped Lima UAW 1219, 9-3. Tom & Lou is the league, city tournament and district tournament winner, and German's Shell is the defending Delphos invitational tournament winner.

75 Years Ago — 1937
• Gramm Motors, Inc. of Delphos is enjoying a good business at the present time and deliveries are being made almost daily to many places in the United States and foreign countries. This week four trucks will be shipped to Nigeria, West Africa; two to Bangkok, Siam; and two to Bogota, Colombia, South America. Last week four trucks were shipped to Yugoslavia.
• Miller's Opticians were acclaimed the softball champions of Delphos Tuesday night by virtue of their 5-4 defeat of Coombs Shoes. It is the second straight year that Miller's has won the city championship. W. Briggs was on the mound again Tuesday night. He gave up seven hits during the contest and issued no free passes.
• Veronica King entertained the members of the Delta-Game-a Bridge Club and one guest Tuesday night at the home of her aunt, Clara Eickholt, West Third Street. In bridge, Mrs. Alfred Weisgerber held high score and Helen Stallkamp was second high. In two weeks, Mrs. Donald Imber, State Street, will receive the club into her home.

Welcome to one of the last vestiges of summer – Labor Day Weekend.
This weekend has a long history of grunts and groans as schools are back in session, community pools are closed for the season, and many are trying to get in just a few more rays to keep that summer tan. Many businesses are closed today through Monday, including the post office.

Labor Day has a long history, and postal employees and their unions have played a very interesting role in shaping the United States Postal Service, once referred to as the Post Office Department. The national holiday of Labor Day was voted into law in June of 1894. Prior to that date, several state legislatures had taken the stance that it should be a time to celebrate the accomplishments of the American worker and the roles played by the labor union organizations.

The vital force of labor has brought America a high standard of living and has brought us closer to the realization of our traditional ideals of economic and political democracy. It is appropriate, therefore, that the nation pays tribute on Labor Day to the creator of so much of the nation’s strength, freedom, and leadership — the American worker.

During the recent budget constraints placed on the United States Postal Service, several pieces of legislation have been introduced that, if enacted, would significantly curtail the role played by postal unions. That role was defined following the postal workers strike in 1970 by the Postal Reorganization Act, which Richard M. Nixon signed into law on July 1, 1971. On that same date the American Postal Workers Union (APWU) was founded. This labor organization was the result of a merger of five postal unions: the United Federation of Postal Clerks, the National Postal Union, the National Association of Post Office and General Service Maintenance Employees, the National Federation of Motor Vehicle Employees and the National Association of Special Delivery Messengers. In 2007, the National Postal Professional Nurses labor organization merged with the APWU.
The most significant provision of the reorganization act instituted collective bargaining between the USPS and its affiliated unions. An opinion expressed by the Supreme Court of Canada captures how labor unions have felt concerning the role of collective bargaining.

But the role that unions have played with the Post Office Department dates back almost 100 years prior to the reorganization act. In 1863, Cleveland, Ohio, saw the beginnings of city delivery by letter carriers. The organization to represent these employees was formed twenty-six years later. It was known as the National Association of Letter Carriers. In 1896, the post office began an experiment with rural delivery with just five routes. Seven years later, there were 15,119 routes nationwide. These employees would be represented by the National Rural Letter Carriers’ Association, which was formed in 1903. 1912 saw the formation of the National Mailhandlers Union, which represents another 48,000 employees of the 560,000 on the rolls today.

So enjoy your Labor Day weekend even though it usually means the beginning of the end of summer. Take this opportunity to grab one of the few remaining seats for our next excursion to New York City. All plans need to be finalized by the end of next week, so don’t be left out. Spend five glorious days enjoying the greatest city in the world – the one that never sleeps. Call 419-303-5482 for more information.

JUST A THOUGHT by Sara Berelsman: Survivors shouldn’t feel guilty about loved one’s suicide. Call 567-HOPE at any time, and the We Care Crisis Center, located at 797 S. Main St. in Lima, is open 24/7, no appointment necessary (www.wecarepeople.org). The Summit on Suicide Awareness and Prevention.

KATHLEEN PARKER, Point of View — TAMPA, Fla.: Ryan: Who, me?

COMMUNITY LANDMARK: Clark Mansion, Van Wert

CALENDAR

TODAY
9 a.m.-noon — Interfaith Thrift Store is open for shopping.
1-3 p.m. — The Delphos Canal Commission Museum, 241 N. Main St., is open.

MONDAY
Happy Labor Day!

TUESDAY
11:30 a.m. — Mealsite at Delphos Senior Citizen Center, 301 Suthoff Street.
7 p.m. — Delphos City Council meets at the Delphos Municipal Building, 608 N. Canal St.
Delphos Coon and Sportsman’s Club meets.
Al-Anon Meeting for Friends and Families of Alcoholics at St. Rita’s Medical Center, 730 West Market Street, Behavioral Services Conference Room 5-G, 5th Floor.
7:30 p.m. — Alcoholics Anonymous, First Presbyterian Church, 310 W. Second St.

EVENTS
Vonderembses hold 41st family reunion (Photo submitted)
The 41st Ed and Nellie Vonderembse family reunion was held recently at Waterworks Park. Among those attending were Dr. Charles and Sheila Vonderembse of Columbus; Cindy, Matt, Allie, and Lindsay Kostoff, David, Maria and Zach Powley of Fort Wayne; Andy and Carolyn Vonderembse, Ricky Munoz, Michelle and Landon Smythe, Jeremy, Talynn, Callie and Kane Garber of Van Wert; Norma Vonderembse, Bob and Donna Holdgreve, Chrissy, Hannah and Halle Elwer of Delphos; Mike and Beth Mathews of Dayton; Marie Vonderembse, Georgine Vonderembse and Janet Knupp of Lima. Also attending were Greg and Becky Touchman of Westerville; Rob, Marsha and Jordan Winkler of Union; Ellen and Erin Drwal of Seven Hills; Scott and Vicky Vonderembse of Fort Jennings; Joanne and Austin Horstman, Vincent and Carol Verhoff of Kalida; and Mark and Gayle Vonderembse of Perrysburg. Next year’s host will be Georgine Vonderembse of Lima.

PET CORNER
The Humane Society of Allen County has many pets waiting for adoption. Each comes with a spay or neuter, first shots and a heartworm test. Call 419-991-1775.

Happy Birthday
SEPT. 2: Chandler Clarkson, Kim (Kohorst) Bickford, Michael Grubenhoff, Megan Tracy
SEPT. 3: Sherrie Looser, Caitlin D. Redmon, Mike Minnig, Patrick Kundert, Russell Craig
SEPT.
4: Hayley Jettinghoff, Scott Siefker, Karen Sendelbach, Sarah Stemen, Rose Moore, Kurt Bonifas, Todd Rittenhouse, Katherine Watkins, Madison Jettinghoff, Michelle Lindeman

Miko has a sweet disposition to match his sweet face. He’s a gentle giant with a large head that’s perfect for petting. He walks well on a leash and knows how to sit on command. If you like a big, solid dog with a calm demeanor for his age, Miko is your guy.

Nellie is a black kitty who is playful and sweet. She needs a home where she can take her time getting to know the family. She is a gorgeous house panther that needs a gentle home to feel safe and loved in.

The following animals are available through the Van Wert Animal Protective League:
Cats: M, F, 7 years, fixed, front dew clawed, grey, longhaired tiger; F, 1 year, fixed, front dew clawed, black, long haired, named Lily; M, 5 years, fixed, gray, named Shadow; F, 1 year, gray tiger.
Kittens: M, F, 3 months, black with white spots, black and white, gray tiger, rusty, calico tiger; M, 6 months, orange and white, named Ziggy.
Dogs: Blue Heeler Beagle, F, 3 years, fixed, named Sadie.

50th Annual Ottoville Park Carnival — “Always Labor Day Weekend” — Friday, August 31st, Saturday, September 1st & Sunday, September 2nd.
FREE LIVE ENTERTAINMENT
FRIDAY, AUGUST 31st — 9:00 p.m. to midnight: Ohio’s Finest Live Rock Party Band, Brother Believe Me
50’s & 60’s Dance; Tractor Square Dancing, 8:00 p.m. to 11:00 p.m.
SATURDAY, SEPTEMBER 1st / SUNDAY, SEPTEMBER 2nd
4:00 p.m. to 7:00 p.m.: Downtown Delphos Polly Mae; 9:00 p.m. to midnight

In Celebration of our 25th Anniversary, Delphos Animal Hospital is proud to sponsor a PET ADOPTATHON — Saturday, Sept. 22 • 1-4 p.m. at Delphos Animal Hospital, 1825 E. Fifth St. • 419-692-9941. Are you looking for a pet? We want to “give back” to those who give so much to animals and people. Plan to attend our 25th anniversary celebration and help us find homes for 25 pets in need.
Learn more about and donate to these important organizations that will be in attendance at our PET ADOPTATHON.
Meals ‘til Monday provides nutritional, kid-friendly meals for children whose primary source of food is the school cafeteria.
Humane Society of Allen County’s goal is to find loving, lifelong homes for Allen County’s homeless animal population.
Challenged Champions Equestrian Center supports special needs adults and children through horseback riding and horse related activities that promote physical, emotional and mental development.
Deb’s Dog Rescue depends on donations and adoption fees to fund veterinary care. Deb cares for and places animals that have been neglected, abused or injured.
Allen County Dog Control Department (Dog Pound) is in charge of enforcing dog control laws in a consistent and efficient manner, always sensitive to the rights and welfare of Allen County residents as well as the humane treatment of dogs.

‘Cats run over Panthers in NWC opener
The Delphos Herald

DELPHOS – Zavier Buzard ran for 385 yards on 34 carries and Quinten Wessell added 162 yards on 14 tries as the Jefferson Wildcats rolled to a 63-34 win over Paulding in Northwest Conference action at Stadium Park Friday night. The Wildcats took control of the game on their opening possession and never looked back. Jefferson finished the night with 545 rushing yards and 710 total yards of offense in moving to 2-0 overall and 1-0 in the Northwest Conference.

“I have to give our offensive line a lot of credit,” noted Wildcat head coach Bub Lindeman after the contest. “We felt like we had an advantage with our whole line back from last year and they did a good job of going out and doing exactly what we had hoped. They did a great job of blocking up front and then our running backs exploded through the holes.”

Buzard finished the night with five touchdown runs while Wessell added two more. It was a dominating performance offensively for the Wildcats.
“We feel like we have two very good running backs,” Lindeman continued. “Wessell is tough to bring down and runs so hard as a fullback and then Buzard is a very good tailback. But it all comes down to the play up front and they did a great job tonight.”

The Wildcats ran their first offensive series and marched 84 yards in 11 plays, consuming 5:22 off of the clock before scoring the game’s first points on a 3-yard touchdown run from Buzard.

“I thought we came out and established our running game right away and that is who we are,” noted Lindeman. “We want to be able to run the ball and we were aggressive in doing so tonight.”

Jefferson widened the margin to 14-0 early in the second stanza. Buzard capped a 7-play, 66-yard drive with an 18-yard scamper at the 10:38 mark. Paulding would get on the scoreboard on its ensuing possession. Taking advantage of a 43-yard kickoff return by James Brown to go along with a Wildcat facemask penalty, the Panthers started their drive at the Jefferson 35.

SPORTS

Jefferson senior fullback Quinten Wessell goes through a Paulding Panther on this gain down the sideline in the second period Friday night at Stadium Park. In a Wildcat offensive explosion, Wessell ran for over 100 yards as the Wildcats pummeled the Panthers 63-34.

Seven plays later — facing a fourth-and-goal at the 1 — quarterback Julian Salinas hooked up with Javier Gonzales for the toss to make it 14-7. But the Wildcats weren’t done. On the ensuing possession, junior signal-caller Austin Jettinghoff found junior Tyler Mox wide open in the corner of the end zone for a 21-7 Wildcat advantage with 3:54 left in the half. Jefferson then pushed the margin to 28-7 with 43 seconds left in the second quarter. Jettinghoff hooked up with Ross Thompson for a 53-yard scoring toss that turned all momentum to the Wildcats’ favor heading into the locker room.

“We felt like we had really started to wear them down at halftime,” added the Wildcat mentor.
“They appeared to be tired and our offense was able to move the ball consistently.” The second half was a scoring fest for both teams, with the two squads combining for 62 points in the final two quarters. Wessell broke free for a 51-yard touchdown run at the 5:40 mark as Jefferson widened the margin to 35-7. Paulding answered with a 46-yard touchdown pass from Salinas to Logan Doster with 3:17 left in the quarter before Jefferson responded. On the Wildcats’ first play from scrimmage of the ensuing drive, it was Buzard who scampered 69 yards to put the home team back on top 42-14. The junior tailback then added another score with 20 seconds left, sprinting away from the Panther defense for a 64-yard touchdown run that gave Jefferson a 49-14 advantage. James Brown returned the ensuing kickoff 86 yards for a Panther touchdown but the Wildcats weren’t done. Buzard picked up an 18-yard run for a score with 10:22 left in the contest for a 56-20 Jefferson lead. Salinas connected with Lance Foor on a 3-yard touchdown pass to get Paulding back within 56-28 before Jefferson picked up its final score of the night. Wessell plunged in from a yard out to wrap up the Wildcat scoring with 3:26 left. Salinas picked up his fourth touchdown pass of the contest on a 28-yard completion to Gonzales to round out the scoring. “It’s always good to start out league play with a win but we have to come back ready to play next week,” concluded Lindeman. Jettinghoff finished the night 7-of-10 through the air for 165 yards. Thompson recorded five receptions for 129 yards to lead the Wildcats. Salinas ran for 98 yards on 15 carries to lead the Paulding ground attack, with Doster adding 69 yards on nine tries. Salinas also was 13-of-28 through the air for 160 yards. Foor picked up four receptions for Paulding while Gonzales and Kaleb Hernandez had three each. 
Paulding returns to Northwest Conference action on Friday as the Panthers welcome in Lima Central Catholic. Jefferson also stays in league play as the Wildcats make the trip to Bluffton.

Delphos Jefferson 63, Paulding 34
Scoring Summary:
Jefferson – Zavier Buzard 3 yd. run (Austin Jettinghoff kick), 4:15 1st.
Jefferson – Zavier Buzard 18 yd. run (Austin Jettinghoff kick), 10:38 2nd.
Paulding – Julian Salinas 1 yd. pass to Javier Gonzales (Tyler Ash kick), 7:22 2nd.
Jefferson – Austin Jettinghoff 19 yd. pass to Tyler Mox (Austin Jettinghoff kick), 3:54 2nd.
Jefferson – Austin Jettinghoff 53 yd. pass to Ross Thompson (Austin Jettinghoff kick), :43 2nd.
Jefferson – Quinten Wessell 51 yd. run (Austin Jettinghoff kick), 5:40 3rd.
Paulding – Julian Salinas 46 yd. pass to Logan Doster (Tyler Ash kick), 3:17 3rd.
Jefferson – Zavier Buzard 69 yd. run (Austin Jettinghoff kick), 2:58 3rd.
Jefferson – Zavier Buzard 64 yd. run (Austin Jettinghoff kick), :20 3rd.
Paulding – James Brown 86 yd. kickoff return (run failed), :04 3rd.
Jefferson – Zavier Buzard 18 yd. run (Austin Jettinghoff kick), 10:22 4th.
Paulding – Julian Salinas 3 yd. pass to Lance Foor (Julian Salinas pass to Lance Foor), 8:23 4th.
Jefferson – Quinten Wessell 1 yd. run (Austin Jettinghoff kick), 3:26 4th.
Paulding – Julian Salinas 28 yd. pass to Javier Gonzales (pass failed), 1:07 4th.

Team Statistics (Paulding / Jefferson):
First Downs: 14 / 19
Rushing Attempts – Yards: 27-197 / 50-545
Passing Yards: 160 / 165
Total Offense: 357 / 710
Pass Completions – Attempts: 13-28 / 7-10
Had Intercepted: 1 / 0
Fumbles – Lost: 1-0 / 0-0
Penalties – Yards: 2-10 / 6-58

Individual Rushing: Paulding – Julian Salinas 15-98, Logan Doster 9-69, James Brown 3-30. Jefferson – Zavier Buzard 34-385, Quinten Wessell 14-162, Kurt Wollenhaupt 2-(-2).
Individual Passing: Paulding – Julian Salinas 13-28-160. Jefferson – Austin Jettinghoff 7-10-165.
Individual Receiving: Paulding – Lance Foor 4-27, Javier Gonzales 3-38, Kaleb Hernandez 3-37, Logan Doster 1-46, Steven Strayer 1-4, James Brown 1-8. Jefferson – Ross Thompson 5-129, Tyler Mox 1-19, Drew Kortokrax 1-17.

Tom Morris photo

Grothaus paces Grove past Mustangs on gridiron
By DAVE BONINSEGNA
The Delphos Herald
zsportslive@yahoo.com

COLUMBUS GROVE — It hasn’t taken new Columbus Grove football coach Andy Schafer very long to get comfortable in his position at the helm of the Bulldog football squad; just two games into his tenure, he has seen his troops put up two Ws. For the second consecutive week, Grove quarterback Collin Grothaus led the hosts to a victory. Grothaus threw for three touchdowns and ran for two, while picking off two passes, as the Bulldogs ran away with a 48-26 Northwest Conference victory over the Allen East Mustangs Friday at Clymer Stadium.

Grothaus rushed the ball 14 times for 131 yards and threw for 259 yards. Dakota Vogt had two touchdowns, one rushing and one from the air; Blake Hoffman and David Bogart both found the end zone for the home team. The Mustang effort was led by Ross Stewart; he ran the ball 20 times for 130 yards and two touchdowns. Allen East quarterback Casey Crow completed 6-of-14 passes, two for touchdowns.

The contest started much like the ’Dogs’ battle in week one with Pandora-Gilboa — with the guests scoring on their first possession and the Bulldogs answering right back. In the first four possessions of the contest, each team scored twice, with Stewart driving the ball in from a yard out for the Allen East first score and Hoffman answering back on just the second play from scrimmage for the Bulldogs with a 61-yard pass from Grothaus. After a 2-point conversion, the hosts were up 8-7.

The Mustangs answered the bell on their next touch, a 5-play drive capped off by a Matt Schuey 13-yard reception; however, the extra point was no good, making it a 13-8 contest. It wouldn’t take long for Columbus Grove to answer back — four minutes and five plays later, Vogt found his way into the end zone from six yards out. After tacking on another 2-point conversion, the Bulldogs were up three at 16-13.

It appeared that the defenses had figured out the opposing offense as the next two possessions by each team resulted in punts but the Mustangs came up with another big play as Crow connected with Nick Kohlrieser in the corner of the end zone to complete a 37-yard connection and give the visitors the lead back at 20-16 with 10 minutes left before the break. Nevertheless, as they had done earlier, the Bulldogs answered the call and came right back. After a holding penalty appeared to snuff out their drive, Bogart got hold of a 60-yard pass from Grothaus and, with a completed 2-point conversion, the Bulldogs regained the lead at 24-20.

Bulldogs dominate both sides in shut out of Bearcats
By JIM METCALFE
jmetcalfe@delphosherald.com

SPENCERVILLE — Ada has built a strong offensive legacy in the last decade-plus under head football coach Micah Fell. The Bulldogs showed they could flex some defensive muscles as well, thoroughly dominating both sides of the line of scrimmage in shutting out host Spencerville 35-0 Friday night in Northwest Conference grid action at Charles Moeller Memorial Field. Perhaps they had extra motivation as Coach Fell was mourning the death of his mother earlier this week.

The defense held the potent Bearcats’ running game to 118 yards on 40 tries, while the offense piled up 482 yards.

“The kids knew their head coach was hurting and they stepped it up for him. They played with so much heart tonight,” Ada defensive coordinator Frank Crea explained.

Ada got the ball first and started marching immediately behind their no-huddle, shotgun, up-tempo spread offense.
However, Spencerville got a break as senior Hunter Patton picked off an overthrow by senior quarterback Mason Acheson (20-of-27 passing, 310 yards, 3 picks, 4 scores), setting the hosts (1-1, 0-1 NWC) up at the 43. They seemed to have all the momentum and drove to the Bulldog (2-0, 1-0 NWC) 37 but on 4th-and-2, junior back Anthony Schuh (11 rushes, 33 yards) was stopped a yard short.

“Credit the Ada defense and coaching staff; they played a great game. They simply out-physicaled us and that surprised me,” Spencerville coach John Zerbe said. “They had too much penetration all night and we couldn’t get anything going at all. They owned both sides of the line of scrimmage; hats off to them. They took away what we do best and that’s run the ball.”

The visitors marched quickly — three plays, in fact — to strike. At the Bearcat 44, Acheson looked left and threw for junior Matt Wilcox (5 grabs, 97 yards) on the right numbers at the 30. He cut all the way across the field in eluding the defense to paydirt. Junior Hunter Waller kicked the extra point and the visitors led 7-0 with 6:29 showing in the first.

Fell gave credit where it was due. “Coach Crea put together a great game plan starting Monday and the kids executed it flawlessly,” Fell continued. “We’ve played this 7-man front before — we have one linebacker to clean everything up. We wanted to make them throw to beat us.”

Ada again started to move the ball on its next possession — starting at the 42 — but senior Dan Settlemire picked off Acheson at the goal line and returned it 17 yards. However, it was another 3-and-out. Ada commenced at its 45 and took another quick drive — three plays again — to strike. After a procedure call (9 penalties, 72 yards) set them back, Acheson took the snap from the 41 and found senior running back Kellen Decker (4 grabs, 77 yards; 19 rushes, 91 yards) on a screen pass to the left side.
He got great blocking to the sideline and turned on the jets, pulling through an arm tackle at the 20 and ending up in the end zone. Waller’s conversion made it 14-0 with 35 ticks showing in the opening period.

Spencerville gained a first down — an 18-yard pass from senior signalcaller Derek Goecke (1-of-7 passing) to senior tight end Dominick Corso — but again called upon sophomore punter Logan Vandemark (7 times for the game) to boot it away. Ada punted for its only time on its next sequence, as did the Bearcats.

STOCKS — Quotes of local interest supplied by EDWARD JONES INVESTMENTS. Close of business August 31, 2012.

Columbus Grove wasn’t done with the scoring in the first half; the ’Dogs struck again with just over a minute left on a 52-yard completion from Grothaus to Vogt. Vogt caught the pass on the Allen East 45 and powered his way to paydirt, giving the home team a 30-20 advantage going into the break.

The Bulldogs got the ball to start off the second half and wasted no time in putting up six more on the board. A 44-yard Vogt run, with help from a 15-yard penalty against Allen East, set things up for a 2-yard jolt into the end zone by the Grove quarterback; just like that, the hosts were in control with a 36-20 advantage. Another Stewart 1-yard run for the Mustangs cut things down to 10 but the Bulldogs kept piling on and Hoffman added the final two scores for Grove: a 44-yard touchdown catch coupled with a 42-yard run.
The Bulldogs moved their record to 2-0 (1-0 NWC) on the season, while Allen East falls to 0-2 (0-1). Grove visits Ada Friday.

Scoring by Quarters
Allen East 13 7 6 0 — 26
Col. Grove 16 14 7 12 — 48

Scoring
AE – Stewart 1 yd run (kick good)
CG – Hoffman 61 yd pass from Grothaus (2 pt conv)
AE – Schuey 13 yd pass from Crow (kick failed)
CG – Vogt 6 yd run (2 pt conv)
AE – Kohlrieser 37 yd pass from Crow (kick failed)
CG – Bogart 60 yd pass from Grothaus (2 pt conv)
CG – Vogt 52 yd pass from Grothaus (conv failed)
CG – Grothaus 2 yd run (conv failed)
AE – Stewart 1 yd run (conv failed)
CG – Hoffman 44 yd pass from Grothaus (conv failed)
CG – Hoffman 42 yd run (conv failed)

ADA 35, SPENCERVILLE 0
Ada 14 0 14 7 — 35
S’ville 0 0 0 0 — 0

FIRST QUARTER
AD — Matt Wilcox 44 pass from Mason Acheson (Hunter Waller kick), 6:29
AD — Kellen Decker 59 pass from Acheson (Waller kick), :35
SECOND QUARTER
No Scoring
THIRD QUARTER
AD — Jacob Ansley 28 pass from Acheson (Waller kick), 8:09
AD — Decker 2 run (Waller kick), 2:04
FOURTH QUARTER
AD — Ansley 9 pass from Acheson (Waller kick), 10:55

TEAM STATS (Ada / Spencerville)
First Downs: 23 / 9
Total Yards: 482 / 149
Rushes-Yards: 40-172 / 40-118
Passing Yards: 310 / 31
Comps.-Atts.: 20-27 / 3-9
Intercepted by: 0 / 3
Fumbles-Lost: 1-0 / 2-0
Penalties-Yards: 9-72 / 0-0
Punts-Aver.: 1-40 / 7-30.3

INDIVIDUAL
ADA RUSHING: Kellen Decker 19-91, Mason Acheson 11-33, Micah Roberson 2-19, Chris James 2-16, Levi Klingler 1-12, Luke Long 3-10, Team 2-(-9).
PASSING: Acheson 20-27-310-3-4.
RECEIVING: Jacob Ansley 7-106, Matt Wilcox 5-97, Decker 4-77, Brendan Szippl 2-26, Brayden Sautter 1-9, Roberson 1-6.
SPENCERVILLE RUSHING: John Smith 18-66, Anthony Schuh 8-33, Colton Miller 9-24, Logan Vandemark 2-8, Hunter Patton 1-(-1), Kyler Oden 1-(-2), Derek Goecke 1-(-10).
PASSING: Goecke 1-7-18-0-0, Patton 2-2-13-0-0.
RECEIVING: Dominick Corso 1-18, Oden 1-9, Vandemark 1-4.
The Bulldogs embarked on a 13-play drive — starting at Spencerville’s 48 — that involved three penalties (25 yards in losses) and a 35-yard hitch-and-pitch play, and was ready to hit paydirt again. However, an Acheson pass to the left side from the Bearcat 9 was picked off at the goal line by senior Devon Cook; he returned it 60 yards to the Ada 40. Spencerville reached the Ada 12 in two plays, including a personal foul on the Bulldogs. However, the Bulldog ‘D’ stiffened and held on a 4th-and-10 at the 12 with 2:15 remaining in the half. Ada did reach the 49 before a last-gasp effort was dropped, ending the half.

After a 3-and-out possession by the home team, Ada went on a 5-play, 67-yarder. At the Bearcat 28, Acheson (who was 10-of-11 the second half for 133 yards) found senior Jacob Ansley (7 grabs, 106 yards — 5 for 75 the 2nd half) on a quick hitch on the left side; he eluded a defender and jetted to the pylon. Waller’s PAT made it 21-0 with 8:09 showing in the third.

On its next drive, Ada once more started with excellent field position — the 43. They needed 11 plays to add to the lead. At the Spencerville 2, loaded up in the power-I, Decker took a handoff off right guard and immediately jetted outside to the pylon. Waller made it 28-0 with 2:04 to go in the quarter.

“Our defense was on the field a lot of plays,” Zerbe added. “You know you’re not going to hold them down all game long. When we didn’t take advantage of the turnovers we got, those were big moments. Our offense is built for long drives and we didn’t have any; that left our defense out there too long.”

Ada finished off the scoring with a 6-play, 46-yard sojourn. At the Spencerville 9, Acheson hit Ansley on a quick-hitter to the left sideline and he did the rest. Waller made it 35-0 with 10:55 left.
Spencerville went on its longest drive of the night — 15 plays — that started at the 20 and ended as senior John Smith (18 totes, 66 yards) was stopped a yard short on 4th-and-goal from the Ada 3 with 4:07 left. Ada then ran out the clock.

“I remember when we had Zach Dysert throw three picks over here one year. Mason just kept going back out and didn’t let it bother him,” Fell added.

Spencerville visits Allen East Friday and Ada entertains Columbus Grove.

Weekly Athletic Schedule

TUESDAY
Girls Soccer: Coldwater at St. John’s, 5 p.m.; New Knoxville at Jefferson, 5 p.m.; Kenton at Van Wert (WBL), 5 p.m.; Elida at St. Marys Memorial (WBL), 7 p.m.
Boys Golf: Columbus Grove at Paulding (NWC), 4 p.m.; Van Wert at Bath (WBL), 4 p.m.; Kalida at Tinora/Antwerp, 4:30 p.m.; Shawnee at Elida (WBL), 5 p.m.
Volleyball: Spencerville at New Knoxville, 5:30 p.m.; Ottoville at Van Wert, 6 p.m.
Co-ed Cross Country: Perry and Shawnee at Spencerville, 4:30 p.m.; Elida at Bath tri-match (WBL), 4:30 p.m.
Girls Tennis: Elida at Shawnee (WBL), 4:30 p.m.; Bath at Van Wert (WBL), 4:30 p.m.

WEDNESDAY
Girls Soccer: Continental at Fort Jennings (PCL), 5 p.m.; Miller City at Kalida (PCL), 5 p.m.; Cory-Rawson at Ottoville, 6 p.m.
Boys Golf: Spencerville, Allen East and Lima Central Catholic at Jefferson (NWC), 4 p.m.; Columbus Grove, Crestview and Ada at Lincolnview (NWC), 4 p.m.; Perry at Fort Jennings, 4:30 p.m.
Girls Golf: Lincolnview at Wapakoneta, 4 p.m.
Volleyball: St. John’s at Lima Central Catholic, 6 p.m.; Jefferson at Miller City (No JV), 6:30 p.m.

THURSDAY
Boys Soccer: Fort Jennings at Continental (PCL), 5 p.m.; Archbold at Ottoville (V only), 5 p.m.; Liberty-Benton at Spencerville, 5 p.m.; Lima Temple Christian at Kalida, 5 p.m.; Van Wert at Kenton (WBL), 5 p.m.; St. Marys Memorial at Elida (WBL), 7 p.m.
Girls Soccer: St. John’s at Jefferson, 5 p.m.; Bluffton at Lincolnview (NWC), 5 p.m.
Boys Golf: Jefferson, Lincolnview and Bluffton at Crestview (NWC), 4 p.m.; St.
John’s at New Knoxville (MAC), 4 p.m.; Columbus Grove at Allen East (NWC), 4 p.m.; Fort Jennings at Arlington (Sycamore Springs), 4:30 p.m.; Ottoville at Ayersville (Country Acres), 4:30 p.m.; Elida at Kenton (WBL), 4:30 p.m.; Kalida and Leipsic at Miller City (PCL - Pike Run), 4:30 p.m.; Van Wert at Celina (WBL), 4:30 p.m.
Volleyball: St. John’s at Marion Local (MAC), 5:30 p.m.; Ottoville at Jefferson, 6 p.m.; Lincolnview at Kalida, 6 p.m.; Elida at St. Marys Memorial (WBL), 6 p.m.; Kenton at Van Wert (WBL), 6 p.m.; Wayne Trace at Crestview, 6 p.m.
Girls Tennis: Kenton at Elida (WBL), 4:30 p.m.; Celina at Van Wert (WBL), 4:30 p.m.

FRIDAY
Football: St. Henry at St. John’s (MAC), 7:30 p.m.; Jefferson at Bluffton (NWC), 7:30 p.m.; Spencerville at Allen East (NWC), 7:30 p.m.; St. Marys Memorial at Elida (WBL), 7:30 p.m.; Columbus Grove at Ada (NWC), 7:30 p.m.; Van Wert at Kenton (WBL), 7:30 p.m.; Wayne Trace at Crestview, 7:30 p.m.
Girls Soccer: Kalida at Fort Jennings (PCL), 5 p.m.
Boys Golf: Elida at McClean Invitational (Shelby CC), 8:30 a.m.

SATURDAY
Boys Soccer: Van Wert at Fort Jennings (V only), 1 p.m.; Elida at Sylvania Southview, 5 p.m.
Girls Soccer: St. John’s at Ottawa-Glandorf, 1 p.m.; Crestview at Ada (NWC), 1 p.m.; Lima Central Catholic at Elida, 2 p.m.
Boys Golf: Ottoville at Stryker Invitational, 8 a.m.; Lincolnview and Crestview at Antwerp Invitational (Pond-A-River), 8:30 a.m.
Volleyball: St. John’s, Lincolnview and Spencerville at Kalida Pioneer Invitational, 9 a.m.; Pandora-Gilboa at Jefferson, 10 a.m.; Elida tri-match, 10 a.m.; Van Buren at Columbus Grove tri-match, 10 a.m.
Co-ed Cross Country: St. John’s, Ottoville, Elida and Kalida at Spencerville Bearcat Invitational, 9 a.m.; Columbus Grove, Van Wert and Crestview at Tiffin Columbian Carnival Invitational, 9 a.m.
Girls Tennis: Van Wert at Elida Invitational, 9 a.m.

Jays need quick rebound versus Redskins
By JIM METCALFE
jmetcalfe@delphosherald.com

St.
John’s lost a tough Saturday-night encounter with budding gridiron archrival Lima Central Catholic a week ago. The Blue Jays and head coach Todd Schulte don’t have a lot of time to catch their collective breaths as Division III foe Port Clinton comes to town today for a 1 p.m. kickoff.

“This is a senior-dominated and veteran team with 15 starters back. That is one area we are concerned about with us having so many new starters and still struggling to figure things out,” Schulte explained. “They are in the base ‘I’ formation and though they run a lot of the same stuff as we do, they are primarily a power team; they run a lot of powers and try to push you off the ball. Their quarterback (Addison Rospert) is a big key for their offense; he is more of a running threat than throwing the ball. If he can get to the edge — which they try to do a lot with roll-outs — he is dangerous in the open field. They also have a lot of size up front — average 240 from tackle to tackle — and that size is a concern. We’re figuring for a hot and humid afternoon and them leaning on us all day is a worry. We’re going to have to use what I feel is a quickness advantage up front to counter that, as well as simply tackle better than last week.”

“Defensively, they are a base 3-man front but against our tight end(s), they will walk the outside linebacker up over him, perhaps on both sides, for a 4- or 5-man look. They don’t do a lot of blitzing — they like to have their linebackers read and react — but they do stunt and slant a lot with their linemen.”

Junior tailback Tyler Jettinghoff (18 rushes, 63 yards; 5 catches, 86 yards) and senior signalcaller Mark Boggs (6-of-17 passing, 93 yards) are the two main cogs offensively for the Jays. Linebackers Cody Looser (junior; 8 solos, 5 assists) and senior Troy Warnecke (4 and 6; 1 pick), as well as defensive back Ben Youngpeter (5 and 4), are the top guys on defense.

The Jays had ample opportunities in that 18-13 season-opening loss.

“Whether it was things we did well or LCC committing mistakes, we had chances to take that game all game long. When it came to gut-check time, we didn’t come through and that is something we must address,” Schulte added. “We saw we did some good things right, especially looking at the film, but we made far too many mistakes. The good thing is they are fixable: we can get better at getting to the linebacker on the double-team on offense, for instance, and we can work on tackling better. That’s what we did this week and we will get better as we go.”

Toby Hammonds’ crew returns those 15 starters — seven on offense, eight on defense — from a 5-5 edition last fall (3-4 in the Sandusky Bay Conference). Besides the 5-11, 160-pound Rospert, he operates behind a veteran and all-senior line led by Robert Beck (6-5, 250), Chris Overfield (5-10, 220), Nick Leone (6-1, 225), Ben Petersen (6-1, 280) and Cory Colston (6-0, 220), with a 5-10, 205-pound senior fullback (Cody Smith) and 5-11, 160-pound junior Brock Moore the top guy outside. Many of those same faces return and play on the other side of the ball, particularly Smith on the nose, Overfield at linebacker, Rospert and Moore in the secondary and senior end Chris Stokes (6-1, 180) and senior nose/tackle Trey Gluth (6-3, 215).

Lady Green throws goose-egg at Lincolnview
By JIM METCALFE
jmetcalfe@delphosherald.com

RURAL MIDDLE POINT — A heavy wind coming out of the west played havoc with the Ottoville at Lincolnview girls soccer matchup Friday afternoon at Lincolnview High School. In the end, though, it was the better depth of the Lady Big Green that paced a 2-0 non-league triumph. Ottoville (4-0-0) took advantage of the wind in the first half to control the ball more — outshooting the Lancers (2-1) 10-3 on-goal (14-9 for the contest).

Lincolnview keeper Julia Thatcher denied a great look by sophomore Haley Landwehr; did the same to junior Monica Sarka from 14 yards at 28:35; stopped a 6-yard header by Eickholt at 14:16; and stuffed a 16-yarder by Landwehr at 11:43. On the other end, the Big Green defense was strong — as it has been all year in yielding a single goal in four matches.

Senior goalkeeper Rachel Beining made two stops (7 overall) in denying a 16-yarder by senior Sarah Harris at 24:47 and, at 17:27, turning back a 21-yarder by senior Kaylee Thatcher. As well, the wind knocked wide a great open look from 21 yards by freshman Brooke Schroeder.

Lincolnview had more chances at the goal in the second half but it wasn’t as if Ottoville didn’t have any. The Lady Lancers had the first good crack at 31:52 but Beining was true in stopping a 20-yard free kick by senior Courtney Gorman. Beining denied a few more shots: at 27:03, a 12-yarder by Kaylee Thatcher; 25:36, a 20-yarder by Harris; and 10:15, a 17-yarder from sophomore Claire Clay. Either that or the Lancers were just off the mark, as at 25:07 when Kaylee Thatcher had a nice chance from nine yards just outside the right post but missed just wide left.

On the other end, the Big Green got that all-important insurance goal at 30:12. Off a sequence that included a Julia Thatcher save on a 14-yarder by Landwehr, the Green and Gold got possession again and this time, Eickholt would not be denied. Fed by Sarka, she fired a 16-yarder from outside the left post low and hard to the opposite side for a 2-0 edge.

“We knew coming in this would be a tough match; Lincolnview is a very well-organized program for only being on the varsity level two years,” Ottoville head man Tim Kimmet said. “Getting that first goal was important because their goalie made it tough; that was a tough shot on the first one. Getting the second
They got the early stake — at 32:08 — when senior Rachel Turnwald got possession on a pass from junior Kendra Eickholt along the right post, made a quick move and went high from 16 yards over sophomore netminder Julia Thatcher (9 saves) for the 1-nil edge. Several diving saves by Thatcher kept the score there in the first half, especially at 31:30 when she denied Big Green edged Lancers in non-league boys matchup By BOB WEBER The Delphos Herald btzweber@bright.net OTTOVILLE — The Ottoville Big Green boys soccer team ran its record to 4-1 on the season with a close 2-1 win over the visiting Lincolnview Lancers Friday night. The first half saw both teams battle hard for the first 20 minutes with good defensive play and aggressive goalkeeping, especially from Lancer senior goalie Mark Evans. At the 18:24 mark, the Big Green found its scoring leader this season — senior Anthony Eickholt — streaking down the right side of the field and he sent a laser across the goal mouth, finding the left corner of the net and giving the home team a 1-0 lead. The Big Green had several other opportunities in the half but were unable to connect for another score. The Lancers struggled offensively in the half with only three shots on-goal. Their best opportunity came at the 4:45 mark when sophomore Wyatt Schmersal received a pass from freshman Cole Schmersal and sent a shot on-goal but Big Green goalie Colin Bendele was up to the task with the save, preserving the 1-0 lead into the break. The fans had hardly got back to their seats in the second half when Big Green senior Dylan Klima — off the opening whistle — received a pass from fellow senior Logan Gable and his shot eluded Evans in goal at the 39:43 mark, sending the Big Green to a 2-0 lead. Big Green Head Coach Eric Gerker leaned on his defense the remainder of the half to try to protect the lead. Gerker is blessed with three excellent defenders in seniors Bryan Hohlbein and Matt Burgei and sophomore Austin Honigford. 
Gerker knows that Burgei, coming off of an early-season injury, is a key to the success of the Big Green this year: “Matt coming back is a big plus for us. I didn’t want to play him the whole game but he kept saying he was OK. He put a lot of good minutes in tonight and just having him back is a big emotional lift for the team.” The Lancers, at the 5:40 mark, received a free kick opportunity after an elbowing penalty was called on the Big Green. Junior Conner McCleery sent a shot towards the goal that was cleared by the defense; however, the next opportunity at the 4:59 mark found freshman Austin Leeth with the ball in point-blank range and beat the Big Green goalie to tighten the score to 2-1. The Lancers never threatened the goal for the remainder of the half and the Big Green came away with a narrow win over the Lancers. Coach Gerker was overall pleased with his team’s performance and was quick to praise the was another tough one; being against the wind made it even better.” The lefty-footed Eickholt almost got a third Ottoville tally at 16:23 on a swinging corner kick from the right side but it hit the near post and the ball was cleared away. Julia Thatcher made another diving stop to stymie a 12-yarder by Turnwald. “We came out flat and didn’t have intensity. Had we done so, it either is tied or we have a couple of goals,” Lancer head coach Katrina Smith relayed. “We played with more intensity the second half but we didn’t catch a break; we figured on the wind to keep blowing hard after going against it the first half but it died down. Plus, with only three subs, we don’t have the depth but we’re getting in better shape each match.” Ottoville welcomes in CoryRawson 6 p.m. Wednesday, while Lincolnview hosts Bluffton 5 p.m. Thursday in a Northwest Conference match. Lancers: “We did a lot of good things tonight. Our possession game made some strides tonight. 
We controlled the ball most of the game, attacked the net several times and kept their goalie busy throughout the game. They’re a good team; they’ve come a long way in a couple of years they’ve had their program. Their going to be very competitive for the next couple of years; they gave us all we could ask for tonight.” The Lancers (2-2-0) will not play again until Sept. 15 when they travel to Cory-Rawson for a 11 a.m. start. The Big Green (4-1-0) will host Archbold next Thursday for a 5 p.m. start. Lincolnview 0 1 - 1 Ottoville 1 1 - 2 Shots On-Goal: Ottoville 12, Lincolnview 7. Saves: Ottoville Bendele 6, Lincolnview - Evans 9. Goals: Ottoville - Anthony Eickholt, Dylan Klima; Lincolnview - Kade Carey. OHIO DEPARTMENT OF NATURAL RESOURCES Division of Wildlife Weekly Fish Ohio Fishing Report CENTRAL OHIO Buckeye Lake (Fairfield/ Licking/Perry counties) - As water temperatures start to cool, hybrid-striped bass will again feed more actively; try chicken livers fished on the bottom or troll spinners along the north shore from Seller’s Point to the north boat ramp at SR 79. Channel catfish are being taken right now using cut bait on the bottom. Crappie action will start to pick up soon; use minnows or jigs fished around woody cover. O’Shaughnessy Reservoir (Delaware County) - This 912acre site north of Columbus is a good place to catch largemouth bass and channel catfish. For largemouths, try plastics, spinners and crankbaits around shoreline cover; target drop-offs and points. Channel cats can be caught on cut bait, nightcrawlers and shrimp fished on the bottom; fish the flats in the south end and the river channel in the north end. Crappie are being caught in the channel around woody cover using minnows and jigs. NORTHWEST OHIO Clear Fork Reservoir (Richland/Morrow counties) Located just 8 miles south of Mansfield along SR 97, this 971acre site is well known for its muskellunge population; it is one of the 8 lakes stocked in Ohio. 
However, the reservoir also has good populations of largemouth bass and bluegill. Bluegill fishing should be excellent now with fish ranging from 5-7 inches, with an occasional 9-incher being taken; try wax worms or worm pieces under a bobber along the edges of weed beds. Largemouth bass fishing should also be excellent at this site.
Killdeer Plains Pond #30 (Wyandot County) - This pond is located southeast of Harpster, off TWP Highway 125; just south of the railroad tracks, turn west and follow the gravel lane back to the pond. Largemouth bass should be biting now; try the west bank in the mornings or evenings with a weedless soft top-water bait over the weed beds. A jig-and-pig fished along the weed line and in open-water pockets is also effective. No ramp is available; however, small boats may be used with a 10-HP limit. Wading is also popular along the east and south shores.
NORTHEAST OHIO
West Branch Reservoir (Portage County) - Persistence is the name of the game for pursuing one of Ohio’s largest sport fish - the muskie; one photo of a landed muskie can take your status among your fishing buddies from a worm-drowner to a fishing legend. This site offers an excellent opportunity to land one of these status-changers but it requires putting a decent amount of time in; according to last year’s creel survey results, those pursuing muskies had a catch rate of one for every 20 hours of fishing. The action has picked up in the last week with quite a few landed muskies; spend more time trolling open water with crankbaits just above the thermocline level. Be sure to use line-counter reels for accurate trolling depths and talk to local muskie fishermen and fish biologists for information on how deep to fish. Muskies can still be caught casting off the ends of deep bars, humps and standing timber with lures that will retrieve in the 6- to 10-foot range.
Portage Lakes (Summit County) - Spool the reels, sort the tackle, hook up the trailer, gas up the boat, etc. A day out on the lake fishing can sometimes feel like a lot of work. You may find it refreshing to go back to the basics and head out for some panfish here. Grab the ultralight, a small container filled with pin-mins, split-shots and a couple of bobbers and throw it all into a bucket; a quick stop at the local bait shop for some wax worms and you are ready to go. Nice catches of bluegill, redears and pumpkinseeds have all been reported this past week here. There are several areas with shoreline access; focus on either woody snags or the edges of weed lines.
SOUTHEAST OHIO
Piedmont Lake (Belmont County) - Largemouth bass fishing should start picking up; use a variety of crank/spinner baits cast along the shoreline. Shad will start moving into the lower end, making shad-colored baits the most successful. Smallmouth bass should also be biting well; try shallow points in 3-5 feet of water in the early evening, night or early morning. Tube jigs are popular, as are spinner baits, which can be used with a slow retrieve or allowed to helicopter down. Catfish anglers should find continued success by tight-lining off the bottom with cut bait, chicken liver and nightcrawlers.
AEP ReCreation Land (Morgan/Noble counties) - Anglers can start to expect success for largemouth bass, sunfish and channel catfish. For an effective bass rig, try black plastic worms during daytime or top-water buzz baits at night and early dusk; anglers also like Power Worms in dark colors, which include purple, motor oil and black. Sunfish can be caught with basic wax worms and a bobber, but quality is limited except where anglers walk off to more secluded water areas. Catfish can be caught using stink baits and nightcrawlers.
SOUTHWEST OHIO
Caesar Creek (Clinton/Greene/Warren counties) - Those casting in-line spinners and crankbaits are catching muskellunge here and in smaller creeks leading into the lake; if you catch one, please report it to the DOW’s Muskie Angler Log at. com/muskielog/welcome.aspx, developed in partnership with the Ohio Muskie Anglers as a resource and to support management efforts by providing valuable catch data to the division. Saugeye anglers are catching a few 15- to 18-inch fish from 6- to 15-foot depths, tight-lined along the bottom in 5- to 8-foot depths.
C.J. Brown Reservoir (Clark County) - A few walleye are being caught using crankbaits, jigs with plastic bodies or curly tails, small spinners or live minnows, leeches or nightcrawlers; good curly-tail color choices are white, orange, pink and chartreuse. Fish by slowly jigging, trolling or drifting baits in 10- to 15-foot depths; anglers report that the most successful bait has been silver or gold blade baits in the main lake river channel, around structure and over the humps in the very early morning. Most are undersized but some legal fish are being caught. REMEMBER: all walleye less than 15 inches must be immediately released. Channel cats are being caught using shad, shrimp, nightcrawlers and chicken livers in the upper end, tight-lining or slowly drifting the bait along the bottom in 3- to 6-foot depths.
OHIO RIVER
Belleville Pool Area - Catches of sport fish have been slow but anglers are catching some black bass on a variety of lures, including spinner baits, drop-shots and crankbaits. Catfish are biting fairly well throughout, both for shore and boat anglers; flatheads are being caught on live baitfish on the bottom near drop-offs and structure. Channel cats are being caught everywhere using nightcrawlers, chicken livers, old shrimp and ripened chicken breasts seasoned with garlic powder or garlic salt; anglers also report some sizable freshwater drum as incidental catches.
Western Ohio River (Cincinnati to Adams County) - Fishing has been slow with most action around Meldahl Dam or the tributaries running into the river; try chicken livers or cut bait for catfish. Blue cats are being taken in the downtown Cincinnati area on skipjack.
LAKE ERIE
Daily Bag Limit (per person) Regulations to Remember: Walleye (on Ohio waters of Lake Erie) - 6 (minimum size limit is 15 inches); Yellow perch (on all Ohio waters of Lake Erie) - 30; Trout/salmon - 2 (minimum size limit is 12 inches); Black bass (largemouth and smallmouth bass) - 5 (minimum size limit is 14 inches).
Western Basin: Walleye fishing has been good NE of Niagara Reef and “C” can of the Camp Perry firing range and W of Rattlesnake Island; trollers have been using worm harnesses with inline weights or divers, plus divers with spoons. ... Yellow perch fishing has been good, with the best spots being the Toledo harbor light, buoy 13 of the Toledo shipping channel, around “B” and “C” cans of the Camp Perry firing range, W of Green and Rattlesnake islands and between Lakeside and Kelleys Island; perch-spreaders with shiners fished near the bottom produce the most.
Central Basin: Walleye fishing has been good offshore at the weather buoy near the Canadian border N of Vermilion. Excellent fishing continues in 70-71 feet of water NE of Ashtabula; trollers are using wire-line off planer boards and dipsy divers, with purple, pink, blue, green, orange and brown spoons and stick baits. ... Yellow perch fishing has been good E of the Huron River channel buoys and off of the Castle near Ruggles Reef. Farther east, fishing has been excellent, especially in 47-53’ of water NE of the Cuyahoga River (water intake crib), in 52-53’ N of Wildwood State Park, in 51-58’ NW of Fairport Harbor (the hump) and in 57-58’ N of Conneaut; perch-spreaders with shiners fished near the bottom produce the most.
The best shore fishing spots are the Cleveland piers, the Headlands Beach pier in Mentor and the Fairport Harbor pier, using spreaders with shiners in the mornings and evenings. ... Smallmouth bass fishing has been very good at 15-23’ around harbor areas in Cleveland, Fairport Harbor, Geneva, Ashtabula and Conneaut; this past week, anglers are having good luck using crayfish, drop-shot rigs and tube jigs. ... White bass fishing has been spotty but can pick up at any time; try near shore in 15-30’ N of Cleveland Harbor, NE of Gordon Park (Bratenahl) and in 10-20’ N of Eastlake CEI. Look for gulls feeding on schools of shiners at the surface; the bass will be below the shiners. Shore anglers are catching bass off the Eastlake CEI breakwall using agitators with jigs tipped with twister tails or using small spoons. ... Steelhead trout anglers are catching a few fish while trolling for walleye off Ashtabula; some large ones have been caught. See locations for walleye above. ... The water temperature is 73 degrees off of Toledo and 73 degrees off of Cleveland, according to the nearshore marine forecast. Anglers are encouraged to always wear a U.S. Coast Guard-approved personal flotation device while boating.
8 – The Herald Saturday, September 1, 2012
NEW BOOKS AT THE LIBRARY
Various adult programs are being scheduled for the fall season at the library. Everything from planting bulbs and travel to Vietnam to making Christmas crafts and antique appraisal will be offered. We will be having something to interest everyone. Be sure to check at the library for specifics on these programs or check our website for further details as the dates are set. The week of Sept. 10-15 will be the library’s annual Book Sale. We will again be offering books, paperbacks, magazines and VCR tapes. Mark the dates on your calendar.
7 new DVD titles were added to our collection this month:
Scooby-Doo Laff-A-Lympics: Spooky Games
Scooby-Doo 2: Monsters Unleashed
Sherlock Holmes: A Game of Shadows
SpongeBob SquarePants: Tales From the Deep
Twinkle Toes
Wrath Of The Titans
The Zinghoppers Live!: A Dance Party Concert!
FICTION
You Don’t Want To Know – Lisa Jackson. Ava has spent most of the past two years in and out of Seattle mental institutions, shattered by grief and unable to recall the details of Noah’s disappearance. Now back at the family estate, her strength is returning. But Ava can’t shake the feeling that her family and her psychologist know more than they’re saying. Unwilling to trust those around her, Ava secretly visits a hypnotist to try to restore her memories. Strange visions and night terrors keep getting worse. Ava is sure she’s heard Noah crying in the nursery and glimpsed him walking near the dock. Is she losing her mind, or is Noah still alive? Ava won’t stop until she gets answers, but the truth is more dangerous than she can imagine.
Sunset Bridge – Emilie Richards. Former socialite Tracy Deloche has nothing but five ramshackle beach cottages and the unlikely friendships she’s formed with her tenants: Wanda, a wise waitress turned pie-shop owner; Janya, a young Indian wife in an arranged marriage; Alice, a widow raising her tween-age granddaughter; and Maggie, Wanda’s daughter, a former Miami cop with a love life as complicated as Tracy’s own. As a tropical storm brews, the wind carries surprises and secrets to Happiness Key, and five friends will discover just how much they need one another.
Whispers In The Wind – Lauraine Snelling. After fleeing North Dakota and the now-defunct Wild ... younger son Lucas?
Texas Blue – Jodi Thomas. Gambling man Lewton Paterson wants to marry into a respectable family, even if it costs him his friendship with Duncan McMurray.
After fleecing a train ticket from one of the gentlemen picked to call on Duncan’s cousins, Lewt makes his way to Whispering Mountain. He soon realizes that to entice a McMurray sister, he’ll need to learn a thing or two about ranching — and love. When the suitors arrive, Emily McMurray convinces a friend to take her place, as she has no intention of ever getting married. But when Lewt insists that Emily teach him about ranching, she finds herself struggling to keep up both her disguise and the walls around her heart.
NONFICTION
Bullied: What every parent, teacher, and kid needs to know about ending the cycle of fear – Carrie Goldman. When the author, a blogger for the on-line community of the Chicago Tribune, posted about her six-year-old daughter being bullied at school because she was sporting a Star Wars backpack and water bottle, cyberspace rose to her defense with a flurry of posts, e-mails and letters. Goldman decided to delve more deeply into the subject, discovering that 160,000 children stay home from school every day because of bullying, 42% of kids have been bullied online, and one in five teens has been bullied at school in the previous year. This is an eye-opening, prescriptive, and ultimately uplifting guide to raising diverse, empathetic, tolerant kids in a caring and safe world.
Country Comfort: Cooking Across America – Mary Roarke. This book is a keepsake recipe collection highlighting popular ingredients from each region of the United States. This cookbook is perfect for anyone looking to take a cross-country culinary tour of America and discover its vast food heritage. Over 175 enticing recipes are included, with accompanying anecdotes from cooks throughout the country. From the quaint seaside towns of the Northeast to the surfing villages of the West Coast, this cookbook is sure to provide you and your family with an endless variety of traditional and modern dishes all year long.
Picker’s Bible: How to pick antiques like the pros – Joe Willard. Whether you are a dumpster diver, estate sale addict, or modern archaeologist, this easy-to-use and informative guide to “picking” is guaranteed to improve your antiquing skills.
Two Is For Twins – Wendy Lewison
Big Rigs On The Move – Candice Ransom
Monster Trucks On The Move – Kristin Nelson
The Big Red Tractor And The Little Village – Francis Chan
What Little Boys Are Made Of – Robert Neubecker
Too Many Dinosaurs – Mercer Mayer
Night Knight – Owen Davey
Pete The Cat: I Love My White Shoes – Eric Litwin
Dini Dinosaur – Karen Beaumont
I Know A Wee Piggy – Kim Norman
I’m Fast! – Kate & Jim McMullan
Take Two! – J. Patrick Lewis & Jane Yolen
MEMORIALS
Fun At The County Fair
In memory of: Drew Knippen
Given by: Drew’s preschool class
Mrs. Spitzer’s Garden – Edith Pattou
In honor of: John & Mary Lou Wittler’s 50th wedding anniversary
Given by: Irene Calvelage

Tomorrow’s Horoscope
By Bernice Bede Osol
SUNDAY, SEPTEMBER 2, 2012
A busier than usual social life is likely to be in the offing for you in the year ahead. What makes this so different is the fact that you could find yourself involved with several different and unrelated groups of people.
MONDAY, SEPTEMBER 3, 2012
If you show strong initiative and much diligence, you won’t go unrewarded in the year ahead. Set some serious goals and use your assets wisely in order to make your mark in the world both socially and materially.
VIRGO (Aug. 23-Sept. 22) -- Involvement with some bold and daring friends will do your cautious nature a lot of good. Keep an open mind and figure out what you can learn from these chums.
LIBRA (Sept. 23-Oct.
23) -- On-the-spot decision-making won’t work out too well for you at present. Take plenty of time to weigh and balance all critical issues.
SCORPIO (Oct. 24-Nov. 22) -- In order for the day to be meaningful, it’s important that you spend some time on important matters. If you waste your time fooling around and doing nothing, you’ll regret it.
SAGITTARIUS (Nov. 23-Dec. 21) -- As long as you don’t involve yourself with persons who take games too seriously, activities that have elements of friendly competition could be very gratifying for you.
CAPRICORN (Dec. 22-Jan. 19) -- Even though you might have some disturbing factors to deal with, once you start a task or an assignment, chances are you will follow it through to its conclusion.
AQUARIUS (Jan. 20-Feb. 19) -- There are a number of friends you’ve been too busy to see lately who are anxious to get together with you. If you know who they are, surprise them by contacting them for a chat.
PISCES (Feb. 20-March 20) -- Things will work out well for you in areas where you focus your attention. You’ll be able to generate some great ideas to make or save money, if you put your mind to it.
ARIES (March 21-April 19) -- Assume the initiative instead of waiting to be taken care of by others, especially if you want certain things to be done now. Others can wait -- you can’t.
TAURUS (April 20-May 20) -- Even if you should find yourself in a quiet, reclusive mood, you can use it productively. Clean up all those jobs that you need to do alone.
GEMINI (May 21-June 20) -- Don’t allow your social interests to dominate you to a point that it causes you to set aside or reschedule several urgent matters. Important things you neglect now will jump up and bite you later on.
CANCER (June 21-July 22) -- In order to be successful, you need to know what you want, how you want it done and when you’re going to do it. What you put off doing until later will never get done.
LEO (July 23-Aug.
22) -- If you’ve already made a decision about something, stop rehashing it and get on with it. Overanalyzing it will merely confuse you further and completely jam up your flow.
TUESDAY, SEPTEMBER 4, 2012
Friends and associates are likely to play constructive roles in important affairs in the year ahead, especially in areas that you think need some improvement. With everybody pitching in to help, it’s inevitable that you’ll succeed.
VIRGO (Aug. 23-Sept. 22) -- The social sphere in which you’ll be operating is likely to be charged with an air of expectancy. You’ll love it, because it tends to make everything seem more exciting.
LIBRA (Sept. 23-Oct. 23) -- Because you’re prepared to work for what you get, you’ll be in an extremely favorable financial cycle. You won’t expect any free rides, and the rewards will seem bigger because of this.
SCORPIO (Oct. 24-Nov. 22) -- A friend in whom you place considerable confidence will have several constructive suggestions for you. Give his or her ideas a shot -- they are likely to help you resolve a problem.
SAGITTARIUS (Nov. 23-Dec. 21) -- Conditions look to be favorable, but your greatest breaks are likely to come in the financial or commercial realms, even though you may not be looking for them in those quarters.
CAPRICORN (Dec. 22-Jan. 19) -- You’re presently in an extremely favorable cycle in terms of popularity. Before the period is over, you could pick up scads of new friends and admirers.
AQUARIUS (Jan. 20-Feb. 19) -- Instinctively, you will know how to make some pretty smart moves in order to give your family certain things they desire. Just do what comes naturally, and you’ll come out ahead.
PISCES (Feb. 20-March 20) -- You always seem to have an abundance of ideas that are extremely satisfying and feasible, and they’ll be better than usual at present. Share your thinking with those who’ll appreciate it.
ARIES (March 21-April 19) -- Your chances for getting something that you really want are better than usual at this time.
If you have enough motivation, you won’t hesitate to go after the big fish.
TAURUS (April 20-May 20) -- It shouldn’t be too difficult, especially in matters that pertain to your career.
CANCER (June 21-July 22) -- Any suggestion you make is likely to be a good one, especially if it’s work-related. Don’t hesitate to express what’s on your mind.
LEO (July 23-Aug. 22) -- Don’t be intimidated by challenging developments, because you are likely to perform exceptionally well when your mettle is tested. The secret is to believe in your abilities.

Delphos American Legion Post 268 proudly announces its 2nd Annual VETERANS’ APPRECIATION DAY, Sunday, Sept. 2, at the Delphos American Legion. 1:00 p.m. - beverage tent opens; 4:00 p.m. - corn hole tournament and bingo, with a BBQ chicken and pork chop dinner until sold out; 4:00-7:00 p.m. - karaoke; 7 p.m.-midnight - Garry Stennet and John Heaphy. This message published as a public service by these civic-minded firms; interested sponsors call The Delphos Herald Public Service Dept., 419-695-0015: Delpha Chev/Buick Co., Pitsenbarger Auto, Omer’s Alignment Shop, Lehmann’s Furniture, Westrich Home Furnishings, First Federal Bank and Delphos Ace Hardware & Rental.

What craft beers can teach us
By TERRY MATTINGLY
On Religion
(Terry Mattingly is the director of the Washington Journalism Center at the Council for Christian Colleges and Universities and leads the GetReligion.org project to study religion and the news.)

Sunday - 9:00 a.m. Worship Service; 3:30 Wedding. Monday: Labor Day - Office Closed. Tuesday - Altar Guild. Wednesday - 7:00 p.m. InReach/OutReach Meeting. Saturday - 8:00 a.m. Prayer Breakfast. Sunday - 9:00 a.m. Rally Day; 10:00 a.m. Worship Service; 11:00 a.m. Carry-In Dinner/Communion.
Sunday - 9:15 a.m. Seekers Sunday School class meets in parlor; 10:30 a.m. Worship Service/Communion; 11:30 a.m. Radio Worship on WDOH. Mon.: Office closed - Labor Day. Tues.: 7:00 Outreach Committee; grandparents pictures due in office. Wed.: 7:00 Choir Practice begins (everyone welcome).
CORNERSTONE BAPTIST
15482 Mendon Rd., Van Wert
Phone: 419-965-2771
Pastor Chuck Glover
Sunday School - 9:30 a.m.; Worship - 10.
KINGSLEY UNITED METHODIST CHURCH
2701 Dutch Hollow Rd., Elida
Phone: 339-3339
Rev. Frank Hartman
Sunday - 10 a.m. Sunday School (all ages); 11 a.m. Morning Service.
FAITH BAPTIST CHURCH
4750 East Road, Elida
Pastor - Brian McManus
Sunday - 9:30 a.m. Sunday School; 10:30 a.m. Worship, nursery available.
Wednesday - 6:30 p.m. Youth Prayer, Bible Study; 7:00 p.m. Adult Prayer and Bible Study; 8:00 p.m. Choir.
Family Worship Hour; 6:30 p.m. Evening Bible Hour. Wednesday - 6:30 p.m. Word of Life Student Ministries; 6:45 p.m. AWANA; 7:00 p.m. Prayer and Bible Study.
MANDALE CHURCH OF CHRIST IN CHRISTIAN UNION
Rev. Don Rogers, Pastor
Sunday - 9:30 a.m. Sunday School, all ages; 10:30 a.m. Worship Services; 7:00 p.m. Worship.
Wednesday - 7 p.m. Prayer meeting.
GOMER UNITED CHURCH OF CHRIST
7350 Gomer Road, Gomer, Ohio
419-642-2681
gomererucc@bright.net
Rev. Brian Knoderer
Sunday - 10:30 a.m. Worship.
PENTECOSTAL WAY CHURCH
Van Wert, Ohio
419-238-9426
Pastors: Bill Watson, Rev. Ronald Defore
Sunday - 8:45 a.m. Friends and Family; 9:00 a.m. Sunday School LIVE; 11:00 Church Service; 6:00 p.m. Evening Service.
Wednesday - 7:00 p.m. Evening Service.
Anchored in Jesus Prayer Line - (419) 238-4427 or (419) 232-4379. Emergency - (419) 993-5855.
SALEM UNITED
Rev. Clark Williman, Pastor
Sunday - 9:30 a.m. Worship; 10:45 a.m. Sunday school; 6:30 p.m. Capital Funds Committee.
Monday - 6 p.m. Senior Choir.
Thursday - Choir Rehearsal.
LIGHTHOUSE CHURCH OF GOD
Elida - Ph. 222-8054
Rev. Larry Ayers, Pastor
Service schedule: Sunday - 10 a.m. School; 11 a.m. Morning Worship; 6 p.m. Sunday evening.
We thank the sponsors of this page and ask you to please support them.

The Herald - Telling The Tri-County’s Story Since 1869
To place an ad phone 419-695-0015 ext. 122.
Minimum Charge: 15 words, 2 times - $9.00. Each word is $.30 2-5 days; $.25 6-9 days; $.20 10+ days.
Deadlines: 11:30 a.m. for the next day’s issue; Saturday’s paper is 11:00 a.m. Friday; Monday’s paper is 1:00 p.m. Friday; Herald Extra is 11 a.m. Thursday.
FREE ADS: 5 days free if item is free or less than $50. Only 1 ad, 1 item per month.
GARAGE SALES: Each day is $.20 per word; $8.00 minimum charge.
BOX REPLIES: $8.00 if you come and pick them up; $14.00 if we have to send them to you.
CARD OF THANKS: $2.00 base.
THANKS TO ST. JUDE: Runs 1 day at the price of $3.00.
“I WILL NOT BE RESPONSIBLE FOR DEBTS”: Ad must be placed in person by the person whose name will appear in the ad.
Lost & Found: FOUND SET of keys; come and pick them up.
Help Wanted: MAINTENANCE TECHNICIAN. Verifiable mechanical and electrical experience. Resumes: pkimmet@flexiblefoam.com. NO PHONE CALLS.
Would you like to be an in-home child care provider? Let us help. Call Child Care Resource and Referral at the YWCA: 1-800-992-2916 or (419) 225-5465.
Lost & Found
•FOUND: Set of keys along Lincoln Hwy. east of Delphos.
•FOUND: Black Terrier/dachshund mix on Lima Ave., Tuesday 8/29. Call 419-695-7706.

Announcements
•IS IT A SCAM? The Delphos Herald urges our readers to contact The Better Business Bureau, (419) 223-7010 or 1-800-462-0468, before entering into any agreement involving financing, business opportunities, or work-at-home opportunities. The BBB will assist in the investigation of these businesses. (This notice provided as a customer service by The Delphos Herald.)

Help Wanted
•IMMEDIATE POSITIONS for full-time drivers. Dedicated routes/home daily. Full benefits including 401K, dental & vision, paid vacations & holidays. CDL Class A required; 2 yrs. experience; good MVR. Call 419-733-0642 or email dkramer_mls@aol.com.
•HIRING DRIVERS with 5+ years OTR experience! Our drivers average 42 cents per mile & higher! Home every weekend! $55,000-$60,000 annually. Benefits available. 99% no-touch freight! We will treat you with respect! PLEASE CALL 419-222-1630.
•OTR SEMI DRIVER NEEDED. Benefits: vacation, holiday pay, 401k. Home weekends & most nights. Ulm's Inc., 408 S. Jefferson St. 419-692-6241.
•LPNS NEEDED for homecare in Lima area for 3rd shift. HHA/STNAs needed in Lima, Wapak, Van Wert and Delphos areas. Daytime and evening hours available. Apply at Interim HealthCare, 3745 Shawnee Rd., Lima, or call 419-228-2535.
•STEEL TECHNOLOGIES is a customer-driven, growth-oriented steel processing company that provides value-added resources and services to its customers. We are currently seeking PRODUCTION ASSOCIATES who are eager to work and contribute to our continued success in our Ottawa, OH facility. Must be able to work all shifts. We offer an excellent benefits package, perfect attendance and plant incentive bonuses every 3 months, 401(k) plan with company match, safety shoe allowance, and paid vacation/personal days. Apply in person at: Steel Technologies, Inc., 740 Williamstown Road, Ottawa, OH 45875. EOE.
•DANCER LOGISTICS Inc., 900 Gressel Drive, Delphos, OH 45833, is in need of a Maintenance Service Manager to monitor our fleet of tractors and trailers. The service manager will coordinate the work needed on the equipment and direct the technicians accordingly. This person will be responsible for the supervision and delegation of the after-hours service communications. Preferred candidate will have worked in a similar position for at least two years. If interested in this position, please contact Shawn at (419) 229-2899 or submit a resume at the address noted above.
•REGIONAL CARRIER LOOKING FOR LOCAL CLASS A CDL DRIVERS. 2 yrs. experience required with tractor/trailer combination. Bulk hopper/pneumatic work - company will train. Must have good MVR. F/T - no weekends, home holidays, with opportunity to be home during the week. P/T work also available. Assigned trucks. Last year our drivers averaged 47 cents per all odometer miles including safety bonuses. Employment benefits: health, dental & life insurance; short/long-term disability; paid holidays & vacation; 401K with company contributions. Come drive for us and be part of our team. Apply in person: D & D Trucking & Services, Inc., 5025 North Kill Road, Delphos, OH 45833.
•STNAs - We need you at Vancrest Health Care Center. Vancrest of Delphos is a long-term care facility providing skilled rehabilitation services, assisted living, post-acute medical care and more. We are looking for caring, outgoing, energetic, skilled STNAs to join our team. Full-time and part-time positions are available, for all shifts. Visit us at Vancrest for details and application information. Vancrest of Delphos, 1425 E. Fifth St., Delphos, OH 45833.
•3500 Elida Road, Lima, Ohio 45808. Phone: (419) 331-0381; Fax: (419) 331-0882; Email: LisaW@allannott.com.

Child Care
•Would you like to be an in-home child care provider? Let us help. Call YWCA Child Care Resource and Referral at 1-800-992-2916 or (419) 225-5465.
•Are you looking for a child care provider in your area? Let us help. Call YWCA Child Care Resource and Referral at 1-800-992-2916 or (419) 225-5465.
•LOOKING FOR a reliable part-time child care provider for 2 children on Thursday thru Sunday nights. If interested, call 207-745-3963 and leave a message.

Services
•LAMP REPAIR, table or floor. Come to our store. Hohenbrink TV. 419-695-1229.

House For Rent
•2 BEDROOM, 1-car garage. $475/mo. plus deposit and utilities. Call 419-692-3951.
•2 BEDROOM, 1 bath house available soon. No pets. Call 419-692-3951.
•4-BEDROOM HOUSE for rent in the country. Call 419-303-0009.

Apts. for Rent
•1 BEDROOM mobile home for rent. Ph. 419-692-3951.
•1BR APT for rent; appliances, electric heat, laundry room. No pets. $425/month plus deposit, water included. 320 N. Jefferson. 419-852-0833.
•FORT JENNINGS - Quiet, secure 1 & 2 bedroom in an upscale apartment complex. Massage therapist on-site. Laundry facilities, socializing area, garden plots. Cleaning and assistance available. Appliances and utilities included. $675-775/mo. 419-233-3430.
•LARGE UPSTAIRS apartment, downtown Delphos, 233-1/2 N. Main. 4BR, kitchen, 2BA, dining area, large rec/living room. $650/mo. Utilities not included. Contact Bruce.

Wanted to Buy
•Scrap gold, gold jewelry, silver coins, silverware, pocket watches, diamonds. 2330 Shawnee Rd., Lima.

Household Goods
•FRIGIDAIRE SIDE-BY-SIDE refrigerator with ice & water in door; ice maker not working. $400. Ph. 419-286-2191.

Pets & Supplies
•FREE: LAB/PIT puppies. Adorable!

Lawn & Garden
•On S.R. 309 in Elida: grass seed, top soil, fertilizer, straw. 419-339-6800.
THE DELPHOS HERALD CLASSIFIEDS (continued)
•ENROLL TODAY - Kreative Learning Preschool, accepting children 3-5.
•Raines Jewelry - Cash for gold.
•Place a Help Wanted ad - call 419-695-0015.
•OPEN HOUSE - 9 a.m.-5 p.m. Fri., Sat. & Sun. 19176 Venedocia-Eastern Rd., Venedocia. Beautiful country 4 bedroom, 1-1/2 bath, oversized 2-car garage. Updated everywhere. Must see! $89,900. Approx. monthly payment - $482.60.
•OHIO DEPARTMENT OF TRANSPORTATION, Van Wert County, seeking qualified full-time PERMANENT & TEMPORARY WINTER Highway Technician 1 position. Salary: $15.41/hour. Required: Commercial Driver's License, Class B with tanker endorsement and without air brake restriction. Applicant must pass physical ability, reading & math tests and take a pre-employment drug test. An Equal Opportunity Employer.
•MOBILE HOMES - Rent or rent to own. 2 bedroom, 1 bath mobile home. 419-692-3951.
•MIDWEST OHIO AUTO PARTS - Auto parts specialist. Windshields installed; new lights, grills, fenders, mirrors, hoods, radiators. 4893 Dixie Hwy., Lima. 1-800-589-6830.
•Pet food, pet supplies, Purina Feeds. 419-339-6800.
•Advertise your business DAILY for a low, low price!
•Send qualifications by mail to: AAP St. Marys Corporation, 1100 McKinley Road, St. Marys, Ohio 45885. Attention: Human Resource-DH.

ANNIE'S MAILBOX: Writer's friend likes beach house

Dear Annie: For the past three summers, my friend "Don" has spent a few days with me at our family beach house. The second year, he hinted about visiting again and I was pleased when I invited him back. Soon, he began referring to "his room" at the beach house and making regular comments about "next year." I didn't know how to respond, so I ignored the comments, even though I thought he was being a little presumptuous.
This summer, I told Don that I had invited another friend and his wife to join me at the summerhouse. His response was that all of us could go. Annie, even though there's enough room, I want to have only this other couple. But all I could think to say to Don was "maybe." I'm guessing that his feelings are hurt, but I'm a little annoyed. What should I do? -- Awkward in Idaho

Dear Idaho: You do not owe Don an invitation or an apology, nor are you responsible for whatever assumptions he has made about being entitled to stay at your beach house. Two invitations make you a generous host, not his lifetime roommate. Continue to be friendly with Don, but say nothing more about the summer place unless you are ready to invite him again. This is not your fault.

Dear Annie: My husband, a pastor, was asked to perform the wedding of our son's friend and his bride. This involved two trips out of town. For the wedding, we had to drive more than 250 miles round-trip, board our dog for two days and pay for our own motel room, even though the bride said they would take care of it. The weekend cost us $230.
This is my gripe: My husband was not given a dime for his services. When I mentioned to him that in the future he might make it a condition of doing a wedding that his travel expenses be covered, he shrugged and said, "They probably couldn't afford it." But they were able to afford everything else, plus a honeymoon!
This is not the first time he's been stiffed, although bridal etiquette says it is customary to pay the clergyman $150 to $500 for his services. One couple offered to take us to dinner, but never did. Another couple gave him frozen fish.
Please tell bridal couples to be considerate of the clergyperson who has sacrificed to officiate at your wedding. You would not hesitate to pay the limo driver or the stylist who does your hair. Be sure to budget a decent amount for the cleric's services, especially if you know travel expenses are involved. Thank you for letting me get this off my chest. -- Pastor's Wife in the Northwest

Dear Wife: The person who performs the service should be paid after the ceremony, preferably in an envelope along with a note of appreciation. Travel expenses also should be covered. Bridal couples can inquire about the fee at the church or synagogue office. But if your husband routinely goes unpaid, he could be a bit more assertive at the time he is asked to officiate by saying, "Please call the church office about the fee."

Dear Annie: I can identify with "Married to an Octopus." I have been married for 30 years and grabbed for most of them. Explaining that this was more of an assault and an embarrassment rather than a form of affection fell on deaf ears.
Here's what finally worked for me. I started grabbing him and saying, "Does this feel nice?" I wasn't rough, but the mere threat to my husband's manhood finally drove home the point that his octopus hands were unpleasant.
I also would like to suggest to "Married" that her lack of interest in sex may be less about her health and more about a negative association she has developed with her husband's touch. -- Hands Off

ALLEN COUNTY REAL ESTATE TRANSFERS
•William D. Litsey et al. to Katharine A. Ulrich, 172 Hartford Court, $31,000.
•Karen S. Lovett and Deborah L. Camper, trustees, et al. to Lori A. Laudick, 1457 George Bingham Drive, $166,000.
•Ronal H. and Marguerita J. Sprague to Kody W. and Kimberly A. Lapoint, 2785 Freyer Road, $149,500.
•Donald L. and Brenda B. Stevenson to Richard D. and Miriam R. Jueckstock, 2067 Augusta Drive, $160,000.
•William C. Timmerveister to Stephen D. Taylor Family Properties, 2100 N. Cable Road, $2,000,000.
•Estate of Janice A. Williams et al. and Sheriff Samuel A. Crish to Charles W. Clifford, 3545 W. Elm St., $42,000.
City of Delphos:
•Robert L. Baumgartner and Mary Hofmann, executors, et al. to Daniel A. Raines Jr., 229 Douglas, $69,700.
•Bishop of the Roman Catholic Diocese of Toledo to Albert D. and Shawna Smith IV, 1006 Fort Jennings Road, $135,000.
•Brian M. and Cheryl A. Gossard to Kevin V. and Synthia K. Weitzel, 1451 Carolyn Drive, $250,000.
•Roger Luersman and Jane Rutledge, executors, et al. to Helen M. Fischer, trustee, et al., 455 E. Cleveland, $138,500.
•Daniel E. and Barbara A. Smith to Brian K. and Amy J. Strayer, 441 E. Cleveland St., $133,000.
American Township:
•Richard E. and Beverly S. Breneman to Kayla N. Gossman, 28795 Sherwood Drive, $104,900.
•Michael L. and Jane Bull to William J. Conway III, 158 Hartford Court, $52,000.
•Omea Hicks to Kenneth R. Noble, eight lots on Powers Avenue, $15,000.

SERVICE DIRECTORY
•POHLMAN BUILDERS - Room additions, garages, siding, roofing, backhoe & dump truck service. Free estimates, fully insured. Mark Pohlman, 419-339-9084; cell 419-233-9460.
•GEISE TRANSMISSION, INC. - Automatic transmission, standard transmission, differentials, transfer case, brakes & tune-up. 2 miles north of Ottoville. 419-453-3620.
•TEMAN'S TREE SERVICE - Trimming, topping, thinning, deadwooding; stump, shrub & tree removal. Since 1973. Bill Teman, 419-302-2981; Ernie Teman, 419-230-4890.
•SAFE & SOUND SELF-STORAGE - Security fence, pass code, lighted lot, affordable, 2 locations. Why settle for less? Delphos, 419-692-7261.
•COMMUNITY SELF-STORAGE - Great rates, newer facility, across from Arby's. 419-692-6336.
•AMISH CREW needing work - Roofing, remodeling, bathrooms, kitchens, hog barns, drywall, additions, sidewalks, concrete, etc. Free estimates.
•HOHLBEIN'S HOME IMPROVEMENT - Windows, doors, siding, roofing, sunrooms, kitchens & bathroom remodeling, pole buildings, garages. Ph. 419-339-4938 or 419-230-8128.
•KEVIN M. MOORE, L.L.C. - Trimming & removal, stump grinding, 24-hour service, fully insured. (419) 235-8051.
SERVICE DIRECTORY (continued)
•QUALITY FABRICATION & WELDING INC. - General repair, special built products: trucks, trailers, farm machinery, railings & metal gates. Carbon steel, stainless steel, aluminum. 419-733-9601.
•POHLMAN POURED CONCRETE WALLS - Residential & commercial, agricultural needs, all concrete work. Mark Pohlman, 419-339-9084; cell 419-233-9460.
•JOE MILLER CONSTRUCTION - Experienced Amish carpentry. Roofing, remodeling, concrete, pole barns, garages or any construction needs. Cell 419-339-0110.
•Larry McClure, 5745 Redd Rd., Delphos.
•Send qualifications by mail to: AAP St. Marys Corporation, 1100 McKinley Road, St. Marys, Ohio 45885. Attention: Human Resource-CG.

DELPHOS HERALD CUSTOMER SERVICE HOTLINE - 419-695-0015, extension 126. Please call if: you would like to order home delivery; your paper has not arrived by 5 p.m. Monday-Friday or 8 a.m. Saturday; your paper is damaged; you have a problem with a newsrack; you are going on vacation; or you have questions about your subscription. We want to ensure your satisfaction.

TAKE US ALONG! Subscription forwarding: 419-695-0015.

Saturday, September 1, 2012 - The Herald – 11

Comics: HI AND LOIS, BLONDIE, BEETLE BAILEY, SNUFFY SMITH, HAGAR THE HORRIBLE, BORN LOSER, FRANK & ERNEST, BIG NATE, GRIZZWELLS, PICKLES.

TELEVISION LISTINGS
Saturday Evening, September 1, 2012 - [broadcast, cable and premium channel program grid]
Sunday Evening, September 2, 2012 - [program grid]
Monday Evening, September 3, 2012 - [program grid]
©2009 Hometown Content, listings by Zap2it

12 – The Herald - Saturday, September 1, 2012

OTTOVILLE PARK CARNIVAL SCHEDULE
Today:
•9 a.m. - 5K Fun Run/Walk
•11 a.m. - Lunch/concession stand; adult wiffle ball; corn hole registration; $10 all-day Ultra Sound rides; X-treme Trampoline
•Noon - OSU tailgate party (OSU vs. Miami); antique tractor show
•12:30 p.m. - Corn hole tournament
•2 p.m. - Barnyard games; Kids Alley - ring toss, Plinko and general store; raffle booth and baked goods
•2-6 p.m. - $10 for a 4-hour wristband for rides by D&D
•3 p.m. - Texas Hold 'Em
•4 p.m. - Kids big wheel races; wing cook-off; money wheel; BBQ chicken dinner by BBQ Express
•7 p.m. - Wing cook-off awards; lip sync contest
•8 p.m. - Pong-A-Long tournament
•8-11 p.m. - '50s/'60s dance, DJ Ultra Sound
•9 p.m. - Free outdoor kids movie with free popcorn
Sunday, Sept. 2:
•9 a.m. - Volleyball tournament
•11 a.m. - Lunch/concession stands; BBQ chicken dinners by BBQ Express; Kids Alley; raffle booth; baked goods
•Noon - Pam's School of Dance; crowning of king and queen & miniature king and queen
•1 p.m. - 50th ANNUAL PARK CARNIVAL PARADE
•1:30-5:30 p.m. - $10 for a 4-hour wristband for rides by D&D
•2 p.m. - Helicopter rides; money wheel; $10 all-day Ultra Sound rides; Toledo Zoo; golf challenge; Art Space; bingo; adult wiffle ball; X-Treme Trampoline; Brass Notes playing at the beer tent
•2:30 p.m. - Cub Scout tractor pull
•4 p.m. - Tractor square dancing
•6 p.m. - Cow paddy bingo
•7 p.m. - Tractor square dancing; kids wiffle ball home run derby
•8 p.m. - Raffle booth drawing
•9 p.m.-midnight - Polly Mae
•10 p.m.-2 a.m. - Free taxi rides home
Come join the fun in Ottoville!

Area children hunt around for prizes during one of the barnyard-oriented children's activities Friday night in Ottoville.
SECOND DIVISION

G.R. No. 179962, June 11, 2014

DR. JOEL C. MENDEZ, Petitioner, v. PEOPLE OF THE PHILIPPINES AND COURT OF TAX APPEALS, Respondents.

D E C I S I O N

BRION, J.:

Based on these operations, the BIR alleged that petitioner failed to file his income tax returns for taxable years 2001 to 2003 and, consequently, evaded his obligation to pay the correct amount of taxes due the government.6

- Mendez Body and Face Salon and Spa - Registered with Revenue District Office (RDO) No. 39 – South Quezon City
- Mendez Body and Face Salon and Spa - Registered with RDO No. 40 – Cubao
- Mendez Body and Face Skin Clinic - Registered with RDO No. 47 – East Makati
- Weigh Less Center - Registered with RDO No. 21
- Mendez Weigh Less Center - Registered with RDO No. 4 – Calasiao, Pangasinan

The original information reads:

That on or about the 15th day of April, 2002, at Quezon City, and within the jurisdiction of [the CTA] the above named accused, a duly registered taxpayer, and sole proprietor of "Weigh Less Center" with principal office at No. 31 Roces Avenue, Quezon City, and with several branches in Quezon City, Makati, San Fernando and Dagupan City, did then and there, wilfully, unlawfully and feloniously fail to file his Income Tax Return (ITR) with the Bureau of Internal Revenue for the taxable year 2001, to the damage and prejudice of the Government in the estimated amount of P1,089,439.08, exclusive of penalties, surcharges and interest.

CONTRARY TO LAW.10

The accused was arraigned11 and pleaded not guilty on March 5, 2007.12 On May 4, 2007, the prosecution filed a "Motion to Amend Information with Leave of Court."13 The amended information reads:

That on or about the 15th day of April, 2002, at Quezon City, and within the jurisdiction of [the CTA] the above named accused, doing business under the name and style of "Weigh Less Center"/"Mendez Medical Group", with several branches in Quezon City, Muntinlupa City, Mandaluyong City and Makati City, did then and there, wilfully, unlawfully and feloniously fail to file his income tax return (ITR) with the Bureau of Internal Revenue for income earned for the taxable year 2001, to the damage and prejudice of the Government in the estimated amount of P1,089,439.08, exclusive of penalties, surcharges and interest (underscoring and boldfacing in the original).14

The petitioner failed to file his comment to the motion within the required period; thus, on June 12, 2007, the CTA First Division granted the prosecution's motion.15 The CTA ruled that the prosecution's amendment is merely a formal one as it "merely states with additional precision something already contained in the original information."16 The petitioner failed to show that the defenses applicable under the original information can no longer be used under the amended information, since both the original and the amended information charge the petitioner with the same offense (violation of Section 255). The CTA observed:

the change in the name of his business to include the phrase "Mendez Medical Group" does not alter the fact that [petitioner] is being charged with failure to file his Income Tax Return... The change in the branches of his business, likewise, did not relieve [the petitioner] of his duty to file an ITR. In addition, the places where the accused conducts business does not affect the Court's jurisdiction... nor... change the nature of the offense charged, as only one [ITR] is demanded of every taxpayer. We likewise see no substantial difference on the information with the insertion of the phrase 'for income earned' for it merely stated the normal subject matter found in every income tax return.

The petitioner filed the present petition after the CTA denied his motion for reconsideration.17

There is no precise definition of what constitutes a substantial amendment. According to jurisprudence, substantial matters in the complaint or information consist of the recital of facts constituting the offense charged and determinative of the jurisdiction of the court.23 Under Section 14, however, the prosecution is given the right to amend the information, regardless of the nature of the amendment, so long as the amendment is sought before the accused enters his plea, subject to the qualification under the second paragraph of Section 14.

Section 255 of the NIRC penalizes any person required to pay any tax, make a return, or supply correct and accurate information, who wilfully fails to do so, with a fine of not less than ten thousand pesos (P10,000) and imprisonment of not less than one (1) year but not more than ten (10) years. [emphasis supplied]

Since the petitioner operates as a sole proprietor from taxable years 2001 to 2003, the petitioner should have filed a consolidated return in his principal place of business, regardless of the number and location of his other branches.
Consequently, we cannot but agree with the CTA that the change and/or addition of the branches of the petitioner's operation in the information does not constitute a substantial amendment because it does not change the prosecution's theory that the petitioner failed to file his income tax return.

In Matalam v. The 2nd Division of the Sandiganbayan,31 the amended information alleged that the accused:

[c]ause[d] undue injury by illegally dismissing from the service [several government] employees, xxx to their damage and prejudice amounting to P1,606,788.50 by way of unpaid salaries during the period when they have been illegally terminated including salary differentials and other benefits.32

The accused moved to dismiss the amended information for charging an entirely new cause of action and asked for preliminary investigation on this new charge of illegal dismissal.

Endnotes:

* Designated as additional member in lieu of Associate Justice Estela M. Perlas-Bernabe per Raffle dated June 9, 2014.
1. Under the Rules of Court.
2. Rollo, pp. 23-27 and 33-36, respectively.
3. In CTA Crim. Case No. 0-014.
4. Records, pp. 14-20.
5. Id. at 44-45.
6. Id. at 4-5.
7. Id. at 44-45.
8. Sections 254, 255, 257, and 267, in relation with Sections 51(A)(1)(a), 56(a)(1) and 74(A) of the NIRC.
9. Dated October 10, 2005; Records, Vol. 1, p. 1. Two other informations were filed against the petitioner based on the same facts, docketed as C.T.A. CRIM. NOS. 0-013 & 0-015.
10. Records, p. 327.
11. The CTA initially dismissed the information without prejudice for lack of probable cause (id. at 167-173), but on motion for reconsideration (id. at 190-214), the CTA reinstated the information on August 22, 2006 (id. at 271-273).
12. Id. at 326.
13. Rollo, pp. 54-56; id. at 484-486.
14. Records, p. 485.
15. Id. at 492-496, with Justice Caesar Cassanova dissenting, pp. 497-501.
16. Rollo, p. 25.
17. Id. at 41.
18. Citing, in petitioner's Reply, Matalam v. The 2nd Division of the Sandiganbayan, 495 Phil. 664, 675 (2005).
19. Citing, in his Reply, People v. Labatete, 107 Phil. 697 (1960).
20. Memorandum; rollo, p. 133.
21 Citing People v. Casey (1958).
22 This provision reads: SEC. 9. Appeal; period to appeal. – xxx. The Court may, for good cause, extend the time for filing of the petition for review for an additional period not exceeding fifteen days.
23 Almeda v. Judge Villaluz, 160 Phil. 750, 757 (1975).
24 See People v. Hon. Montenegro, 242 Phil. 655, 661 (1988).
25 160 Phil. 750 (1975).
26 G.R. No. 103102, March 6, 1992, 207 SCRA 134.
27 Guinto v. Veluz, 77 Phil. 801 (1946).
28 Even the Dissenting Opinion of Justice Cassanova (which the petitioner relies upon) correctly cited the alleged date of commission of the offense as “15th day of April 2002...” and yet the petitioner insists that “this [referring to the year 2002] should have been 2001.” (Records, p. 547; rollo, p. 12)
29 Section 51 A 1(a), 2(a) and 4(a).
30 Section 51 C.
31 G.R. No. 165751, April 12, 2005, 455 SCRA 736.
32 Id. at 740.
33 Id. at 749.
34 Citing 2 CJS Sec. 240, pp. 1249-1250.
35 In Montenegro, the accused were charged with “robbery” as accessories after the fact. The prosecution sought to amend the information to (i) charge “robbery in an uninhabited place” instead; and (ii) delete all items and articles allegedly stolen in the original information, substituting them with a different set of items. The Court disallowed the amendment for being substantial. The Court said that changing the items affects the essence of the imputed crime, and would deprive the accused of the opportunity to meet all the allegations in the amended information in the preparation of their defenses to the charge filed against them. In this case, in fact, the principal in the crime of robbery had been earlier convicted for taking the same items alleged in the information against the accused.
36 The Court took into account the fact that the first cause of action is related to, and arose from, the second cause of action, as this circumstance would ordinarily negate the need for a new preliminary investigation.
However, since it was not shown that the accused had already touched the issue of evident bad faith or manifest partiality in the preliminary investigation as to the alleged illegal dismissal, the Court ordered that the accused be given opportunity to thoroughly adduce evidence on the matter.
37 Per petitioner’s own petition, he indicated his address as follows: No. 31-G Roces Avenue, Quezon City.
38 People v. Casey, No. L-30146, February 24, 1981, 103 SCRA 21.
39 Juasing Hardware v. Hon. Mendoza, etc., et al., 201 Phil. 369 (1982); and Mangila v. Court of Appeals, 435 Phil. 870 (2002).
40 Records, Volume 1, pp. 144-149. In fact, in the certification issued by the Philippine Star in connection with petitioner’s paid advertisements, it confirmed the prosecution’s position when it stated that petitioner requested it “to advertise his businesses in the names of Weighless Center/Body and Face by Mendez/Mendez Medical Group” (id. at 219).
http://lawlibrary.chanrobles.com/index.php?option=com_content&view=article&id=83172:57143&catid=1584&Itemid=566
You might be thinking by now that we will next be talking about the Either type in Scala to achieve the composition of methods. Although we could do that, it is quite different from the desired functionality, and we would not use it here for the following two reasons:
- The Either type works when you explicitly define what will be kept within Left (generally depicting failure) or Right (generally depicting success), i.e. your code determines what kind of exception message will be sent in the Left instance and what return value you want in the Right instance.
- In order to define the correct message for each exception that might occur, you might need another try-catch block for the different exception cases.

Instead of using try-catch blocks all over the code, we should rather work with scala.util.Try to handle all the exceptions in the code, as it is easier to implement, compose and transform. We will see the usage of Try in this blog, but we are only looking at ways of handling exceptions in methods that are sequential in nature. If we need to work on concurrent methods that may run on different threads simultaneously, Futures in Scala provide direct exception handling support via recover.

Instead of wrapping your code in a try-catch block, just wrap it within a Try block.

import scala.util.{Try, Success, Failure}

val result: Try[Int] = Try(riskyCodeInvoked("Exception Expected in certain cases"))

result match {
  case Success(res) => info("Operation Was successful")
  case Failure(ex: ArithmeticException) => error("ArithmeticException occurred", ex)
  case Failure(ex) => error("Some Exception occurred", ex)
}

Now, to explore further the advantages of using Try instead of a try-catch block, consider the following example:

def riskyCodeInvoked(input: String): Int = ???

def anotherRiskyMethod(firstOutput: Int): String = ???

def yetAnotherRiskyMethod(secondOutput: String): Try[String] = ???
val result: Try[String] = Try(riskyCodeInvoked("Exception Expected in certain cases"))
  .map(anotherRiskyMethod(_))
  .flatMap(yetAnotherRiskyMethod(_))

result match {
  case Success(res) => info("Operation Was successful")
  case Failure(ex: ArithmeticException) => error("ArithmeticException occurred", ex)
  case Failure(ex) => error("Some Exception occurred", ex)
}

In the above code we are composing different methods together using transformations on the Try type. It removes all the boilerplate try-catch blocks and also allows you to handle all exceptions in one place. If the method riskyCodeInvoked throws an exception, control will not go to anotherRiskyMethod, and so forth.

If you are more comfortable with the Option type, you can directly convert a Try like this:

val result: Try[String] = ???
val resultOpt: Option[String] = result.toOption

resultOpt.map { case res: String => info("Operation Was successful") }

There are other methods defined on the Try type as well, like recover, getOrElse, orElse, isSuccess, isFailure, filter etc. Once you get used to it, you might not want to use try-catch blocks ever again 😉 so try it out in your code and enjoy the simplicity of Scala.

Reblogged this on Anurag Srivastava.

Reblogged this on Bharat Singh and commented:
`Try` is far better than `try` in Scala. Let's follow the best practices in our coding style to improve our application's efficiency and fault tolerance.

Thank you for this blog post. Very nice. I have one point.

val result: Try[String] = Try(riskyCodeInvoked("Exception Expected in certain cases"))
  .map(anotherRiskyMethod(_))
  .flatMap(yetAnotherRiskyMethod(_))

Am I right thinking that .map shouldn't use anotherRiskyMethod? Risky methods should return Try and these are chained via flatMap to get rid of Try nesting. It can even mislead the reader into thinking that anotherRiskyMethod is risky because it throws, which would be an anti-pattern.

Hi Petr, thanks for the question.
From the sample code in this blog, what I really meant to show was that you could compose methods using the map(), flatMap() and filter() methods. The usage is similar to what you might use with the List or Map collection types. If `anotherRiskyMethod()` throws an exception it would still be handled by the enclosing Try, as we have a map in between the two methods. It is not necessary to write methods returning a Try type itself if you are composing them with other methods of Try type, such as `riskyCodeInvoked()` in the sample. Although you are right that the best practice would be to use Try as the return type for all the risky methods and use flatMap or a for comprehension to compose these methods together.
https://blog.knoldus.com/2017/04/10/try-handling-exceptions-with-grace-in-scala-functional-way-with-scala/
The use of structs is illustrated in various examples; the concept of a class is introduced; casting is covered in detail; many new types are introduced and several important notational extensions to C are discussed.

Although const is part of the C grammar, its use is more important and much more common and strictly used in C++ than it is in C. The const keyword is a modifier stating that the value of a variable or of an argument may not be modified. In the following example the intent is to change the value of the variable ival, which fails:

    int main()
    {
        int const ival = 3;     // a constant int
                                // initialized to 3
        ival = 4;               // assignment produces
                                // an error message
    }

This example shows how ival may be initialized to a given value in its definition; attempts to change the value later (in an assignment) are not permitted. Variables that are declared const can, in contrast to C, be used to specify the size of an array, as in the following example:

    int const size = 20;
    char buf[size];             // 20 chars big

Another use of the keyword const is seen in the declaration of pointers, e.g., in pointer-arguments. In the declaration

    char const *buf;

buf is a pointer variable pointing to chars. Whatever is pointed to by buf may not be changed through buf: the chars are declared as const. The pointer buf itself, however, may be changed. A statement like *buf = 'a'; is therefore not allowed, while ++buf is. In the declaration

    char *const buf;

buf itself is a const pointer which may not be changed. Whatever chars are pointed to by buf may be changed at will. Finally, the declaration

    char const *const buf;

is also possible; here, neither the pointer nor what it points to may be changed. The rule of thumb for the placement of the keyword const is the following: whatever occurs to the left of the keyword may not be changed. Although simple, this rule of thumb is often used.
For example, Bjarne Stroustrup states (in his FAQ):

    Should I put "const" before or after the type? I put it before, but
    that's a matter of taste. "const T" and "T const" were always (both)
    allowed and equivalent. For example:

        const int a = 1;        // OK
        int const b = 2;        // also OK

    My guess is that using the first version will confuse fewer
    programmers (``is more idiomatic'').

But we've already seen an example where applying this simple `before' placement rule for the keyword const produces unexpected (i.e., unwanted) results, as we will shortly see (below). Furthermore, the `idiomatic' before-placement also conflicts with the notion of const functions, which we will encounter in section 7.7. With const functions the keyword const is also placed behind rather than before the name of the function.

The definition or declaration (whether or not containing const) should always be read from the variable or function identifier back to the type identifier:

    ``Buf is a const pointer to const characters''

This rule of thumb is especially useful in cases where confusion may occur. In examples of C++ code published in other places one often encounters the reverse: const preceding what should not be altered. That this may result in sloppy code is indicated by our second example above:

    char const *buf;

What must remain constant here? According to the sloppy interpretation, the pointer cannot be altered (as const precedes the pointer). In fact, the char values are the constant entities here, as becomes clear when we try to compile the following program:

    int main()
    {
        char const *buf = "hello";

        ++buf;                  // accepted by the compiler
        *buf = 'u';             // rejected by the compiler
    }

Compilation fails on the statement *buf = 'u'; and not on the statement ++buf. Marshall Cline's C++ FAQ gives the same rule (paragraph 18.5), in a similar context:

    [18.5] What's the difference between "const Fred* p",
    "Fred* const p" and "const Fred* const p"?

Marshall Cline's advice can be improved, though.
Here's a recipe that will effortlessly dissect even the most complex declaration (Cline's own rule being: you have to read pointer declarations right-to-left):

    char const *(* const (*(*ip)())[])[]

    ip                                      Start at the variable's name:      'ip is'
    ip)                                     Hitting a closing paren: revert
    (*ip)                                   Find the matching open paren:      'a pointer to'
    (*ip)())                                The next unmatched closing paren:  'a function (not expecting arguments)'
    (*(*ip)())                              Find the matching open paren:      'returning a pointer to'
    (*(*ip)())[])                           The next closing paren:            'an array of'
    (* const (*(*ip)())[])                  Find the matching open paren:      'const pointers to'
    (* const (*(*ip)())[])[]                Read until the end:                'an array of'
    char const *(* const (*(*ip)())[])[]    Read backwards what's left:        'pointers to const chars'

Collecting all the parts, we get for char const *(* const (*(*ip)())[])[]: ip is a pointer to a function (not expecting arguments), returning a pointer to an array of const pointers to an array of pointers to const chars. This is what ip represents; the recipe can be used to parse any declaration you ever encounter.

Namespaces serve situations where, e.g., someone might want a sin operating on degrees, but does not want to lose the capability of using the standard sin function, operating on radians. Namespaces are covered extensively in chapter 4. For now it should be noted that most compilers require the explicit declaration of a standard namespace: std. So, unless otherwise indicated, it is stressed that all examples in the Annotations now implicitly use the

    using namespace std;

declaration. So, if you actually intend to compile examples given in the C++ Annotations, make sure that the sources start with the above using declaration.
C++ introduces the scope resolution operator (::). This operator can be used in situations where a global variable exists having the same name as a local variable:

    #include <stdio.h>

    double counter = 50;                // global variable

    int main()
    {
        for (int counter = 1;           // this refers to the
                counter != 10;          // local variable
                    ++counter)
        {
            printf("%d\n",
                ::counter               // global variable
                    /                   // divided by
                    counter);           // local variable
        }
    }

In the above program the scope operator is used to address a global variable instead of the local variable having the same name. In C++ the scope operator is used extensively, but it is seldom used to reach a global variable shadowed by an identically named local variable. Its main purpose is encountered in chapter 7.

Analogously to C, C++ defines standard input- and output streams:

- cout, analogous to stdout,
- cin, analogous to stdin,
- cerr, analogous to stderr.

    #include <iostream>
    using namespace std;

    int main()
    {
        int ival;
        char sval[30];

        cout << "Enter a number:\n";
        cin >> ival;
        cout << "And now a string:\n";
        cin >> sval;

        cout << "The number is: " << ival << "\n"
                "And the string is: " << sval << '\n';
    }

This program reads a number and a string from the cin stream (usually the keyboard) and prints these data to cout. With respect to streams, please note:

- The streams are declared in the header file iostream. In the examples in the C++ Annotations this header file is often not mentioned explicitly. Nonetheless, it must be included (either directly or indirectly) when these streams are used. Comparable to the use of the using namespace std; clause, the reader is expected to #include <iostream> with all the examples in which the standard streams are used.
- cout, cin and cerr are variables of so-called class-types. Such variables are commonly called objects. Classes are discussed in detail in chapter 7 and are used extensively in C++.
- cin extracts data from a stream and copies the extracted information to variables (e.g., ival in the above example) using the extraction operator (two consecutive > characters: >>).
Later in the Annotations we will describe how operators in C++ can perform quite different actions than what they are defined to do by the language, as is the case here. Function overloading has already been mentioned. In C++ operators can also have multiple definitions, which is called operator overloading.

- The operators used with cin, cout and cerr (i.e., >> and <<) also manipulate variables of different types. In the above example cout << ival results in the printing of an integer value, whereas cout << "Enter a number" results in the printing of a string. The actions of the operators therefore depend on the types of supplied variables.
- A line may be terminated by inserting "\n" or '\n'. But when inserting the endl symbol the line is terminated followed by the flushing of the stream's internal buffer. Thus, endl can usually be avoided in favor of '\n', resulting in somewhat more efficient code.
- The streams cin, cout and cerr are not part of the C++ grammar proper. The streams are part of the definitions in the header file iostream. This is comparable to functions like printf that are not part of the C grammar, but were originally written by people who considered such functions important and collected them in a run-time library.

A program may still use the old-style functions like printf and scanf rather than the new-style streams. The two styles can even be mixed. But streams offer several clear advantages and in many C++ programs have completely replaced the old-style C functions. Some advantages of using streams are:

- With printf and scanf wrong format specifiers may be defined for their arguments, for which the compiler sometimes can't warn. In contrast, argument checking with cin, cout and cerr is performed by the compiler. Consequently it isn't possible to err by providing an int argument in places where, according to the format string, a string argument should appear. With streams there are no format strings.
- printf and scanf (and other functions using format strings) in fact implement a mini-language which is interpreted at run-time.
  In contrast, with streams the C++ compiler knows exactly which in- or output action to perform given the arguments used. No mini-language here.
- printf cannot be extended, whereas the stream operators can also be made available for new types, keeping the style of use of cin, cout and cerr.

In chapter 6 iostreams are covered in greater detail. Even though printf and friends can still be used in C++ programs, streams have practically replaced the old-style C I/O functions like printf. If you think you still need to use printf and related functions, think again: in that case you've probably not yet completely grasped the possibilities of stream objects.

In C++, functions may be defined as part of structs (see section 2.5.13). Such functions are called member functions. This section briefly discusses how to define such functions.

The code fragment below shows a struct having data fields for a person's name and address. A function print is part of the struct's definition:

    struct Person
    {
        char name[80];
        char address[80];

        void print();
    };

When defining the member function print, the structure's name (Person) and the scope resolution operator (::) are used:

    void Person::print()
    {
        cout << "Name:    " << name << "\n"
                "Address: " << address << '\n';
    }

The implementation of Person::print shows how the fields of the struct can be accessed without using the structure's type name. Here the function Person::print prints a variable name. Since Person::print is itself a part of struct Person, the variable name implicitly refers to the same type.

This struct Person could be used as follows:

    Person person;

    strcpy(person.name, "Karel");
    strcpy(person.address, "Marskramerstraat 33");
    person.print();

The advantage of member functions is that the called function automatically accesses the data fields of the structure for which it was invoked. In the statement person.print() the object person is the `substrate': the variables name and address that are used in the code of print refer to the data stored in the person object.

C++ has three keywords that are related to data hiding: private, protected and public. These keywords can be used in the definition of structs.
The keyword public allows all subsequent fields of a structure to be accessed by all code; the keyword private only allows code that is part of the struct itself to access subsequent fields. The keyword protected is discussed in chapter 13, and is somewhat outside of the scope of the current discussion. In a struct all fields are public, unless explicitly stated otherwise. Using this knowledge we can expand the struct Person:

    struct Person
    {
        private:
            char d_name[80];
            char d_address[80];
        public:
            void setName(char const *n);
            void setAddress(char const *a);
            void print();
            char const *name();
            char const *address();
    };

As the data fields d_name and d_address are in a private section they are only accessible to the member functions which are defined in the struct: these are the functions setName, setAddress etc.. As an illustration consider the following code:

    Person fbb;

    fbb.setName("Frank");               // OK, setName is public
    strcpy(fbb.d_name, "Knarf");        // error, fbb.d_name is private

Data integrity is implemented as follows: the actual data of a struct Person are mentioned in the structure definition. The data are accessed by the outside world using special functions that are also part of the definition. These member functions control all traffic between the data fields and other parts of the program and are therefore also called `interface' functions. The thus implemented data hiding is illustrated in Figure 2. The members setName and setAddress are declared with char const * parameters. This indicates that the functions will not alter the strings which are supplied as their arguments. Analogously, the members name and address return char const *s: the compiler prevents callers of those members from modifying the information made accessible through the return values of those members.
Two examples of member functions of the struct Person are shown below:

    void Person::setName(char const *n)
    {
        strncpy(d_name, n, 79);
        d_name[79] = 0;
    }

    char const *Person::name()
    {
        return d_name;
    }

The power of member functions and of the concept of data hiding results from the abilities of member functions to perform special tasks, e.g., checking the validity of the data. In the above example setName copies only up to 79 characters from its argument to the data member d_name, thereby avoiding a buffer overflow.

Another illustration of the concept of data hiding is the following. As an alternative to member functions that keep their data in memory a library could be developed featuring member functions storing data on file. To convert a program storing Person structures in memory to one that stores the data on disk no special modifications are required. After recompilation and linking the program to a new library it is converted from storage in memory to storage on disk. This example illustrates a broader concept than data hiding; it illustrates encapsulation. Data hiding is a kind of encapsulation. Encapsulation in general results in reduced coupling of different sections of a program. This in turn greatly enhances reusability and maintainability of the resulting software. By having the structure encapsulate the actual storage medium the program using the structure becomes independent of the actual storage medium that is used.

Though data hiding can be implemented using structs, more often (almost always) classes are used instead. A class is a kind of struct, except that a class uses private access by default, whereas structs use public access by default. The definition of a class Person is therefore identical to the one shown above, except for the fact that the keyword class has replaced struct while the initial private: clause can be omitted.
Our typographic suggestion for class names (and other type names defined by the programmer) is to start with a capital character to be followed by the remainder of the type name using lower case letters (e.g., Person).

In C, functions manipulating a structure are defined apart from the structs, which then require a pointer to the struct as one of their arguments. An imaginary C header file showing this concept is:

    /* definition of a struct PERSON        This is C */
    typedef struct
    {
        char name[80];
        char address[80];
    } PERSON;

    /* some functions to manipulate PERSON structs */

    /* initialize fields with a name and address */
    void initialize(PERSON *p, char const *nm, char const *adr);

    /* print information */
    void print(PERSON const *p);

    /* etc.. */

In C++, the declarations of the involved functions are put inside the definition of the struct or class. The argument denoting which struct is involved is no longer needed.

    class Person
    {
        char d_name[80];
        char d_address[80];

        public:
            void initialize(char const *nm, char const *adr);
            void print();
            // etc..
    };

In C++ the struct parameter is not used. A C function call such as:

    PERSON x;
    initialize(&x, "some name", "some address");

becomes in C++:

    Person x;
    x.initialize("some name", "some address");

A reference is a synonym (alias) for a variable:

    int int_value;
    int &ref = int_value;

In the above example a variable int_value is defined. Subsequently a reference ref is defined, which (due to its initialization) refers to the same memory location as int_value. In the definition of ref, the reference operator & indicates that ref is not itself an int but a reference to one. The two statements

    ++int_value;
    ++ref;

have the same effect: they increment int_value's value. Whether that location is called int_value or ref does not matter. References serve an important function in C++ as a means to pass modifiable arguments to functions.
E.g., in standard C, a function that increases the value of its argument by five and returns nothing needs a pointer parameter:

    void increase(int *valp)    // expects a pointer
    {                           // to an int
        *valp += 5;
    }

    int main()
    {
        int x;

        increase(&x);           // pass x's address
    }

This construction can also be used in C++ but the same effect is also achieved using a reference:

    void increase(int &valr)    // expects a reference
    {                           // to an int
        valr += 5;
    }

    int main()
    {
        int x;

        increase(x);            // passed as reference
    }

It is arguable whether code such as the above should be preferred over C's method, though. The statement increase(x) suggests that not x itself but a copy is passed. Yet the value of x changes because of the way increase() is defined. However, references can also be used to pass objects that are only inspected (without the need for a copy or a const *) or to pass objects whose modification is an accepted side-effect of their use. In those cases using references is strongly preferred over existing alternatives like copy by value or passing pointers.

Behind the scenes references are implemented using pointers. So, as far as the compiler is concerned references in C++ are just const pointers. With references, however, the programmer does not need to know or to bother about levels of indirection. An important distinction between plain pointers and references is of course that with references no indirection takes place. For example:

    extern int *ip;
    extern int &ir;

    ip = 0;     // reassigns ip, now a 0-pointer
    ir = 0;     // ir unchanged, the int variable it refers to
                // is now 0
In order to prevent confusion, we suggest to adhere to the following:

- When a function doesn't change its argument of a built-in or pointer type, a plain value parameter can be used:

    void some_func(int val)
    {
        cout << val << '\n';
    }

    int main()
    {
        int x;

        some_func(x);       // a copy is passed
    }

- When a function explicitly must change its argument a pointer parameter is used; when the argument is only inspected, a reference to a const object can be used:

    void by_pointer(int *valp)
    {
        *valp += 5;
    }

    void by_reference(string const &str)
    {
        cout << str;        // no modification of str
    }

    int main()
    {
        int x = 7;

        by_pointer(&x);     // a pointer is passed
                            // x might be changed

        string str("hello");

        by_reference(str);  // str is not altered
    }

References play an important role in cases where the argument is not changed by the function but where it is undesirable to copy the argument to initialize the parameter. Such a situation occurs when a large object is passed as argument, or is returned by the function. In these cases the copying operation tends to become a significant factor, as the entire object must be copied. In these cases references are preferred. If the argument isn't modified by the function, or if the caller shouldn't modify the returned information, the const keyword should be used. Consider the following example:

    struct Person           // some large structure
    {
        char name[80];
        char address[90];
        double salary;
    };

    Person person[50];      // database of persons

        // printperson expects a reference to a structure
        // but won't change it
    void printperson(Person const &p)
    {
        cout << "Name:    " << p.name << '\n' <<
                "Address: " << p.address << '\n';
    }

        // get a person by index value
    Person const &personIdx(int index)
    {
        return person[index];   // a reference is returned,
    }                           // not a copy of person[index]

    int main()
    {
        Person boss;

        printperson(boss);          // no pointer is passed,
                                    // so variable won't be
                                    // altered by the function

        printperson(personIdx(5));  // references, not copies
                                    // are passed here
    }

References could result in extremely `ugly' code.
A function may return a reference to a variable, as in the following example:

    int &func()
    {
        static int value;
        return value;
    }

This allows the use of the following constructions:

    func() = 20;
    func() += func();

It is probably superfluous to note that such constructions should normally not be used. Nonetheless, there are situations where it is useful to return a reference. We have actually already seen an example of this phenomenon in our previous discussion of streams. In a statement like cout << "Hello" << '\n'; the insertion operator returns a reference to cout. So, in this statement first the "Hello" is inserted into cout, producing a reference to cout. Through this reference the '\n' is then inserted in the cout object, again producing a reference to cout, which is then ignored.

Several differences between pointers and references are pointed out in the list below:

- A reference cannot exist by itself: a plain declaration like int &ref; leaves the compiler wondering: what would ref refer to? References can, however, be declared external; such references were initialized elsewhere.
- When the address-of operator & is used with a reference, the expression yields the address of the variable to which the reference applies. In contrast, ordinary pointers are variables themselves, so the address of a pointer variable has nothing to do with the address of the variable pointed to.
- Like pointers, references may refer to constant entities: const & types.

C++ introduces a new reference type called an rvalue reference, which is defined as typename &&. The name rvalue reference is derived from assignment statements, where the variable to the left of the assignment operator is called an lvalue and the expression to the right of the assignment operator is called an rvalue. Rvalues are often temporary, anonymous values, like values returned by functions. In this parlance the C++ reference should be considered an lvalue reference (using the notation typename &). They can be contrasted to rvalue references (using the notation typename &&). The key to understanding rvalue references is the concept of an anonymous variable.
An anonymous variable has no name and this is the distinguishing feature for the compiler to associate it automatically with an rvalue reference if it has a choice. Before introducing some interesting constructions let's first have a look at some standard situations where lvalue references are used. The following function returns a temporary (anonymous) value:

    int intVal()
    {
        return 5;
    }

Although intVal's return value can be assigned to an int variable it requires copying, which might become prohibitive when a function does not return an int but instead some large object. A reference or pointer cannot be used either to collect the anonymous return value, as the return value won't survive beyond that. So the following is illegal (as noted by the compiler):

    int &ir = intVal();         // fails: refers to a temporary
    int const &ic = intVal();   // OK: immutable temporary
    int *ip = &intVal();        // fails: no lvalue available

Apparently it is not possible to modify the temporary returned by intVal. But now consider these functions:

    void receive(int &value)    // note: lvalue reference
    {
        cout << "int value parameter\n";
    }

    void receive(int &&value)   // note: rvalue reference
    {
        cout << "int R-value parameter\n";
    }

and let's call this function from main:

    int main()
    {
        receive(18);

        int value = 5;
        receive(value);
        receive(intVal());
    }

This program produces the following output:

    int R-value parameter
    int value parameter
    int R-value parameter

The program's output shows the compiler selecting receive(int &&value) in all cases where it receives an anonymous int as its argument. Note that this includes receive(18): a value 18 has no name and thus receive(int &&value) is called. Internally, it actually uses a temporary variable to store the 18, as is shown by the following example which modifies receive:

    void receive(int &&value)
    {
        ++value;
        cout << "int R-value parameter, now: " << value << '\n';
        // displays 19 and 6, respectively
    }

Contrasting receive(int &value) with receive(int &&value) has nothing to do with int &value not being a const reference. If receive(int const &value) is used the same results are obtained. Bottom line: the compiler selects the overloaded function using the rvalue reference if the function is passed an anonymous value.

The compiler runs into problems if void receive(int &value) is replaced by void receive(int value), though. When confronted with the choice between a value parameter and a reference parameter (either lvalue or rvalue) it cannot make a decision and reports an ambiguity. In practical contexts this is not a problem. Rvalue references were added to the language in order to be able to distinguish the two forms of references: named values (for which lvalue references are used) and anonymous values (for which rvalue references are used). It is this distinction that allows the implementation of move semantics and perfect forwarding.

At this point the concept of move semantics cannot yet fully be discussed (but see section 9.7 for a more thorough discussion) but it is very well possible to illustrate the underlying ideas.

Consider the situation where a function returns a struct Data containing a pointer to dynamically allocated characters. Moreover, the struct defines a member function copy(Data const &other) that takes another Data object and copies the other's data into the current object. The (partial) definition of the struct Data might look like this (to the observant reader: in this example the memory leak that results from using Data::copy() should be ignored):
The (partial) definition of the struct Data might look like this (to the observant reader: in this example the memory leak that results from using Data::copy() should be ignored):

    struct Data
    {
        char *text;
        size_t size;

        void copy(Data const &other)
        {
            text = strdup(other.text);
            size = strlen(text);
        }
    };

Next, functions dataFactory and main are defined as follows:

    Data dataFactory(char const *txt)
    {
        Data ret = {strdup(txt), strlen(txt)};
        return ret;
    }

    int main()
    {
        Data d1 = {strdup("hello"), strlen("hello")};
        Data d2;
        d2.copy(d1);                        // 1 (see text)

        Data d3;
        d3.copy(dataFactory("hello"));      // 2
    }

At (1) d2 appropriately receives a copy of d1's text. But at (2) d3 receives a copy of the text stored in the temporary returned by the dataFactory function. As the temporary ceases to exist after the call to copy() two related and unpleasant consequences are observed:

- d3 copies the temporary's data, which clearly is somewhat overdone, as the temporary ceases to exist anyway;
- the temporary Data object is lost following the call to copy(). Unfortunately its dynamically allocated data is lost as well, resulting in a memory leak.

By overloading the copy member with a member copy(Data &&other) the compiler is able to distinguish situations (1) and (2). It now calls the initial copy() member in situation (1) and the newly defined overloaded copy() member in situation (2):

    struct Data
    {
        char *text;
        size_t size;

        void copy(Data const &other)
        {
            text = strdup(other.text);
        }
        void copy(Data &&other)
        {
            text = other.text;
            other.text = 0;
        }
    };

Note that the overloaded copy() function merely moves the other.text pointer to the current object's text pointer, followed by reassigning 0 to other.text. Struct Data suddenly has become move-aware and implements move semantics, removing the drawbacks of the previously shown approach: since other.text doesn't point to the dynamically allocated memory anymore the memory leak is prevented.

Historically, the C programming language distinguished between lvalues and rvalues.
The terminology was based on assignment expressions, where the expression to the left of the assignment operator receives a value (e.g., it referred to a location in memory where a value could be written into, like a variable), while the expression to the right of the assignment operator only had to represent a value (it could be a temporary variable, a constant value or the value stored in a variable):

    lvalue = rvalue;

C++ adds to this basic distinction several new ways of referring to expressions:

- lvalue: an lvalue in C++ has the same meaning as in C. It refers to a location where a value can be stored, like a variable, a reference to a variable, or a dereferenced pointer.

- xvalue: an xvalue indicates an expiring value. An expiring value refers to an object (cf. chapter 7) just before its lifetime ends. Such objects normally have to make sure that resources they own (like dynamically allocated memory) also cease to exist, but such resources may, just before the object's lifetime ends, be moved to another location, thus preventing their destruction.

- glvalue: a glvalue is a generalized lvalue. A generalized lvalue refers to anything that may receive a value. It is either an lvalue or an xvalue.

- prvalue: a prvalue is a pure rvalue: a literal value (like 1.2e3) or an immutable object (e.g., the value returned from a function returning a constant std::string (cf. chapter 5)).

An expression's value is an xvalue if it is:

- (the result of calling) a function returning an rvalue reference;
- a cast to an rvalue reference;
- a member access of an xvalue object;
- a .* (pointer-to-member) expression (cf. chapter 16) in which the left-hand side operand is an xvalue and the right-hand side operand is a pointer to a data member.

Here is a small example. Consider this simple struct:

    struct Demo
    {
        int d_value;
    };

In addition we have these function declarations and definitions:

    Demo &&operator+(Demo const &lhs, Demo const &rhs);
    Demo &&factory();

    Demo demo;
    Demo &&rref = static_cast<Demo &&>(demo);

Expressions like

    factory();
    factory().d_value;
    static_cast<Demo &&>(demo);
    demo + demo

are xvalues.
However, the expression rref; is an lvalue. In many situations it's not particularly important to know what kind of glvalue or what kind of rvalue is actually used. In the C++ Annotations the term lhs (left hand side) is frequently used to indicate an operand that's written to the left of a binary operator, while the term rhs (right hand side) is frequently used to indicate an operand that's written to the right of a binary operator. Lhs and rhs operands could actually be glvalues (e.g., when representing ordinary variables), but they could also be prvalues (e.g., numeric values added together using the addition operator). Whether or not lhs and rhs operands are glvalues can always be determined from the context in which they are used.

A drawback of the traditional enum type is that its values are essentially int values, thereby bypassing type safety. E.g., values of different enumeration types may be compared for (in)equality, albeit through a (static) type cast. Another problem with the traditional enum type is that their values are not restricted to the enum type name itself, but to the scope where the enumeration is defined. As a consequence, two enumerations having the same scope cannot have identically named values. Such problems are solved by defining enum classes. An enum class can be defined as in the following example:

    enum class SafeEnum
    {
        NOT_OK,     // 0, by implication
        OK = 10,
        MAYBE_OK    // 11, by implication
    };

Enum classes use int values by default, but the used value type can easily be changed using the `: type' notation, as in:

    enum class CharEnum: unsigned char
    {
        NOT_OK,
        OK
    };

To use a value defined in an enum class its enumeration name must be provided as well. E.g., OK is not defined, but CharEnum::OK is. Using the data type specification (noting that it defaults to int) it is possible to use enum class forward declarations.
E.g.,

    enum Enum1;                 // Illegal: no size available
    enum Enum2: unsigned int;   // Legal: explicitly declared type
    enum class Enum3;           // Legal: default int type is used
    enum class Enum4: char;     // Legal: explicitly declared type

A sequence of symbols of a strongly typed enumeration can also be indicated in a switch using the ellipsis syntax, as shown in the next example:

    SafeEnum enumValue();

    switch (enumValue())
    {
        case SafeEnum::NOT_OK ... SafeEnum::OK:
            cout << "Status is known\n";
        break;

        default:
            cout << "Status unknown\n";
        break;
    }

Like C, C++ supports initializer lists, allowing aggregates to be initialized from brace-enclosed lists of values. C++ extends this concept by introducing the type initializer_list<Type>, where Type is replaced by the type name of the values used in the initializer list. Initializer lists in C++ are, like their counterparts in C, recursive, so they can also be used with multi-dimensional arrays, structs and classes. Before using the initializer_list the <initializer_list> header file must be included. Like in C, initializer lists consist of a list of values surrounded by curly braces. But unlike C, functions can define initializer list parameters. E.g.,

    void values(std::initializer_list<int> iniValues)
    {}

A function like values could be called as follows:

    values({2, 3, 5, 7, 11, 13});

The initializer list appears as an argument which is a list of values surrounded by curly braces. Due to the recursive nature of initializer lists a two-dimensional series of values can also be passed, as shown in the next example:

    void values2(std::initializer_list<std::initializer_list<int>> iniValues)
    {}

    values2({{1, 2}, {2, 3}, {3, 5}, {4, 7}, {5, 11}, {6, 13}});

Initializer lists are constant expressions and cannot be modified.
However, their size and values may be retrieved using their size, begin, and end members as follows:

    void values(initializer_list<int> iniValues)
    {
        cout << "Initializer list having " << iniValues.size() << " values\n";

        for
        (
            initializer_list<int>::const_iterator begin = iniValues.begin();
                begin != iniValues.end();
                    ++begin
        )
            cout << "Value: " << *begin << '\n';
    }

Initializer lists can also be used to initialize objects of classes (cf. section 7.5).

auto can be used to simplify type definitions of variables and return types of functions if the compiler is able to determine the proper types of such variables or functions. This can be very useful in situations where it is very hard to determine the variable's type in advance. These situations occur, e.g., in the context of templates, topics covered in chapters 18 until 23. At this point in the Annotations only simple examples can be given; some hints will also be provided about more general uses of the auto keyword. Note that the use of auto as a storage class specifier is no longer supported by C++: a variable definition like auto int var results in a compilation error.

When defining and initializing a variable int variable = 5 the type of the initializing expression is well known: it's an int, and unless the programmer's intentions are different this could be used to define variable's type (although it shouldn't in normal circumstances as it reduces rather than improves the clarity of the code):

    auto variable = 5;

Here are some examples where using auto is useful. In chapter 5 the iterator concept is introduced (see also chapters 12 and 18). Iterators sometimes have long type definitions, like

    std::vector<std::string>::const_reverse_iterator

Functions may return types like this. Since the compiler knows the types returned by functions we may exploit this knowledge using auto.
Assuming that a function begin() is declared as follows:

    std::vector<std::string>::const_reverse_iterator begin();

then, rather than writing the verbose variable definition (at // 1) a much shorter definition (at // 2) may be used:

    std::vector<std::string>::const_reverse_iterator iter = begin();    // 1
    auto iter = begin();                                                // 2

It's easy to define additional variables of this type. When initializing those variables using iter the auto keyword can be used again:

    auto start = iter;

Merely using auto always results in a non-reference type. If auto should refer to a reference type, auto && should be used. If start can't be initialized immediately using an existing variable, the type of a well known variable or function can be used in combination with the decltype keyword, as in:

    decltype(iter) start;
    decltype(begin()) spare;

The keyword decltype may also receive an expression as its argument. E.g., decltype(3 + 5) represents an int, decltype(3 / double(3)) represents a double. Different from auto, the type deduced by decltype may either be a value or a reference type, depending on the kind of expression that is passed to decltype. E.g., if int intVal and int &&intTmp() are available, then

    decltype(intVal) iv(3);                 // iv is an int
    decltype( (intVal) ) iref(intVal);      // iref is an int &
    decltype(intTmp()) tmpRef(intTmp());    // tmpRef is an int &&

In addition, decltype(auto) specifications, in which case decltype's rules are applied to auto, are supported since the C++14 standard. E.g.,

    decltype(auto) iref2( (intVal) );       // iref2 is an int &
    auto iref3( (intVal) );                 // iref3 is an int

The auto keyword can also be used to postpone the definition of a function's return type. The declaration of a function intArrPtr returning a pointer to an array of 10 ints looks like this:

    int (*intArrPtr())[10];

Such a declaration is fairly complex. E.g., among other complexities it requires `protection of the pointer' using parentheses in combination with the function's parameter list.
In situations like these the specification of the return type can be postponed using the auto return type, followed by the specification of the function's return type after any other specification the function might receive (e.g., as a const member (cf. section 7.7) or following its noexcept specification (cf. section 23.7)). Using auto to declare the above function, the declaration becomes:

    auto intArrPtr() -> int (*)[10];

A return type specification using auto is called a late-specified return type.

The auto keyword can also be used to define types that are related to the actual auto-associated type. Here are some examples:

    vector<int> vi;
    auto iter = vi.begin();     // standard: auto is vector<int>::iterator
    auto &&rref = vi.begin();   // auto is rvalue ref. to the iterator type
    auto *ptr = &iter;          // auto is pointer to the iterator type
    auto *ptr2 = &rref;         // same

Since the C++14 standard late return type specifications are no longer required for functions returning auto. Such functions can now simply be declared like this:

    auto autoReturnFunction();

In this case some restrictions apply, both to the function definitions and the function declarations:

- functions returning auto cannot be used before the compiler has seen their definitions. So they cannot be used after mere declarations;
- if functions returning auto are implemented as recursive functions then at least one return statement must have been seen before the recursive call. E.g.,

    auto fibonacci(size_t n)
    {
        if (n <= 1)
            return n;
        return fibonacci(n - 1) + fibonacci(n - 2);
    }

typedef is commonly used to define shorthand notations for complex types. Assume we want to define a shorthand for `a pointer to a function expecting a double and an int, and returning an unsigned long long int'.
Such a function could be:

    unsigned long long int compute(double, int);

A pointer to such a function has the following form:

    unsigned long long int (*pf)(double, int);

If this kind of pointer is frequently used, consider defining it using typedef: simply put typedef in front of it and the pointer's name is turned into the name of a type. It could be capitalized to let it stand out more clearly as the name of a type:

    typedef unsigned long long int (*PF)(double, int);

After having defined this type, it can be used to declare or define such pointers:

    PF pf = compute;    // initialize the pointer to a function like
                        // 'compute'
    void fun(PF pf);    // fun expects a pointer to a function like
                        // 'compute'

However, including the pointer in the typedef might not be a very good idea, as it masks the fact that pf is a pointer. After all, PF pf looks more like `int x' than `int *x'. To document that pf is in fact a pointer, slightly change the typedef:

    typedef unsigned long long int FUN(double, int);

    FUN *pf = compute;  // now pf clearly is a pointer

The scope of typedefs is restricted to compilation units. Therefore, typedefs are usually embedded in header files which are then included by multiple source files in which the typedefs should be used.

In addition to typedef C++ offers the using keyword to associate a type and an identifier. In practice typedef and using can be used interchangeably. The using keyword arguably results in more readable type definitions. Consider the following three (equivalent) definitions:

    typedef unsigned long long int FUN(double, int);
    using FUN = unsigned long long int (double, int);
    using FUN = auto (double, int) -> unsigned long long int;

The using variants improve the visibility (for humans) of the type name by moving the type name to the front of the definition.

The generic form of the for statement is:

    for (init; cond; inc)
        statement

Often the initialization, condition, and increment parts are fairly obvious, as in situations where all elements of an array or vector must be processed.
Many languages offer the foreach statement for that, and C++ offers the std::for_each generic algorithm (cf. section 19.1.18). In addition to the traditional syntax C++ adds new syntax for the for-statement: the range-based for-loop. This new syntax can be used to process all elements of a range in turn. Three types of ranges are distinguished:

- plain arrays (e.g., int array[10]);
- initializer lists;
- standard containers (or comparable types) offering begin() and end() functions returning so-called iterators (cf. section 18.2).

Example:

    // assume int array[30]
    for (auto &element: array)
        statement

The part to the left of the colon is called the for range declaration. The declared variable (element) is a formal name; use any identifier you like. The variable is only available within the nested statement, and it refers to (or is a copy of) each of the elements of the range, from the first element up to the last. There's no formal requirement to use auto, but using auto is extremely useful in many situations. Not only in situations where the range refers to elements of some complex type, but also in situations where you know what you can do with the elements in the range, but don't care about their exact type names. In the above example int could also have been used.

The reference symbol (&) is important in cases like the following, where the range consists of structs (or classes, cf. chapter 7), e.g., BigStruct elements:

    struct BigStruct
    {
        double array[100];
        int last;
    };

Omitting the reference would be inefficient, because you don't need to make copies of the array's elements. Instead, use references to elements:

    BigStruct data[100];    // assume properly initialized elsewhere

    int countUsed()
    {
        int sum = 0;
                            // const &: the elements aren't modified
        for (auto const &element: data)
            sum += element.last;
        return sum;
    }

If data is only available as a pointer to its first element in combination with the number of elements, range-based for loops can also be used, but require a little help. Section 24.5 describes a generic approach to using range-based for loops in such cases.

C strings may contain escape sequences, like \n, \\ and \".
In some cases it is useful to avoid escaping strings (e.g., in the context of XML). To this end, C++ offers raw string literals. Raw string literals start with an R, followed by a double quote, optionally followed by a label (which is an arbitrary sequence of characters not equal to (), followed by (. The raw string ends at the closing parenthesis ), followed by the label (if specified), which is in turn followed by a double quote. Examples:

    R"(A Raw \ "String")"
    R"delimiter(Another \ Raw "(String))delimiter"

In the first case, everything between "( and )" is part of the string. Escape sequences aren't supported, so the text \ " within the first raw string literal defines three characters: a backslash, a blank character and a double quote. The second example shows a raw string defined between the markers "delimiter( and )delimiter".

In addition to decimal, octal and hexadecimal integral constants, binary constants like 0b101 can also be used. Formally binary constants are supported by C++ since the C++14 standard, but compilers usually supported this well before implementing this standard. Binary constants come in handy in the context of, e.g., bit-flags, as the notation immediately shows which bits are set, while other notations are less informative.

Traditionally, for repetition statements start with an optional initialization clause. The initialization clause allows us to localize variables to the scope of the for statements. C++17 extends this concept to selection statements. The language already allowed us to define and initialize a variable in the condition clauses of if and switch statements, but starting with C++17 the definition and assignment can be separated, thus supporting selection statements with initializer clauses.

Consider the situation where an action should be performed if the next line read from the standard input stream equals go!. When used inside a function, while intending to localize the string containing the contents of the next line as much as possible, constructions like the following had to be used:

    void function()
    {
        // ...
        any set of statements

        {
            string line;            // localize line
            if (getline(cin, line))
                action();
        }

        // ... any set of statements
    }

C++17 adds an optional `init;' clause to if and switch statements (the clause, including its semicolon, is optional; this differs from for statements, where the init part may be empty but its semicolon must be present). This allows us to rephrase the above example as:

    void function()
    {
        // ... any set of statements

        if (string line; getline(cin, line))
            action();

        // ... any set of statements
    }

Like the if-statement the switch-statement also supports an optional init; clause. Assume a program processes commands, entered as lines on the standard input, and a function convert is available converting the command to an enumeration value. Applying the init; clause to a switch, all commands may then be processed like this:

    void process()
    {
        while (true)
        {
            switch (string cmd; int select = convert(getline(cin, cmd)))
            {
                case CMD1:
                    ...
                break;

                case CMD2:
                    ...
                break;

                ...
            }
        }
    }

Note that a variable may still be defined in the actual condition clauses. This is true for both the extended if and switch statements. But before using the condition clauses an initialization clause may be used to define additional variables (plural, as it may contain a comma-separated list of variables, similar to the syntax that's available for for-statements).

The C++ standard defines the following attributes:

- [[noreturn]]: [[noreturn]] indicates that the function does not return. [[noreturn]]'s behavior is undefined if the function declared with this attribute actually returns.
  The following standard functions have this attribute: std::_Exit, std::abort, std::exit, std::quick_exit, std::unexpected, std::terminate, std::rethrow_exception, std::throw_with_nested, and std::nested_exception::rethrow_nested. Here is an example of a function declaration and definition using the [[noreturn]] attribute:

    [[noreturn]] void doesntReturn();

    [[noreturn]] void doesntReturn()
    {
        exit(0);
    }

- [[carries_dependency]]: This attribute is currently not yet covered by the C++ Annotations. At this point in the C++ Annotations it can safely be ignored.

- [[deprecated]]: This attribute (and its alternative form [[deprecated("reason")]]) is available since the C++14 standard. It indicates that the use of the name or entity declared with this attribute is allowed, but discouraged for some reason. This attribute can be used for classes, typedef-names, variables, non-static data members, functions, enumerations, and template specializations. An existing non-deprecated entity may be redeclared deprecated, but once an entity has been declared deprecated it cannot be redeclared as `undeprecated'. When encountering the [[deprecated]] attribute the compiler generates a warning, e.g.,

    demo.cc:12:24: warning: 'void deprecatedFunction()' is deprecated [-Wdeprecated-declarations]
        deprecatedFunction();
    demo.cc:5:21: note: declared here
    [[deprecated]] void deprecatedFunction()

  When using the alternative form (e.g., [[deprecated("do not use")]] void fun()) the compiler generates a warning showing the text between the double quotes, e.g.,

    demo.cc:12:24: warning: 'void deprecatedFunction()' is deprecated: do not use [-Wdeprecated-declarations]
        deprecatedFunction();
    demo.cc:5:38: note: declared here
    [[deprecated("do not use")]] void deprecatedFunction()

C offers the built-in types void, char, short, int, long, float and double. C++ extends these built-in types with several additional built-in types: the types bool, wchar_t, long long and long double (cf. ANSI/ISO draft (1995), par.
27.6.2.4.1 for examples of these very long types). The type long long is merely a double-long long datatype; likewise, the type long double is merely a double-long double datatype. These built-in types as well as pointer variables are called primitive types in the C++ Annotations.

There is a subtle issue to be aware of when converting applications developed for 32-bit architectures to 64-bit architectures. When converting 32-bit programs to 64-bit programs, only long types and pointer types change in size from 32 bits to 64 bits; integers of type int remain at their size of 32 bits. This may cause data truncation when assigning pointer or long types to int types. Also, problems with sign extension can occur when assigning expressions using types shorter than the size of an int to an unsigned long or to a pointer.

Except for these built-in types the class-type string is available for handling character strings. The datatypes bool and wchar_t are covered in the following sections; the datatype string is covered in chapter 5. Note that recent versions of C may also have adopted some of these newer data types (notably bool and wchar_t). Traditionally, however, C doesn't support them, hence they are mentioned here.

Now that these new types are introduced, let's refresh your memory about letters that can be used in literal constants of various types. They are:

b or B: in addition to its use as a hexadecimal digit, the letter b can be used to define a binary constant. E.g., 0b101 equals the decimal value 5. The 0b prefix can be used to specify binary constants starting with the C++14 standard.

E or e: the exponentiation character in floating point literal values. For example: 1.23E+3. Here, E should be pronounced (and interpreted) as: times 10 to the power. Therefore, 1.23E+3 represents the value 1230.
F can be used as postfix to a non-integral numeric constant to indicate a value of type float, rather than double, which is the default. For example: 12.F (the dot transforms 12 into a floating point value); 1.23E+3F (see the previous example: 1.23E+3 is a double value, whereas 1.23E+3F is a float value).

L can be used as prefix to indicate a character string whose elements are wchar_t-type characters. For example: L"hello world".

L can be used as postfix to an integral value to indicate a value of type long, rather than int, which is the default. Note that there is no letter indicating a short type. For that a static_cast<short>() must be used.

p, to specify the power in hexadecimal floating point numbers. E.g., 0x10p4. The exponent itself is read as a decimal constant and can therefore not start with 0x. The exponent part is interpreted as a power of 2. So 0x10p2 is (decimal) equal to 64: 16 * 2^2.

U can be used as postfix to an integral value to indicate an unsigned value, rather than an int. It may also be combined with the postfix L to produce an unsigned long int value.

x and the characters a until f can be used to specify hexadecimal constants (optionally using capital letters).

bool represents boolean (logical) values, for which the (now reserved) constants true and false may be used. Except for these reserved values, integral values may also be assigned to variables of type bool, which are then implicitly converted to true and false according to the following conversion rules (assume intValue is an int-variable, and boolValue is a bool-variable):

    // from int to bool:
    boolValue = intValue ? true : false;

    // from bool to int:
    intValue = boolValue ? 1 : 0;

Furthermore, when bool values are inserted into streams then true is represented by 1, and false is represented by 0. Consider the following example:

    cout << "A true value:  " << true << "\n"
            "A false value: " << false << '\n';

The bool data type is found in other programming languages as well.
Pascal has its type Boolean; Java has a boolean type. Different from these languages, C++'s type bool acts like a kind of int type. It is primarily a documentation-improving type, having just two values true and false. Actually, these values can be interpreted as enum values for 1 and 0. Doing so would ignore the philosophy behind the bool data type, but nevertheless: assigning true to an int variable neither produces warnings nor errors.

Using the bool-type is usually clearer than using int. Consider the following prototypes:

    bool exists(char const *fileName);  // (1)
    int exists(char const *fileName);   // (2)

With the first prototype, readers expect the function to return true if the given filename is the name of an existing file. However, with the second prototype some ambiguity arises: intuitively the return value 1 is appealing, as it allows constructions like

    if (exists("myfile"))
        cout << "myfile exists";

On the other hand, many system functions (like access, stat, and many others) return 0 to indicate a successful operation, reserving other values to indicate various types of errors. As a rule of thumb I suggest the following: if a function should inform its caller about the success or failure of its task, let the function return a bool value. If the function should return success or various types of errors, let the function return enum values, documenting the situation by its various symbolic constants. Only when the function returns a conceptually meaningful integral value (like the sum of two int values) let the function return an int value.

The wchar_t type is an extension of the char built-in type, to accommodate wide character values (but see also the next section). The g++ compiler reports sizeof(wchar_t) as 4, which easily accommodates all 65,536 different Unicode character values. Note that Java's char data type is somewhat comparable to C++'s wchar_t type. Java's char type is 2 bytes wide, though.
On the other hand, Java's byte data type is comparable to C++'s char type: one byte. Confusing?

The prefix L (e.g., L"hello") defines a wchar_t string literal. C++ also supports 8, 16 and 32 bit Unicode encoded strings. Furthermore, two new data types are introduced: char16_t and char32_t storing, respectively, a UTF-16 and a UTF-32 unicode value. In addition, a char type variable is large enough to contain any UTF-8 unicode value as well (i.e., it remains an 8-bit value). String literals for the various types of unicode encodings (and associated variables) can be defined as follows:

    char     utf_8[] = u8"This is UTF-8 encoded.";
    char16_t utf16[] =  u"This is UTF-16 encoded.";
    char32_t utf32[] =  U"This is UTF-32 encoded.";

Alternatively, unicode constants may be defined using the \u escape sequence, followed by a hexadecimal value. Depending on the type of the unicode variable (or constant) a UTF-8, UTF-16 or UTF-32 value is used. E.g.,

    char     utf_8[] = u8"\u2018";
    char16_t utf16[] =  u"\u2018";
    char32_t utf32[] =  U"\u2018";

Unicode strings can be delimited by double quotes, but raw string literals can also be used.

C++ also offers the type long long int. On 32 bit systems it has at least 64 usable bits.

The size_t type is not really a built-in primitive data type, but a data type that is promoted by POSIX as a typename to be used for non-negative integral values answering questions like `how much' and `how many', in which case it should be used instead of unsigned int. It is not a specific C++ type, but is also available in, e.g., C. Usually it is defined implicitly when a (any) system header file is included. The header file `officially' defining size_t in the context of C++ is cstddef. Using size_t has the advantage of being a conceptual type, rather than a standard type that is then modified by a modifier. Thus, it improves the self-documenting value of source code.

Sometimes functions explicitly require unsigned int to be used.
E.g., on amd-architectures the X-windows function XQueryPointer explicitly requires a pointer to an unsigned int variable as one of its arguments. In such situations a pointer to a size_t variable can't be used, but the address of an unsigned int must be provided. Such situations are exceptional, though.

Other useful bit-represented types also exist. E.g., uint32_t is guaranteed to hold 32-bit unsigned values. Analogously, int32_t holds 32-bit signed values. Corresponding types exist for 8, 16 and 64 bit values. These types are defined in the header file cstdint and can be very useful when you need to specify or use integral value types of fixed sizes.

To enhance the readability of large numeric literals, digit separators (single quotes) may be used in integral and floating point literals. E.g.,

    1'000'000
    3.141'592'653'589'793'238'5

Digit separators can only appear between digits:

    ''123       // won't compile
    1''23       // won't compile either

In C, casts are written as

    (typename)expression

Here typename is the name of a valid type, and expression is an expression. C style casts are now deprecated. C++ programs should merely use the new style C++ casts as they offer the compiler facilities to verify the sensibility of the cast. Facilities which are not offered by the classic C-style cast.

A cast should not be confused with the often used constructor notation:

    typename(expression)

The constructor notation is not a cast, but a request to the compiler to construct an (anonymous) variable of type typename from expression.

If casts are really necessary one of several new-style casts should be used. These new-style casts are introduced in the upcoming sections.

The static_cast<type>(expression) is used to convert `conceptually comparable or related types' to each other. Here, as well as in other C++ style casts, type is the type to which the type of expression should be cast. Here are some examples of situations where the static_cast can (or should) be used:

- Converting an int to a double. This happens, for example, when the quotient of two int values must be computed without losing the fraction part of the division.
  The sqrt function called in the following fragment returns 2:

    int x = 19;
    int y = 4;
    sqrt(x / y);

  whereas it returns 2.179 when a static_cast is used, as in:

    sqrt(static_cast<double>(x) / y);

  The important point to notice here is that a static_cast is allowed to change the representation of its expression into the representation that's used by the destination type. Also note that the division is put outside of the cast expression. If the division is performed within the cast's expression (as in static_cast<double>(x / y)) an integer division has already been performed before the cast has had a chance to convert the type of an operand to double.

- Converting enum values to int values (in any direction). Here the two types use identical representations, but different semantics. Assigning an ordinary enum value to an int doesn't require a cast, but when the enum is a strongly typed enum a cast is required. Conversely, a static_cast is required when assigning an int value to a variable of some enum type. Here is an example:

    enum class Enum
    {
        VALUE
    };
    cout << static_cast<int>(Enum::VALUE);  // show the numeric value

- The static_cast is used in the context of class inheritance (cf. chapter 13) to convert a pointer to a so-called `derived class' to a pointer to its `base class'. It cannot be used for casting unrelated types to each other (e.g., a static_cast cannot be used to cast a pointer to a short to a pointer to an int).

- A void * is a generic pointer. It is frequently used by functions in the C library (e.g., memcpy(3)). Since it is the generic pointer it is related to any other pointer, and a static_cast should be used to convert a void * to an intended destination pointer. This is a somewhat awkward left-over from C, which should probably only be used in that context. Here is an example: the qsort function from the C library expects a pointer to a (comparison) function having two void const * parameters.
In fact, these parameters point to data elements of the array to be sorted, and so the comparison function must cast the void const * parameters to pointers to the elements of the array to be sorted. So, if the array is an int array[] and the compare function's parameters are void const *p1 and void const *p2, then the compare function obtains the address of the int pointed to by p1 by using:

    static_cast<int const *>(p1);

converting a char to an int-typed variable (remember that a static_cast is allowed to change the expression's representation!). Here is an example: the C function tolower requires an int representing the value of an unsigned char. But char by default is a signed type. To call tolower using an available char ch we should use:

    tolower(static_cast<unsigned char>(ch))

The const keyword has been given a special place in casting. Normally anything const is const for a good reason. Nonetheless, situations may be encountered where the const can be ignored. For these special situations the const_cast should be used. Its syntax is:

    const_cast<type>(expression)

A const_cast<type>(expression) expression is used to undo the const attribute of a (pointer) type. The need for a const_cast may occur in combination with functions from the standard C library which traditionally weren't always as const-aware as they should have been. A function strfun(char *s) might be available, performing some operation on its char *s parameter without actually modifying the characters pointed to by s. Passing char const hello[] = "hello"; to strfun produces the warning

    passing `const char *' as argument 1 of `fun(char *)' discards const

A const_cast is the appropriate way to prevent the warning:

    strfun(const_cast<char *>(hello));

A third cast is the reinterpret_cast. It is somewhat reminiscent of the static_cast, but reinterpret_cast should only be used when it is known that the information as defined in fact is or can be interpreted as something completely different.
Its syntax is:

    reinterpret_cast<pointer type>(pointer expression)

Think of the reinterpret_cast as a cast offering a poor-man's union: the same memory location may be interpreted in completely different ways.

The reinterpret_cast is used, for example, in combination with the write function that is available for streams. In C++ streams are the preferred interface to, e.g., disk-files. The standard streams like std::cin and std::cout also are stream objects. Streams intended for writing (`output streams' like cout) offer write members having the prototype

    write(char const *buffer, int length)

To write the value stored within a double variable to a stream in its un-interpreted binary form the stream's write member is used. However, as a double * and a char * point to variables using different and unrelated representations, a static_cast cannot be used. In this case a reinterpret_cast is required. To write the raw bytes of a variable double value to cout we use:

    cout.write(reinterpret_cast<char const *>(&value), sizeof(double));

All casts are potentially dangerous, but the reinterpret_cast is the most dangerous of them all. Effectively we tell the compiler: back off, we know what we're doing, so stop fuzzing. All bets are off, and we'd better do know what we're doing in situations like these. As a case in point consider the following code:

    int value = 0x12345678;     // assume a 32-bits int
    cout << "Value's first byte has value: " << hex <<
            static_cast<int>(
                *reinterpret_cast<unsigned char *>(&value)
            );

The above code produces different results on little and big endian computers. Little endian computers show the value 78, big endian computers the value 12. Also note that the different representations used by little and big endian computers render the previous example (cout.write(...)) non-portable over computers of different architectures.
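The little/big endian difference described above can also be observed outside of C++. As an illustrative aside (not part of the Annotations), Python's struct module packs integers with an explicit byte order, making it easy to see which byte would sit first in memory:

```python
import struct

value = 0x12345678

little = struct.pack('<I', value)   # force little-endian byte layout
big    = struct.pack('>I', value)   # force big-endian byte layout

# On a little endian machine the first byte in memory is 0x78,
# on a big endian machine it is 0x12 -- matching what the
# reinterpret_cast example prints on each architecture.
print(hex(little[0]))   # 0x78
print(hex(big[0]))      # 0x12
```

This is the same experiment the reinterpret_cast fragment performs, minus the undefined-behaviour risks.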
As a rule of thumb: if circumstances arise in which casts have to be used, clearly document the reasons for their use in your code, making double sure that the cast does not eventually cause a program to misbehave. Also: avoid reinterpret_casts unless you have to use them.

The fourth cast is dynamic_cast<type>(expression). Different from the static_cast, whose actions are completely determined at compile-time, the dynamic_cast's actions are determined at run-time. It converts a pointer to an object of some class (e.g., Base) to a pointer to an object of another class (e.g., Derived) which is found further down its so-called class hierarchy (this is also called downcasting). At this point in the Annotations a dynamic_cast cannot yet be discussed extensively, but we return to this topic in section 14.6.1.

In the context of the class shared_ptr, which is covered in section 18.4, several more new-style casts are available. Actual coverage of these specialized casts is postponed until section 18.4.5. These specialized casts are:

- static_pointer_cast, returning a shared_ptr to the base-class section of a derived class object;
- const_pointer_cast, returning a shared_ptr to a non-const object from a shared_ptr to a constant object;
- dynamic_pointer_cast, returning a shared_ptr to a derived class object from a shared_ptr to a base class object.
C++ reserves the following keywords:

alignas, alignof, and, and_eq, asm, auto, axiom, bitand, bitor, bool, break, case, catch, char, char16_t, char32_t, class, compl, concept, const, const_cast, constexpr, continue, decltype, default, delete, do, double, dynamic_cast, else, enum, explicit, export, extern, false, float, for, friend, goto, if, import, inline, int, long, mutable, namespace, new, noexcept, not, not_eq, nullptr, operator, or, or_eq, private, protected, public, register, reinterpret_cast, requires, return, short, signed, sizeof, static, static_assert, static_cast, struct, switch, template, this, thread_local, throw, true, try, typedef, typeid, typename, union, unsigned, using, virtual, void, volatile, wchar_t, while, xor, xor_eq

Notes:

- The export keyword is no longer actively used by C++, but it is kept as a keyword, reserved for future use.
- and, and_eq, bitand, bitor, compl, not, not_eq, or, or_eq, xor and xor_eq are symbolic alternatives for, respectively, &&, &=, &, |, ~, !, !=, ||, |=, ^ and ^=.
- There are also the identifiers final and override. These identifiers are special in the sense that they acquire special meanings when declaring classes or polymorphic functions. Section 14.4 provides further details.

Keywords can only be used for their intended purpose and cannot be used as names for other entities (e.g., variables, functions, class-names, etc.). In addition to keywords, identifiers starting with an underscore and living in the global namespace (i.e., not using any explicit namespace or using the mere :: namespace specification) or living in the std namespace are reserved identifiers in the sense that their use is a prerogative of the implementor.
void setup()
{
  Tlc.init();
  Tlc.resetTimers();
}

void loop()
{
  aLast = a;                          // our previous intensity
  a = random(0, 4095);                // grab our new intensity
  fadeDuration = random(500, 1500);   // duration of fade between .5 and 1.5 seconds
  fadeDelay = random(5000, 15000);    // delay before fading between 5 and 15 seconds
  Tlc.newFade(3, fadeDuration, aLast, a, fadeDelay);
  while (Tlc.updateFades());
}

(unsigned long) fadeDelay = millis() + random(5000, 15000);

nphillips:
Code: [Select]
(unsigned long) fadeDelay = millis() + random(5000, 15000);

The last parameter of Tlc.newFade is the start time in millis. v004 has proper commenting in the examples (sorry)! Also, if you need to run more than 5 fades at once (if you call newFade more than 5 times before while(Tlc.updateFades())), you need to add more parameters to Tlc.init. For, say, 10 fades at once, use Tlc.init(1 tlc, 10 fades). Tlc.newFade(...) will return false if you have too many fades running at once.

Is there an upper bound to this? Rather, will it break if I try to have it run a boat-load of fades? (I may end up looking at 32+ LEDs for one project....)

// create a bunch of fades
Tlc.newFade(...);
Tlc.newFade(...);
Tlc.newFade(...);
...
while (Tlc.updateFades()) {
  if (Tlc.newFade(...)) {
    // added successfully!
  } else {
    // too many fades right now
  }
}

For people who want to use the TLC5940 to drive motors, use the old library from the first post for now. The PWM period for the TLC5940LED library is fixed at 488Hz, which might be too fast for driving a DC motor. Just as a note, the TLC should act as GND for whatever you're using with it (current should flow from +5V, through whatever you're driving, and into the TLC pin).
while (updateFades()) {
  if (getGreyscaleValue(3) > 2048) {
    newFade(4, 1000, 1024, 4095);
  }
}

Tlc.set(fp->channel, fp->endValue - ((((int32_t)(fp->stopMillis - currentMillis)) * fp->dValue) / fp->duration));

int TLC5940LED::getGreyscaleValue(uint8_t channel)
{
  struct fade *fp = channel;
  return (fp->endValue - ((((int32_t)(fp->stopMillis - currentMillis)) * fp->dValue) / fp->duration));
}

int value = Tlc.get(1);

j = Tlc.get(ledPins[5]);

o: In function `loop':
undefined reference to `TLC5940LED::get(unsigned char)'
Published by Brock Bollard; modified over 2 years ago.

Slide 1: Computer Organization, CS345. David Monismith. Based upon notes by Dr. Bill Siever and notes from the Patterson and Hennessy text.

Slide 2: Last Time — Basic binary mathematics; R format and I format instructions.

Slide 3: Recall — What if we want to add two binary numbers? Take decimal numbers 5 and 3 and add them in binary: 101b is 5 decimal and 11b is 3 decimal.

      101
    +  11
    -----
     1000

Notice that there is a carry-over bit. This can cause a problem called overflow. Notice that if we only have 3 bits to represent our numbers, we would have a result of zero.

Slide 4: Recall — We also have to be careful with negative numbers. We could use a one or a zero to represent the sign bit. Let's assume we have 5 bits to work with, and we try to add a positive and a negative binary number: 10011b is -3 decimal and 00101b is 5 decimal.

      10011
    + 00101
    -------
      11000  <---- result is -8 (wrong)

Our result should be 2 decimal. Notice that we need a different representation if we are going to directly add positive and negative numbers. In a few days we will look at a different representation of signed integers called 2's complement representation.

Slide 5: MIPS Instructions — MIPS instructions use 32 bits and come in 3 basic formats. R format instructions are register format instructions; that is, they use only registers. R-format instructions use the following format:

    opcode (6 bits) | rs (5 bits) | rt (5 bits) | rd (5 bits) | shamt (5 bits) | func (6 bits)

Notice: 32 = 6+5+5+5+5+6. The opcode represents the instruction type. The func (function) code allows for specification of a variant of the operation in the opcode (e.g. a math or ALU function).
- rs - the first source register
- rt - the second source register
- rd - the destination register
- shamt - the shift amount (used for bit shifting)

Slide 6: I Instructions — I format instructions are immediate format instructions. These instructions utilize a 16-bit integer value.
I-format instructions use the following format:

    opcode (6 bits) | rs (5 bits) | rt (5 bits) | immediate (16 bits)

- opcode - instruction type
- rt - often the destination register
- rs - often the source register
- immediate - a 16 bit integer value

Slide 7: Various Instruction Formats — So far we have seen many basic programming elements within the MIPS instruction set: arrays, input/output operations, and mathematical operations. Notice that many of these use R or I format instructions. For example,

    add $t1, $t2, $t3
    sub $t1, $t2, $t3
    mul $t1, $t2, $t3
    div $t1, $t2, $t3

are all R-format instructions.

Slide 8: Various Instruction Formats —

    addi $s1, $s2, 100
    lw $s2, -8($s1)
    sw $t3, 102($s6)

are all I format instructions. Similarly, there are floating point instructions with the same format. Recall from Java that there are both single and double precision variables.

Slide 9: Floating Point Registers — MIPS floating point registers are provided in Coprocessor 1. There are 32 32-bit single precision floating point registers, provided in the following format: $f0, $f1, $f2, ..., $f31. Pairs of single precision registers are used to represent double precision (64 bit) floating point numbers, so there are 16 double precision floating point registers. These are represented using pairs of floating point registers starting with an even numbered register; e.g. the pair $f0 and $f1 is represented using $f0. The list of double precision registers is: $f0, $f2, $f4, ..., $f30.

Slide 10: Floating Pt. Operations — Floating point instructions have almost exactly the same names as their integer counterparts. Often these operations are in the format instruction_name.d for double precision operations or instruction_name.s for single precision operations. Examples of such operations follow below.
    add.s $f1, $f2, $f3
    add.d $f2, $f4, $f6
    sub.s $f1, $f2, $f3
    sub.d $f2, $f4, $f6
    mul.s $f1, $f2, $f3
    mul.d $f2, $f4, $f6
    div.s $f1, $f2, $f3
    div.d $f2, $f4, $f6
    mov.s $f2, $f3
    mov.d $f2, $f4    # move contents of one double register to another

Slide 11: Example — Examples of array allocation for double or single precision values are provided below:

    myFloatArray:  .single 1.1, 2.2, 3.3
    myDoubleArray: .double 4.4, 5.5, 6.6, 7.7

Space may be allocated for arrays of fixed size as follows, where the initialization value and size are provided in the format init_value:size:

    myFloatArray:  .single 0:200
    myDoubleArray: .double 0:100

Slide 12: MIPS Operations — To load and store values, the pseudo-instructions l.d, l.s, s.d, and s.s may be used to load and store double and single precision floating point numbers. We will investigate their use in the next lecture. But we are missing something: we've seen I/O, arrays, and mathematical operations, but we have not yet seen details on logical, comparison, or branching operations.

Slide 13: Branching Operations — There are two basic branching operations:
- beq - branch if equal
- bne - branch if not equal

These instructions are provided in the following format:

    beq $t1, $t2, label
    bne $t1, $t2, label

These compare whether $t1 and $t2 are equal or not; that is, they look for either a true or a false value. Provided a comparison operator, these are actually all we need to accomplish loops and if-then-else statements. There are other operators, but we'll only investigate these for now.

Slide 14: Branching Examples — Take an if statement as an example:

    if ($t1 == $t2) {
      // do something
    }

We can represent this if statement in MIPS as follows:

          bne $t1, $t2, endif
          # do something
    endif:

Notice that we branch to label "endif:" if $t1 != $t2. That is, we ignore the segment of code where we "do something".

Slide 15: Example — Similarly, for a loop we can use a label. What if we have the following loop, which continues so long as t1 == t2?

    do {
      // do loop operations
    } while (t1 == t2);
This code may be represented in MIPS as follows:

    loopStart:
          # do loop operations
          beq $t1, $t2, loopStart

Notice that if t1 == t2, we return to the beginning of the loop. Otherwise, we drop past the end of the loop and continue processing from that point.

Slide 16: Comparison Operations — We would like to compare values using operators such as >, >=, <, or <=. We actually only need one of these operators to perform comparisons: the < operator. This is provided by the "slt" instruction. There are also pseudo-instructions like the "sle" instruction that provide functionality for other comparisons like <= without the need to modify the operands. The slt and sle instructions allow us to set a register to true or false depending upon the result of comparing two registers.

Slide 17: Comparison Operations — These instructions use the following format:

    slt $t1, $t2, $t3
    sle $t1, $t2, $t3

where $t1 is set to true if $t2 < $t3 or $t2 <= $t3, respectively. Given a true or false (i.e. a one or zero) result from one of these instructions, we can use the result in a branching operation. For example, the code below implements an if statement that does work if $t2 < $t3:

          slt $t1, $t2, $t3
          beq $t1, $zero, myLabel
          # do work
    myLabel:

We will look at more floating point, branching, and comparison operations next time.
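The overflow and sign problems described earlier in this deck are easy to reproduce. The sketch below (plain Python, not MIPS) masks results to a fixed bit width and previews the 2's complement encoding that the deck promises to cover in a few days:

```python
def encode(value, bits):
    """Two's-complement encoding of a signed integer in a fixed width."""
    return value & ((1 << bits) - 1)

def decode(raw, bits):
    """Interpret a raw bit pattern as a two's-complement signed integer."""
    if raw & (1 << (bits - 1)):          # sign bit set -> negative value
        return raw - (1 << bits)
    return raw

# Overflow: 5 + 3 in only 3 bits -- the carry bit is lost and we get 0.
overflowed = (0b101 + 0b011) & 0b111     # -> 0

# The -3 + 5 example revisited with two's complement: -3 encodes as
# 11101b in 5 bits, and ordinary binary addition now gives the
# correct answer, 2, once the result is masked back to 5 bits.
result = decode((encode(-3, 5) + encode(5, 5)) & 0b11111, 5)   # -> 2
```

This is exactly why 2's complement is preferred over a plain sign bit: the same adder hardware works for positive and negative operands.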
5 replies on 1 page. Most recent reply: Apr 16, 2012 10:29 AM by Gardner Pomper Here is the previous article, showing how to use CoffeeScript and jQuery. I chose Web.py because it's the simplest and most straightforward web framework I've found for Python. Despite that, you can do quite a bit with it. If you want to get fancy, you'd probably choose Django since it seems to have become the leading web-framework contender for Python. But for quick-and-simple, which is what I wanted for this example, it's hard to beat Web.py. The intent of this example is to utilize high-level tools to create a very simple web app. The app will live on a single page that the Web.py server will hand you when you go to the URL. Every two seconds, this page will create a JSON object consisting of data from a text field on the page, and a randomly-generated number. The JSON object will be sent to the server as a POST request, and the server will respond by modifying the data and returning it to the page as a new JSON object. The page will take the data and update itself. Thus it makes the web page a reasonably-complete user interface for the application (this app will even automatically start up the web page, so it could be used as a desktop application). The web page is very minimal, with a text field for input and an H1 field to be manipulated by jQuery to display the output. The page lives in the templates subdirectory and is named displaydata.html which you'll shortly see is important: <!doctype html> <html> <head> <title>Webpy, CoffeeScript and jQuery Ajax</title> <link rel="shortcut icon" href="/static/favicon.ico" type="image/x-icon"> <script type="text/javascript" src="/static/jquery.js"></script> <script type="text/javascript" src="/static/getdata.js"></script> </head> <body> <center> <input class="text" id="mod" size="10" style="font-size:2em"/> <h1 id="data"></h1> </center> </body> </html> Note that we include getdata.js here, which will be produced later via CoffeeScript. 
(Also, this is the first time I've -- finally! -- figured out how to set up the icon, thanks to help from someone on the Web.py newsgroup. I don't know why that one is so obscure). I'll describe the Web.py application in pieces, and then give you the whole file at the end. After the imports (note that Web.py just requires a single file), the urls describes a mapping between URLs and classes to handle those URLs. Here, we just have one: import web, json, webbrowser, multiprocessing urls = ( # URLs mapped to handler classes '/', 'home', ) Next, we tell Web.py where the templates live in the file system. In this example, I am not doing any actual rendering, but Web.py has a nice templating system for substituting when you call a template, and you can insert Python code that gets executed as the template is rendered. This can be quite powerful, and it's nice because it's just Python so you don't have to learn yet another language for templating: render = web.template.render('templates/') Here's the class that handles requests to the root URL as mapped in urls. As you might expect, the GET method handles HTTP GET requests at that address, and the POST method handles the POST requests: class home: def GET(self): # Show displaydata.html return render.displaydata() def POST(self): input = web.input() web.header('Content-Type', 'application/json') return json.dumps({ # Do trivial operations: 'txt' : input.mod.lower(), 'dat' : "%.3f" % float(input.num) }) The call to render.displaydata() goes to the templates subdirectory and finds displaydata.html, then passes it through the rendering engine (if we had given arguments, those arguments would be evaluated during rendering). In our case, we have nothing but plain HTML so no replacement occurs during rendering. Thus, all it does is return the previously-shown HTML to the browser. The POST method is called by the page, as you'll see later. 
The first thing it does is unpack the data from the query by calling web.input(); this produces the input object, which contains the fields created by the web page when it makes its Ajax call. These fields are mod and num, so they can be easily accessed by saying input.mod and input.num. In order to return a JSON object, we must set the header on the web page to indicate this. To format the JSON object, we use Python's json.dumps() function and hand it a structure: the txt field takes input.mod and lowercases the string, while the dat field shortens input.num to three places beyond the decimal point. The goal here is just to do something simple to show that the server is doing a little work and returning the result. Here's the main to start everything up:

    if __name__ == '__main__':
        app = web.application(urls, globals())
        multiprocessing.Process(target=app.run).start()
        webbrowser.open_new_tab("")

The first line is the standard way to start a Web.py application. In the second line, I use Python's multiprocessing library to run the server in the background, which then allows me to automatically open a browser window pointing at the URL via Python's webbrowser library. This way, you can create an application that the user can start up by double-clicking an icon, and the UI will appear as a page in the user's browser. (As an alternative, some browsers allow invocation from the OS specifically to act as UIs for local apps. For example, you can launch Google Chrome in app mode by invoking it with a switch at the command line specifying the address of the said app, e.g.: chrome --app= This produces a Chrome instance without an address bar displaying your app, living in a different space than the default Chrome browser.)
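The two transformations inside home.POST can be exercised in isolation, independent of Web.py (the transform helper name is mine, purely for illustration):

```python
import json

def transform(mod, num):
    """Mimic home.POST: lowercase the text, format the number to 3 decimals."""
    return json.dumps({
        'txt': mod.lower(),
        'dat': "%.3f" % float(num)
    })

print(transform("Hello", "3.14159"))   # {"txt": "hello", "dat": "3.142"}
```

This is exactly the JSON payload the browser's Ajax callback will receive.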
Here's the whole Python file, which I called server.py:

    import web, json, webbrowser, multiprocessing

    urls = ( # URLs mapped to handler classes
        '/', 'home',
    )

    # Where the templates live:
    render = web.template.render('templates/')

    class home:
        def GET(self): # Show displaydata.html
            return render.displaydata()
        def POST(self):
            input = web.input()
            web.header('Content-Type', 'application/json')
            return json.dumps({ # Do trivial operations:
                'txt' : input.mod.lower(),
                'dat' : "%.3f" % float(input.num)
            })

    if __name__ == '__main__':
        app = web.application(urls, globals())
        multiprocessing.Process(target=app.run).start()
        webbrowser.open_new_tab("")

There's one more piece, which is the CoffeeScript that produces the getdata.js JavaScript file included in the HTML:

    getData = ->
      $.post "/",
        { mod: $(".text").val(), num: Math.random() * Math.PI * 2 },
        (result) ->
          $("#data").html(result.dat + " " + result.txt).hide().fadeIn(500)
      setTimeout getData, 2000   # Repeat every 2 seconds

    $ -> getData()

The getData function calls jQuery's Ajax post function. The first argument is the URL to post to (it assumes the base of this URL is the same one that the page originated from, which is required). The second is the JSON object that will be sent as the POST data; you'll remember that the keys for this object (used within home.POST in the Python app) are mod and num. mod gets the value from the component with a class of text, while num is generated with JavaScript's random number generator (note that CoffeeScript can transparently call JavaScript code). The third argument is a callback function which is called when the Ajax request returns. The result argument is the JSON object that is returned by the Python home.POST method; note that the dat and txt fields are accessed with simple dot notation. The field with id data is modified by inserting our result data formatted as HTML. The hide() and fadeIn() are jQuery calls to fancy it up a little.
Finally, setTimeout is a JavaScript function that calls another function after a number of milliseconds. This way, the page keeps refreshing itself every two seconds. The last two lines are the equivalent of a main() method: after the page is loaded and ready, the $ -> block will be executed and that will get everything started. Skip to the end of the article to see the application running in your browser. The above approach works in many situations; periodically polling the server is often all you need to do. Sometimes, however, it's a little too primitive and what you'd like is for the application on the server to be able to drive the UI by pushing data to it whenever it needs to. To address this issue, HTML5 is standardizing Server-Sent Events. This technology is supported now in most browsers, however not IE9 (it may appear in IE10, but you never know with IE). I've posted a question to the Web.py newsgroup asking whether there is support for Server-Sent Events. Server-Sent Events completes the package and makes HTML5 a full UI; I look forward to experimenting with it. Heroku is a cloud application platform. They seem to have the goal of making cloud deployment of your app as easy as possible, and they also seem to be trying to adapt to whatever technology you happen to be using. Just today they announced support for Python. You can find a walkthrough here and there's specific information about using it with Django here. Because I've been preparing for this speaking trip to Europe, I have yet to delve into the details of Heroku. My friend James Ward (who now works there) asked to see the code for this article, and upon receiving it had it set up in less than 30 minutes (even though he is a Python novice and hasn't used Web.py). 
Here are the steps he used (with the basic Heroku stuff already set up):

1. Create a new file in your src dir named "Procfile" containing:

       web: python server.py ${PORT}

2. Create a new file in your src dir named "requirements.txt" containing:

       web.py

3. Create the git repo, add the files to it, and commit them:

       git init
       git add .
       git commit -m init

4. Create the app on Heroku:

       heroku create -s cedar

5. Deploy the app:

       git push heroku master

It's kind of clever the way they use git as the deployment mechanism. I have to say I'm pretty impressed by how quickly this could be deployed on Heroku. I think it's a tribute to the simplicity of Web.py that there weren't any snags.
Hi, I am new with OpenCV. I'm trying to paste a code snippet from old versions, but I get lots of errors:

    VideoWriter parlakVideo("videoParlak.avi", CV_FOURCC('M', 'P', '4', '2'), 20, boyut, true);
    VideoWriter karanlikVideo("videoKaranlik.avi", CV_FOURCC('M', 'P', '4', '2'), 20, boyut, true);

Here is my try:

    VideoWriter parlakVideo("video.wmv", VideoWriter::fourcc('M', 'P', '4', '2'), 20, boyut, true);
    VideoWriter karanlikVideo("video.wmv", VideoWriter::fourcc('M', 'P', '4', '2'), 20, boyut, true);

Hi, I want to use OpenCV's trackers. I see this error:

    tracker = cv2.Tracker_create(tracker_type)
    AttributeError: module 'cv2.cv2' has no attribute 'Tracker_create'

I uninstalled opencv-python first, and then opencv-contrib-python (I removed both of them completely), and reinstalled opencv-contrib-python (its version is 4.2.0.34). Why do I see this error? This is my implementation:

    import cv2
    import sys

    (major_ver, minor_ver, subminor_ver) = (cv2.__version__).split('.')

    if __name__ == '__main__':
        # Set up tracker. Instead of MIL, you can also use:
        tracker_types = ['BOOSTING', 'MIL', 'KCF', 'TLD', 'MEDIANFLOW', 'GOTURN', 'MOSSE', 'CSRT']
        tracker_type = tracker_types[0]

Hi, I want to write a video from a frame file.
Frames folder is this:

This is my result:

    import cv2
    import numpy as np
    import glob

    j = 0
    img_array = []
    for filename in glob.glob('/content/drive/My Drive/M-30-HD/M-30-HD/*.jpg'):  # I work on Google Colab
        img = cv2.imread(filename)
        height, width, layers = img.shape
        size = (width, height)
        print(j)
        j = j + 1
        img_array.append(img)
        if j == 3000:
            break

    out = cv2.VideoWriter('m30hd.mp4', cv2.VideoWriter_fourcc(*'MP4V'), 24, size)
    for i in range(len(img_array)):
        out.write(img_array[i])
        print(i)
    out.release()

I see this error:

    Traceback (most recent call last):
      File "videoya_cevirme.py", line 27, in <module>

How to write first N images of video frames: Hi, I want to write a video from a frame file.

Thanks a lot!!

Problem with writing video: Hi, I try to crop a polygonal region from a video and write it. My cropping operation is successful. Sorry, what do you mean by condition block?

Trouble with writing video: Hey, I want to create a video from images. The size of the images is the same. I create the video using im

It works, thanks. But I need to run the video.

cv2.imshow() doesn't work when I remove cv2.waitKey(0) after it: Hi, I've been trying to use a deep learning model. I'm t

Hi, first I've created a video using:

    import cv2
    import numpy as np
    import glob

    img_array = []
    for filename in glob.glob('C:/Users/mustafa/Downloads/highway/highway/input/*.jpg'):
        img = cv2.imread(filename)
        height, width, layers = img.shape
        size = (width, height)
        img_array.append(img)

    out = cv2.VideoWriter('project.mp4', cv2.VideoWriter_fourcc(*'DIVX'), 24, size)
    for i in range(len(img_array)):
        out.write(img_array[i])
    out.release()

As you see, I set the FPS of the video as 24. Secondly, I try to use the FPS module to measure the FPS of the video that I've created.
    from imutils.video import FPS
    import cv2
    import numpy as np

    cap = cv2.VideoCapture('p.mp4')
    fps = FPS().start()

    if cap.isOpened() == False:
        print("Error opening video stream or file")

    while cap.isOpened():
        ret, frame = cap.read()
        if ret == True:
            cv2.imshow('Frame', frame)
            fps.update()
            if cv2.waitKey(25) & 0xFF == ord('q'):
                break
        else:
            break

    cap.release()
    fps.stop()
    print("[INFO] elapsed time: {:.2f}".format(fps.elapsed()))
    print("[INFO] approx. FPS: {:.2f}".format(fps.fps()))
    cv2.destroyAllWindows()

I see this output:

    [INFO] approx. FPS: 38.06
    [INFO] elapsed time: 44.67

Also, the length of the original video is 70 seconds. Why do I see different results?

I've been trying to read a video (the format of the videos is mp4) and do some operations on every 30th frame. At first it works OK, but after some frames I see this error:

    File "C:\Users\mustafa\Desktop\vidoe_deneme\opencv_object_tracker.py", line 22, in <module>
        frame = frame[:, :, ::-1]
    TypeError: 'NoneType' object is not subscriptable

I see this error on every video I've tried (not only on one video). What is the problem?

    import cv2
    from imutils.video import VideoStream
    from imutils.video import FPS

    cap = cv2.VideoCapture('k.mp4')
    totalFrames = 0

    # start the frames per second throughput estimator
    fps = FPS().start()
    print('before video')

    # loop over frames from the video stream
    while cap.isOpened():
        ret, frame = cap.read()
        frame = frame[:, :, ::-1]
        h, w = frame.shape[:2]
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        if totalFrames % 30 == 0:
            cv2.imshow("Frame", rgb)
            key = cv2.waitKey(1) & 0xFF
            # if the `q` key was pressed, break from the loop
            if key == ord("q"):
                break
        totalFrames += 1
        fps.update()

    # stop the timer and display FPS information
    fps.stop()
    print("[INFO] elapsed time: {:.2f}".format(fps.elapsed()))
    print("[INFO] approx. FPS: {:.2f}".format(fps.fps()))
    cv2.destroyAllWindows()

I've tried.
I see the same error on the last frame of the video. @supra56 It doesn't finish; the error throws before the end frame.

How to read every Nth frame in a video: Hi, I've been trying to read every Nth frame in a video (to improve performance).

Should destroyAllWindows and release functions be used in Python? Hi, I wrote a simple script to read frames from a webcam.

I mean, there is just daylight, not any external light source.

    dst = cv.cvtColor(frame, cv.COLOR_RGBA2GRAY)
    dst = dst &

It is around 0.1%. There is no light source.

Mean of a frame always changes? Hi everyone, I read frames from a webcam and print the mean of the 3 channels of the frame.

@berak I edited there (I posted that part wrong). So, still I see the same problem. @LBerger I mean, the areas of the contours are always changing. I am just using a webcam, not a sensor.

Captured frame always changes: Hi everyone, I am capturing a frame from a webcam and processing that frame (finding the size of...).

@supra56 I couldn't understand — should I move it outside of the for loop?
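The 'NoneType' errors in this thread all come from using the frame before checking the ret flag: at the end of the stream cap.read() returns (False, None). Below is a minimal guard, written around a generic reader function so it can be demonstrated without a video file (the helper name every_nth_frame is mine, not an OpenCV API):

```python
def every_nth_frame(read, n=30):
    """Yield every nth frame from read() -> (ret, frame); stop cleanly at end of stream."""
    total = 0
    while True:
        ret, frame = read()
        if not ret or frame is None:   # end of stream reached -- no more frames
            break
        if total % n == 0:
            yield frame
        total += 1

# With OpenCV this would be used as: every_nth_frame(cap.read, 30)
# Simulated stream of 5 frames followed by end-of-stream:
frames = iter([(True, f) for f in 'abcde'] + [(False, None)])
print(list(every_nth_frame(lambda: next(frames), n=2)))   # ['a', 'c', 'e']
```

Checking ret before touching the frame is the fix for the frame[:, :, ::-1] crash above.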
https://answers.opencv.org/users/421308/hernancrespo/?sort=recent
Posted at: 15:08 on 20 August 2010 by Muhimbi

Next up in our series about new features in the PDF Converter for SharePoint 4.0 and PDF Converter Services, we'll showcase some of the new Export to PDF View Selection capabilities of our InfoPath converter. Being able to select which views to export is very useful, as quite often different views are used for exporting a form to PDF. Sometimes using the Print View is good enough, but other times you need to export a different view, or multiple views, to PDF format. There are even occasions where different views are exported depending on the state of the data entered in the form. As always, the best way to illustrate this is by example.

Use a special view for exporting to PDF

In this scenario we have an Employee Review form with the following 3 views:

- Data entry view: A view used for populating data using the InfoPath client or Forms Services. This is the default view.
- Print View: A special view that is optimised for printing to a network laser printer. This is specified as View 1's Print View.
- PDF Export view: A separate view that is used to export the InfoPath form to PDF format, as it contains some information that should only show up in exported PDF files.

As View 1 is the default view and View 2 is the Print View for View 1, under normal circumstances the 2nd view is used for exporting to PDF. However, we want to use View 3 for this purpose. We can achieve this by starting the name of View 3 with "_MuhimbiView". The Muhimbi PDF Converter will automatically detect all views that start with this name, export them all, and merge them together into a single PDF file. Naturally these views can be hidden from the end user by marking them as such. This is a great solution if you know beforehand that you will always be exporting the same view(s) to PDF format.

Determine at runtime which views to export

The previous solution, using view names that start with "_MuhimbiView", works great.
However, sometimes you need to export a different view depending on the state of the data. For example, our Expense Claim form consists of the following views:

- Data Entry View 1: Used by the employee to report expenses.
- Data Entry View 2: Used by the manager to add comments and additional information.
- PDF Export View 1: The view that is used to export the form to PDF format before the manager has reviewed the form.
- PDF Export View 2: The view that is used to export the form to PDF format after the manager has reviewed the form.

OK, so how are we going to deal with this? Well, here comes the Muhimbi PDF Converter to the rescue! By adding a (hidden) text box named "_MuhimbiViews" (case sensitive and using the default 'my' namespace) to any of the views and populating it with one or more comma-separated view names, the Muhimbi PDF Converter will automatically pick up these names and export the named views to PDF format. If multiple views are specified then they are automatically concatenated together.

In addition to adding the "_MuhimbiViews" text field to the form, all the developer of the form needs to do is add a little bit of logic to the Submit event of the 2 data entry views that specifies the correct view name to export in the "_MuhimbiViews" field. For details about how to dynamically specify which views to convert from your workflow, see this post.

View prioritisation rules

To determine which view or views to export, the Muhimbi PDF Converter uses the following prioritisation rules:

- Regardless of how a view or views are selected for export, if the selected view has a Print View specified then that view is given priority.
- Version 4.1 and up: When using the web services interface, any ConversionViews specified in the ConverterSpecificSettings property will be converted. If this property is not set then the following rules will be used to determine which views to convert to PDF. A Web Services example can be found here.
- If a field named "_MuhimbiViews" is found anywhere in the InfoPath form then the content of this field is used to determine which views to export.
- If the previous field does not exist, is empty, or the specified view name does not exist, then the converter looks at all view names that start with "_MuhimbiView".
- If none of the previous options apply then the view marked as the Default View is exported.

Do not use Muhimbi's View selection features in combination with InfoPath's 'Print multiple views' facility. The latter is given priority when converting to PDF. When the final PDF file is assembled, all selected views are included first, followed by any converted attachments.

Rules when converting to formats other than PDF

As of version 6.0, Muhimbi's PDF Converter can also convert InfoPath forms to MS-Word, Excel and HTML. There are some exceptions to the way View Selection works for these output formats. For details see this post.

In summary, the new version of the PDF Converter adds flexible View selection features to make the life of InfoPath developers easier. As always, upgrades are completely free. Don't hesitate to leave a comment below if you have any questions, or contact us to discuss anything related to our products.

Labels: Articles, InfoPath, News, pdf, PDF Converter, PDF Converter Services, Products

13 Comments:

Hi, I'm now using the trial of the Muhimbi PDF converter, and this feature to select the InfoPath view when converting is very good for me. I need to test it before buying your product and want to know when this feature will be released. Send info to me: minhtritp@yahoo.com
By Anonymous, At 23 August, 2010 13:25

Hi, I'm also using the trial of the Muhimbi PDF converter. This feature would be very handy for me to select the InfoPath view when converting. I need to test it before buying your product and want to know when this feature will be released.
Send info to me: thomas.griffiths@biz-ict.com
By Thomas, At 06 October, 2010 16:58

It will be out in the next few weeks; we are finalising documentation at the moment. Your company already has access to the beta version.
By Muhimbi, At 06 October, 2010 17:28

I have an InfoPath form with an ink picture control. I am not able to convert it to PDF. The rest of the forms work fine. Any idea about this issue?
By Atul Verma, At 01 July, 2011 09:52

Hello Atul, please send a copy of your XSN and XML file to support@muhimbi.com for further investigation. Thanks.
By Muhimbi, At 03 July, 2011 15:58

I'm having exactly the same problem with a form with Ink Controls. Is there any resolution?
By Stuart Pittwood, At 27 July, 2011 13:07

Hi Stuart, please contact us on support@muhimbi.com as we have a patch available.
By Muhimbi, At 27 July, 2011 13:38

The problem with Ink Controls has been resolved by the customer. It is related to the Ink controls not being deployed by default on Windows Server 2008 R2. Installing the "Ink & Handwriting" Windows Feature, part of the "Desktop Experience" on the server that runs the Muhimbi Conversion Service, solves the problem.
By Muhimbi, At 28 July, 2011 11:32

When getting problems with people picker controls, Internet Explorer may be configured to block (or prompt for) all ActiveX controls for the Internet Zone. As a solution, log in as the account the conversion service runs under (or use a group policy) and make sure the following settings are configured:
• Run ActiveX controls and plug-ins: Enable
• Automatic prompting for ActiveX controls: Disable
By Muhimbi, At 07 March, 2012 12:09

When I try to use the hidden text box _MuhimbiViews, the service can't find the text box and converts the default view instead of the view given in the text box. Do you have any idea? This is the XPath of my hidden text box: /my:contoso/my:_MuhimbiViews.
By Arash Alvandi, At 18 October, 2012 14:29

Hi Arash, please send this request to support@muhimbi.com and we'll take it from there.
By Muhimbi, At 18 October, 2012 14:31

Can you provide instructions for the _MuhimbiViews field for InfoPath 2013?
By Anonymous, At 10 November, 2016 20:43

Hi, it works the same in all InfoPath versions. If you have any questions then please feel free to drop support@muhimbi.com a line.
By Muhimbi, At 11 November, 2016 09:40
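The view prioritisation rules described in the post can be condensed into a small sketch (plain Python pseudologic; the function name and signature are invented for illustration and are not part of Muhimbi's actual API):

```python
def pick_views(conversion_views, muhimbi_views_field, view_names, default_view):
    """Return the list of views to convert, per the documented priority order."""
    # v4.1+: explicit ConversionViews set via the web-services interface win.
    if conversion_views:
        return conversion_views
    # A non-empty _MuhimbiViews text field naming only existing views is next.
    if muhimbi_views_field:
        requested = [v.strip() for v in muhimbi_views_field.split(",")]
        if all(v in view_names for v in requested):
            return requested
    # Otherwise, every view whose name starts with "_MuhimbiView" is exported.
    prefixed = [v for v in view_names if v.startswith("_MuhimbiView")]
    if prefixed:
        return prefixed
    # Finally, fall back to the form's default view.
    return [default_view]
```

(The separate rule that a selected view's designated Print View takes precedence would then apply to each chosen view afterwards.)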
https://blog.muhimbi.com/2010/08/controlling-which-views-to-export-to.html
FILESTREAM Overview

Much data is unstructured, such as text documents, images, and videos. For a walkthrough that shows how to use FILESTREAM, see Getting Started with FILESTREAM Storage.

When a table contains a FILESTREAM column, each row must have a non-null unique row ID. Only the account under which the SQL Server service runs is granted NTFS permissions to the FILESTREAM container. We recommend that no other account be granted permissions on the data container.

Integrated Management

After you store data in a FILESTREAM column, you can access the files by using Transact-SQL transactions or by using Win32 APIs.

Transact-SQL Access

By using Transact-SQL, you can insert, update, and delete FILESTREAM data:

- You can use an insert operation to prepopulate a FILESTREAM field with a null value, empty value, or relatively short inline data. However, a large amount of data is more efficiently streamed into a file that uses Win32 interfaces.
- When you update a FILESTREAM field, you modify the underlying BLOB data in the file system. When a FILESTREAM field is set to NULL, the BLOB data associated with the field is deleted. You cannot use a Transact-SQL chunked update, implemented as UPDATE.Write(), to perform partial updates to the data.
- When you delete a row, or delete or truncate a table that contains FILESTREAM data, you delete the underlying BLOB data in the file system.

File System Streaming Access

The Win32 streaming support works in the context of a SQL Server transaction. Within a transaction, you can use FILESTREAM functions to obtain a logical UNC file system path of a file. You then use the OpenSqlFilestream API to obtain a file handle. This handle can then be used by Win32 file streaming interfaces, such as ReadFile() and WriteFile(), to access and update the file by way of the file system. Because file operations are transactional, you cannot delete or rename FILESTREAM files through the file system.

Statement Model

… if an UPDATE statement is completed.
Storage Namespace

In FILESTREAM, the Database Engine controls the BLOB physical file system namespace. A new intrinsic function, PathName, provides the logical UNC path of the BLOB that corresponds to each FILESTREAM cell in the table. The application uses this logical path to obtain the Win32 handle and operate on the BLOB data by using regular Win32 file system interfaces. The function returns NULL if the value of the FILESTREAM column is NULL.

Transacted File System Access

A new intrinsic function, GET_FILESTREAM_TRANSACTION_CONTEXT(), provides the token that represents the current transaction that the session is associated with. The transaction must have been started and not yet aborted or committed. By obtaining a token, the application binds the FILESTREAM file system streaming operations to a started transaction. The function returns NULL if no transaction has been explicitly started. All file handles must be closed before the transaction commits or aborts. If a handle is left open beyond the transaction scope, additional reads against the handle will cause a failure; additional writes against the handle will succeed, but the actual data will not be written to disk. Similarly, if the database or instance of the Database Engine shuts down, all open handles are invalidated.

Transactional Durability

With FILESTREAM, upon transaction commit, the Database Engine ensures transaction durability for FILESTREAM BLOB data that is modified through file system streaming access.

Write-Through from Remote Clients

Remote file system access to FILESTREAM data is enabled over the Server Message Block (SMB) protocol. If the client is remote, no write operations are cached by the client side. The write operations will always be sent to the server. The data can be cached on the server side. We recommend that applications running on remote clients consolidate small write operations into fewer write operations of larger data size.
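As a sketch of how the PathName and GET_FILESTREAM_TRANSACTION_CONTEXT() functions described above fit together, consider the following T-SQL fragment (the table and column names are invented for illustration, and a FILESTREAM filegroup is assumed to already exist in the database):

```sql
-- Hypothetical table; assumes the database already has a FILESTREAM filegroup.
CREATE TABLE dbo.Records (
    RecordId UNIQUEIDENTIFIER ROWGUIDCOL NOT NULL UNIQUE DEFAULT NEWID(),
    Chart    VARBINARY(MAX) FILESTREAM NULL
);

-- Prepopulate the FILESTREAM field with a short (empty) value.
INSERT INTO dbo.Records (Chart) VALUES (CAST('' AS VARBINARY(MAX)));

-- Within a transaction, obtain the logical UNC path and the transaction
-- token that OpenSqlFilestream() needs for Win32 streaming access.
BEGIN TRANSACTION;
SELECT Chart.PathName() AS LogicalPath,
       GET_FILESTREAM_TRANSACTION_CONTEXT() AS TxContext
FROM   dbo.Records;
COMMIT;
```

A client application would pass LogicalPath and TxContext to OpenSqlFilestream and then stream the BLOB with ReadFile()/WriteFile(), closing the handle before the transaction commits.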
Creating memory-mapped views (memory-mapped I/O) by using a FILESTREAM handle is not supported. If memory mapping is used for FILESTREAM data, the Database Engine cannot guarantee consistency and durability of the data or the integrity of the database.

Windows Logo Certification

The FILESTREAM RsFx driver is certified for Windows Server 2008 R2. For more information and the catalog file download, see SQL Server 2008 R2 FileStream Driver Windows Logo Certification in the Microsoft Download Center.
http://technet.microsoft.com/en-us/library/bb933993(d=printer).aspx
apscheduler.triggers.cron

API

Trigger alias for add_job(): cron

class apscheduler.triggers.cron.CronTrigger(year=None, month=None, day=None, week=None, day_of_week=None, hour=None, minute=None, second=None, start_date=None, end_date=None, timezone=None, jitter=None)

    Bases: apscheduler.triggers.base.BaseTrigger

    Triggers when the current time matches all specified time constraints, similarly to how the UNIX cron scheduler works.

    Parameters:
    - year (int|str) – 4-digit year
    - month (int|str) – month (1-12)
    - day (int|str) – day of month (1-31)
    - week (int|str) – ISO week (1-53)
    - day_of_week (int|str) – number or name of weekday (0-6 or mon,tue,wed,thu,fri,sat,sun)
    - hour (int|str) – hour (0-23)
    - minute (int|str) – minute (0-59)
    - second (int|str) – second (0-59)
    - start_date (datetime|str) – earliest possible date/time to trigger on (inclusive)
    - end_date (datetime|str) – latest possible date/time to trigger on (inclusive)
    - timezone (datetime.tzinfo|str) – time zone to use for the date/time calculations (defaults to scheduler timezone)
    - jitter (int|None) – delay the job execution by at most jitter seconds

    Note: The first weekday is always Monday.

Introduction

This is the most powerful of the built-in triggers in APScheduler. You can specify a variety of different expressions on each field, and when determining the next execution time, it finds the earliest possible time that satisfies the conditions in every field. This behavior resembles the "cron" utility found in most UNIX-like operating systems.

You can also specify the starting and ending dates for the cron-style schedule through the start_date and end_date parameters, respectively. They can be given as a date/datetime object or as text (in the ISO 8601 format).

Unlike with crontab expressions, you can omit fields that you don't need. Fields greater than the least significant explicitly defined field default to *, while lesser fields default to their minimum values, except for week and day_of_week, which default to *.
For example, day=1, minute=20 is equivalent to year='*', month='*', day=1, week='*', day_of_week='*', hour='*', minute=20, second=0. The job will then execute on the first day of every month, every year, at 20 minutes past every hour. The code examples below should further illustrate this behavior.

Note: The behavior for omitted fields was changed in APScheduler 2.0. Omitted fields previously always defaulted to *.

Expression types

The following table lists all the available expressions for use in the fields from year to second. Multiple expressions can be given in a single field, separated by commas.

Note: The month and day_of_week fields accept abbreviated English month and weekday names (jan – dec and mon – sun) respectively.

Daylight saving time behavior

The cron trigger works with the so-called "wall clock" time. Thus, if the selected time zone observes DST (daylight saving time), you should be aware that it may cause unexpected behavior with the cron trigger when entering or leaving DST. When switching from standard time to daylight saving time, clocks are moved either one hour or half an hour forward, depending on the time zone. Likewise, when switching back to standard time, clocks are moved one hour or half an hour backward. This will cause some time periods to either not exist at all, or be repeated. If your schedule would have the job executed during either of these periods, it may execute more often or less often than expected. This is not a bug. If you wish to avoid this, either use a time zone that does not observe DST, such as UTC, or find out the DST switch times and avoid them in your scheduling.
For example, the following schedule may be problematic:

    # In the Europe/Helsinki timezone, this will not execute at all on the last sunday morning of March
    # Likewise, it will execute twice on the last sunday morning of October
    sched.add_job(job_function, 'cron', hour=3, minute=30)

Examples

    from apscheduler.schedulers.blocking import BlockingScheduler

    def job_function():
        print("Hello World")

    sched = BlockingScheduler()

    # Schedules job_function to be run on the third Friday
    # of June, July, August, November and December at 00:00, 01:00, 02:00 and 03:00
    sched.add_job(job_function, 'cron', month='6-8,11-12', day='3rd fri', hour='0-3')

    sched.start()

You can use start_date and end_date to limit the total time in which the schedule runs:

    # Runs from Monday to Friday at 5:30 (am) until 2014-05-30 00:00:00
    sched.add_job(job_function, 'cron', day_of_week='mon-fri', hour=5, minute=30, end_date='2014-05-30')

The scheduled_job() decorator works nicely too:

    @sched.scheduled_job('cron', id='my_job_id', day='last sun')
    def some_decorated_task():
        print("I am printed at 00:00:00 on the last Sunday of every month!")

To schedule a job using a standard crontab expression:

    sched.add_job(job_function, CronTrigger.from_crontab('0 0 1-15 may-aug *'))

The jitter option enables you to add a random component to the execution time. This might be useful if you have multiple servers and don't want them to run a job at the exact same moment, or if you want to prevent jobs from running at sharp hours:

    # Run job_function every sharp hour, with an extra delay picked randomly in a [-120, +120] seconds window.
    sched.add_job(job_function, 'cron', hour='*', jitter=120)
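The omitted-field defaulting rule described above can be expressed as a small plain-Python sketch (a simplified illustration of the documented rule, not APScheduler's actual implementation):

```python
FIELDS = ["year", "month", "day", "week", "day_of_week", "hour", "minute", "second"]
DEFAULTS = {"year": "*", "month": 1, "day": 1, "week": "*",
            "day_of_week": "*", "hour": 0, "minute": 0, "second": 0}

def expand(**given):
    """Fill omitted cron fields: anything more significant than the least
    significant explicitly given field becomes '*'; anything less
    significant gets its minimum (week/day_of_week always default to '*')."""
    least = max(FIELDS.index(name) for name in given)
    filled = {}
    for i, name in enumerate(FIELDS):
        if name in given:
            filled[name] = given[name]
        elif i < least:
            filled[name] = "*"       # more significant than the given fields
        else:
            filled[name] = DEFAULTS[name]
    return filled

# Reproduces the documented equivalence for day=1, minute=20.
print(expand(day=1, minute=20))
```

Running expand(day=1, minute=20) yields the same field set the documentation gives: year='*', month='*', day=1, week='*', day_of_week='*', hour='*', minute=20, second=0.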
https://apscheduler.readthedocs.io/en/3.x/modules/triggers/cron.html
We begin our exploration of RPC programming by converting a simple program with a single local function call into a client-server configuration with a single RPC. Once generated, this RPC-based program can be run in a distributed setting whereby the server process, which will contain the function to be executed, can reside on a host different from the client process. The program that we will convert (Program 9.2) is a C program [4] that invokes a single local function, print_hello, which generates the message Hello, world. As written, the print_hello function will display its message and return to the function main the value returned from printf. The returned value indicates whether printf was successful in carrying out its action. [5]

[4] Up to this point, our examples have been primarily C++-based. Due to the inability of the compiler to handle full-blown C++ code in conjunction with rpcgen-generated output, we will stick to C program examples in this section. Think of this as an opportunity to brush up on your C programming skills!

[5] Many programmers are not aware that printf returns a value. However, a pass of any C program with a printf function through the lint utility will normally return a message indicating that the value returned by printf is not being used.

Program 9.2 A simple C program to display a message.

File : hello.c
    /* A C program with a local function
    */
    #include <stdio.h>
 +  int print_hello( );

    int main( ){
      printf("main : Calling function.\n");
      if (print_hello())
10      printf("main : Mission accomplished.\n");
      else
        printf("main : Unable to display message.");
      return 0;
    }
 +  int print_hello( ) {
      return printf("funct: Hello, world.\n");
    }

In its current configuration, the print_hello function and its invocation reside in a single source file. The output of Program 9.2 when compiled and run is shown in Figure 9.5.

Figure 9.5 Output of Program 9.2.

linux$ hello
main : Calling function.
funct: Hello, world.
main : Mission accomplished.

The first step in converting a program with a local function call to an RPC is for the programmer to create a protocol definition file. This file will help the system keep track of what procedures are to be associated with the server program. The definition file is also used to define the data type returned by the remote procedure and the data types of its arguments. When using RPC, the remote procedure is part of a remote program that runs as the server process. The RPC language is used to define the remote program and its component procedures. The RPC language is actually XDR with the inclusion of two extensions: the program and version types. Appendix C addresses the syntax of the RPC language. For the diligent, the manual pages on xdr provide a good overview of XDR data type definitions and syntax. Figure 9.6 contains the protocol definition file for the print_hello function.

Syntactically, the RPC language is a mix of C and Pascal. By custom, the extension for protocol definition files is .x. The keyword program marks the user-defined identifier DISPLAY_PRG as the name of the remote procedure program. [6] The program name, like the program name in a Pascal program, does not need to be the same as the name of the executable file. The program block encloses a group of related remote procedures. Nested within the program definition block is the keyword version, followed by a second user-generated identifier, DISPLAY_VER, which is used to identify the version of the remote procedure. It is permissible to have several versions of the same procedure, each indicated by a different integer value. The ability to have different versions of the same procedure eases the upgrade process when updating software by facilitating backward compatibility. If the number of arguments, the data type of an argument, or the data type returned by the function changes, the version number should be changed.
[6] Most often, the identifiers placed in the protocol definition file are in capitals. Note that this is a convention, not a requirement.

Figure 9.6 Protocol definition file hello.x.

File : hello.x
    /*  This is the protocol definition file. The programmer writes
        this file using the RPC language. This file is passed to the
        protocol generator rpcgen. Every remote procedure is part of
 +      a remote program. Each procedure has a name and number. A
        version number is also supplied so different versions of the
        same procedure may be generated.
    */
    program DISPLAY_PRG {
10    version DISPLAY_VER {
        int print_hello( void ) = 1;
      } = 1;
    } = 0x20000001;

As this is our first pass at generating a remote procedure, the version number is set to 1 after the closing brace for the version block. Inside the version block is the declaration for the remote procedure (line 11). [7] A procedure number follows the remote procedure declaration. As there is only one procedure defined, the value is set to 1. An eight-digit hexadecimal program number follows the closing brace for the program block. The program, version, and procedure numbers form a triplet that uniquely identifies a specific remote procedure. To prevent conflicts, the numbering scheme shown in Table 9.2 should be used in assigning program numbers.

[7] If the procedure name is placed in capitals, the RPC compiler, rpcgen, will automatically convert it to lowercase during compilation.

Protocol specifications can be registered with Sun by sending a request (including the protocol definition file) to rpc@sun.com. Accepted specifications will receive a unique program number from Sun (in the range 00000000-1FFFFFFF).

Table 9.2. RPC Program Numbers.

A check of the file /etc/rpc on your system will display a list of some of the RPC programs (and their program numbers) known to the system.
As shown below, the name of the protocol definition file is passed to the RPC protocol compiler, rpcgen, on the command line:

$ rpcgen -C hello.x

The rpcgen compiler produces the requisite C code to implement the defined RPCs. There are a number of command-line options for rpcgen, of which we will explore only a limited subset. A summary of the command-line options and syntax for rpcgen is given in Figure 9.7.

Figure 9.7 Command-line options for rpcgen.

usage: rpcgen infile
       rpcgen [-abkCLNTM][-Dname[=value]] [-i size] [-I [-K seconds]] [-Y path] infile
       rpcgen [-c -h -l -m -t -Sc -Ss -Sm] [
  -M        generate MT-safe code
  -Sm       generate makefile template
  -t        generate RPC dispatch table
  -T        generate code to support RPC dispatch tables
  -Y path   directory name to find C preprocessor (cpp)

In our invocation, we have specified the -C option, requesting that rpcgen output conform to the standards for ANSI C. While some versions of rpcgen generate ANSI C output by default, the extra keystrokes ensure rpcgen generates the type of output you want. When processing the hello.x file, rpcgen creates three output files: a header file, a client stub, and a server stub file. Again, by default rpcgen gives the same name to the header file as the protocol definition file, replacing the .x extension with .h. [8] In addition, the client stub file is named hello_clnt.c (the rpcgen source file name with _clnt.c appended), and the server stub file is named hello_svc.c (using a similar algorithm). Should the default naming convention be too restrictive, the header file as well as the client and server stub files can be generated independently and their names uniquely specified. For example, to generate the header file with a uniquely specified name, rpcgen would be passed the following options and file names:

[8] This can be a troublesome default if, per chance, you have also generated your own local header file with the same name and extension.
linux$ rpcgen -C -h -o unique_file_name hello.x

With this invocation, rpcgen will generate a header file called unique_file_name.h. Using a similar technique, unique names for the client and server stub files can be specified with the -Sc and -Ss options (see Figure 9.7 for syntax details). The contents of the header file, hello.h, generated by rpcgen are shown in Figure 9.8.

Figure 9.8 File hello.h generated by rpcgen from the protocol definition file hello.x.

File : hello.h
    /*
     * Please do not edit this file.
     * It was generated using rpcgen.
     */
 +
    #ifndef _HELLO_H_RPCGEN
    #define _HELLO_H_RPCGEN

    #include <rpc/rpc.h>
10
    #ifdef __cplusplus
    extern "C" {
    #endif

 +
    #define DISPLAY_PRG 0x20000001
    #define DISPLAY_VER 1

20  #if defined(__STDC__) || defined(__cplusplus)
    #define print_hello 1
    extern  int * print_hello_1(void *, CLIENT *);
    extern  int * print_hello_1_svc(void *, struct svc_req *);
    extern  int display_prg_1_freeresult (SVCXPRT *, xdrproc_t, caddr_t);
 +
    #else /* K&R C */
    #define print_hello 1
    extern  int * print_hello_1();
    extern  int * print_hello_1_svc();
30  extern  int display_prg_1_freeresult ();
    #endif /* K&R C */

    #ifdef __cplusplus
    }
 +  #endif

    #endif /* !_HELLO_H_RPCGEN */

The hello.h file created by rpcgen is referenced as an include file in both the client and server stub files. The #ifndef _HELLO_H_RPCGEN, #define _HELLO_H_RPCGEN, and #endif preprocessor directives prevent the hello.h file from being included multiple times. Within the file hello.h, the inclusion of the file <rpc/rpc.h>, as noted in its internal comments, ". . . just includes the billions of rpc header files necessary to do remote procedure calling." [9] The variable __cplusplus (see line 20) is used to determine if a C++ programming environment is present. In a C++ environment, the compiler internally adds a series of suffixes to function names to encode the data types of its parameters.
These new "mangled" function names allow C++ to check functions to ensure parameters match correctly when the function is invoked. The C compiler does not provide the mangled function names that the C++ compiler needs. The C++ compiler therefore has to be warned that standard C linking conventions and non-mangled function names are to be used. This is accomplished by the lines following the #ifdef __cplusplus compiler directive.

[9] While this comment is somewhat tongue-in-cheek, it is not all that farfetched (check it out)!

The program and version identifiers specified in the protocol definition file are found in the hello.h file as defined constants (lines 17 and 18). These constants are assigned the values specified in the protocol definition file. Since we indicated the -C option to rpcgen (standard ANSI C), the if branch of the preprocessor directive (i.e., #if defined(__STDC__)) contains the statements we are interested in. If the remote procedure name in the protocol definition file was specified in uppercase, it is mapped to lowercase in the header file. The procedure name is defined as an integer and assigned the value previously given as its procedure number. Note that we will find this defined constant used again in a switch statement in the server stub to select the code to be executed when calling the remote procedure. Following this definition are two print_hello function prototypes. The first prototype, print_hello_1, is used by the client stub file. The second, print_hello_1_svc, is used by the server stub file. The naming convention used by rpcgen is to use the name of the remote procedure as the root and append an underscore (_) and version number (1) for the client stub, and an underscore, version number, underscore, and svc for the server. The else branch of the preprocessor directive contains a similar set of statements that are used in environments that do not support standard C prototyping.
Before we explore the contents of the client and server stub files created by rpcgen, we should look at how to split our initial program into client and server components. Once the initial program (for example, hello.c) is split, and we have run rpcgen, we will have the six files shown in Figure 9.9 available to us.

Figure 9.9. Client-server files and relationships.

We begin with writing the client component. As in the initial program, the client invokes the print_hello function. However, in our new configuration, the code for the print_hello function, which used to be a local function, resides in a separate program that is run by the server process. The code for the client component program, which has been placed in a file named hello_client.c, is shown in Program 9.3.

Program 9.3 The client program hello_client.c.

File : hello_client.c
    /*  The CLIENT program: hello_client.c
        This will be the client code executed by the local client process.
    */
 +  #include <stdio.h>
    #include "hello.h"          /* Generated by rpcgen from hello.x */

    int main(int argc, char *argv[]) {
      CLIENT *client;
10    int    *return_value, filler;
      char   *server;
      /*  We must specify a host on which to run. We will get the
          host name from the command line as argument 1.
 +    */
      if (argc != 2) {
        fprintf(stderr, "Usage: %s host_name\n", *argv);
        exit(1);
      }
20    server = argv[1];
      /*
         Generate the client handle to call the server
      */
      if ((client=clnt_create(server, DISPLAY_PRG,
 +                            DISPLAY_VER, "tcp")) == (CLIENT *) NULL) {
        clnt_pcreateerror(server);
        exit(2);
      }
      printf("client : Calling function.\n");
30    return_value = print_hello_1((void *) &filler, client);
      if (*return_value)
        printf("client : Mission accomplished.\n");
      else
        printf("client : Unable to display message.\n");
 +    return 0;
    }

While much of the code is similar to the original hello.c program, some changes have been made to accommodate the RPC. Let's examine these changes point by point. At line 6 the file hello.h is included.
This file, generated by rpcgen and whose contents were discussed previously, is assumed to reside locally. In this example, we pass information from the command line to the function main in the client program. Therefore, the empty parameter list for main has been replaced with standard C syntax to reference the argc and argv parameters. Following this, in the declaration section of the client program, a pointer to the data type CLIENT is allocated. A description of the CLIENT data type is shown in Figure 9.10. The CLIENT typedef is found in the include file <rpc/clnt.h>. The reference to the CLIENT data structure will be used when the client handle is generated.

Following the declarations in Program 9.3 is a section of code to obtain the host name on which the server process will be running. In the previous invocation this was not a concern, as all code was executed locally. However, in this new configuration, the client process must know the name of the host where the server process is located; it cannot assume the server program is running on the local host. The name of the host is passed via the command line as the first argument to hello_client. As written, there is no checking to determine if a valid, reachable host name has been passed. The client handle is created next (line 24). This is done with a call to the clnt_create library function. The clnt_create library function, which is part of a suite of remote procedure functions, is summarized in Table 9.3.

Figure 9.10 The CLIENT data structure.
    struct CLIENT {
      AUTH *cl_auth;                          /* authenticator           */
      struct clnt_ops {
        enum clnt_stat (*cl_call) (CLIENT *, u_long, xdrproc_t, caddr_t,
                                   xdrproc_t, caddr_t, struct timeval);
                                              /* call remote procedure   */
        void   (*cl_abort) (void);            /* abort a call            */
        void   (*cl_geterr) (CLIENT *, struct rpc_err *);
                                              /* get specific error code */
        bool_t (*cl_freeres) (CLIENT *, xdrproc_t, caddr_t);
                                              /* frees results           */
        void   (*cl_destroy) (CLIENT *);      /* destroy this structure  */
        bool_t (*cl_control) (CLIENT *, int, char *);
                                              /* the ioctl() of rpc      */
      } *cl_ops;
      caddr_t cl_private;                     /* private stuff           */
    };

Table 9.3. Summary of the clnt_create Library Call.

The clnt_create library call requires four arguments. The first, host, a character string reference, is the name of the remote host where the server process is located. The next two arguments, prog and vers, are, respectively, the program and version number. These values are used to indicate the specific remote procedure. Notice that the defined constants generated by rpcgen are used for these two arguments. The proto argument is used to designate the class of transport protocol. In Linux, this argument may be set to either tcp or udp. Keep in mind that UDP (User Datagram Protocol) encoded messages are limited to 8KB of data. Additionally, UDP is, by definition, less reliable than TCP (Transmission Control Protocol). However, UDP does require less system overhead.

Table 9.4. Summary of the clnt_pcreateerror Library Call.

If the clnt_create library call fails, it returns a NULL value. If this occurs, as shown in the example, the library routine clnt_pcreateerror can be invoked to display a message that indicates the reason for failure. See Table 9.4. The error message generated by clnt_pcreateerror, which indicates why the creation of the client handle failed, is appended to the string passed as clnt_pcreateerror's single argument (see Table 9.5 for details).
The argument string and the error message are separated by a colon, and the entire message is followed by a newline. If you want more control over the error messaging process, there is another library call, clnt_spcreateerror(char *s), that will return an error message string that can be incorporated in a personalized error message. In addition, the cf_stat member of the external structure rpc_createerr may be examined directly to determine the source of the error.

Table 9.5. clnt_create Error Messages.

Returning to the client program, the prototype for the print_hello function has been eliminated. The function prototype is now in the hello.h header file. The invocation of the print_hello function uses its new name, print_hello_1. The function now returns not an integer value but a pointer to an integer, and has two arguments (versus none). By design, all RPCs return a pointer reference. In general, all arguments passed to the RPC are passed by reference, not by value. As this function originally did not have any parameters, the identifier filler is used as a placeholder. The second argument to print_hello_1, client, is the reference to the client structure returned by the clnt_create call. The server component, which now resides in the file hello_server.c, is shown in Program 9.4.

Program 9.4 The hello_server.c component.

File : hello_server.c

         /*
            The SERVER program: hello_server.c
            This will be the server code executed by the "remote" process
          */
      +  #include <stdio.h>
         #include "hello.h"    /* hello.h is generated by rpcgen from hello.x */
         int *
         print_hello_1_svc(void *filler, struct svc_req *req) {
           static int ok;
     10    ok = printf("server : Hello, world.\n");
           return (&ok);
         }

The server component contains the code for the print_hello function. Notice that to accommodate the RPC, several things have been added and/or modified. First, as noted in the discussion of the client program, the print_hello function now returns an integer pointer, not an integer value (line 7).
In this example, the address that is to be returned is associated with the identifier ok. This identifier is declared to be of storage class static (line 9). It is imperative that the return identifier referenced be static, as opposed to local. Local identifiers are allocated on the stack, and a reference to their contents would be invalid once the function returns. The name of the function has had an additional _1 appended to it (the version number). As the -C option was used with rpcgen, the auxiliary suffix _svc has also been added to the function name. Do not be concerned by the apparent mismatch of function names. The mapping of the function invocation as print_hello_1 in the client program to print_hello_1_svc in the server program is done by the code found in the stub file hello_svc.c produced by rpcgen. The first argument passed to the print_hello function is a pointer reference. If needed, multiple items (representing multiple parameters) can be placed in a structure and the reference to the structure passed. In newer versions of rpcgen, the -N flag can be used to write multiple-argument RPCs when a parameter is to be passed by value, not reference, or when a value, not a pointer reference, is to be returned by the RPC. A second argument, struct svc_req *req, has also been added. This argument will be used to communicate invocation information. The client component (program) is compiled first. When only a few files are involved, a straight command-line compilation sequence is adequate. Later we will discuss how to generate a makefile to automate the compilation process. The compiler is passed the names of the two client files, hello_client.c (which we wrote) and hello_clnt.c (which was generated by rpcgen). We specify the executable to be placed in the file client. Figure 9.11 shows details of the compilation command.

Figure 9.11 Compiling the client component.
    linux$ gcc hello_client.c hello_clnt.c -o client

The server component (program) is compiled in a similar manner (Figure 9.12).

Figure 9.12 Compiling the server component.

    linux$ gcc hello_server.c hello_svc.c -o server

Initially, we test the program by running both the client and server programs on the same workstation. We begin by invoking the server by typing its name on the command line. The server process is not automatically placed in the background, and thus a trailing & is needed. [10] A check with the ps command will verify that the server process is running (see Figure 9.13).

[10] This is just the opposite of what happens in a Sun Solaris environment, where no trailing & is needed, as the process is automatically placed in the background.

Figure 9.13 Running the server program and checking for its presence with ps.

    linux$ server &
    [1] 21149
    linux$ ps -ef | grep server
     . . .
    gray  21149 15854  0 08:09 pts/5  00:00:00 server
    gray  21154 15854  0 08:10 pts/5  00:00:00 grep server

The ps command reports that the server process, in this case process ID 21149, is in memory. Its parent process ID is 15854 (in this case the login shell), and its associated controlling terminal device is listed as pts/5. The server process will remain in memory even after the user who initiated it has logged out. When generating and testing RPC programs, it is important that users remember to remove extraneous RPC-based server processes before they log out. When the process is run locally, the client program is invoked by name and passed the name of the current workstation. When this is done, the output will be as shown in Figure 9.14. Notice that since our system has an existing program called client that resides in the /usr/sbin directory, the call to our client program is made with a relative reference (i.e., ./client).

Figure 9.14 Running the client program on the same host as the server program.

    linux$ ./client linux
    client : Calling function.
    server : Hello, world.
    client : Mission accomplished.

While our client-server application still needs some polishing, we can test it in a setting whereby the server runs on one host and the client on another. Say we have the setting shown in Figure 9.15, where one host is called medusa and the other linux.

Figure 9.15. Running the client program on a remote host.

On the host linux, the server program is run in the background. On the host medusa, the client program is passed the name of the host running the server program. Interestingly, on the host medusa the messages "Calling function." and "Mission accomplished." are displayed, but the message "Hello, world." is displayed on the host linux. This is not surprising, as each program writes to its standard output, which in turn is associated with a controlling terminal (in our example, this is the same terminal that is associated with the user's login shell). However, it is just as likely that the server program will write to its standard output, but what it has written will not be seen. This happens when there is no controlling terminal device associated with the server process. Remember that the server process remains in memory until removed. It is not removed when the user logs out. However, when the user does log out, the operating system drops the controlling terminal device for the process (a call to ps will list the controlling terminal device for the process as ?). If, in a standard setting, there is no controlling terminal device associated with a process, anything the process sends to standard output goes into the bit bucket! There are several ways of correcting this problem. First, the output from the server could be hardcoded to be displayed on the console. In this scenario, the server would, upon invocation, execute an fopen on the /dev/console device. The FILE pointer returned by the fopen call could then be used with the fprintf function to display the output on the console.
Unfortunately, there is a potential problem with this solution: the user may not have access to the console device. If this is so, the fopen will fail. A second approach is to pass the console device of the client process to the server as the first parameter of the RPC. This is a somewhat better solution, but will still fail when the client and server processes are on different workstations with different output devices. A third approach is to have the server process return its message to the client and have the client display it locally. We should also examine the two RPC stub files generated by rpcgen. The hello_clnt.c file is quite small (Figure 9.16). This file contains the actual call to the print_hello_1 function.

Figure 9.16 The hello_clnt.c file.

File : hello_clnt.c

         /*
          * Please do not edit this file.
          * It was generated using rpcgen.
          */
      +  #include <string.h>        /* for memset */
         #include "hello.h"

         /* Default timeout can be changed
            using clnt_control() */
     10  static struct timeval TIMEOUT = { 25, 0 };

         int *
         print_hello_1(void *argp, CLIENT *clnt) {
           static int clnt_res;
      +    memset((char *)&clnt_res, 0, sizeof(clnt_res));
           if (clnt_call (clnt, print_hello,
                 (xdrproc_t) xdr_void, (caddr_t) argp,
                 (xdrproc_t) xdr_int, (caddr_t) &clnt_res,
                 TIMEOUT) != RPC_SUCCESS) {
     20      return (NULL);
           }
           return (&clnt_res);
         }

As we are using rpcgen to reduce the complexity of the RPC, we will not formally present the clnt_call. However, in passing, we note that the clnt_call function (which actually does the RPC) is passed, as its first argument, the client handle that was generated from the previous call to clnt_create. The second argument for clnt_call is obtained from the hello.h include file and is actually the print_hello constant therein. The third and fifth arguments are references to the XDR data encoding/decoding routines. Sandwiched between these arguments is a reference, argp, to the initial argument that will be passed to the remote procedure by the server process.
The sixth argument for clnt_call is a reference to the location where the return data will be stored. The seventh and final argument is the TIMEOUT value. While the cautionary comments indicate you should not edit this file, and in general you should not, the TIMEOUT value can be changed from the default of 25 seconds to some other reasonable user-imposed maximum. The code in the hello_svc.c file is much more complex and, in the interest of space, is not presented here. Interested readers are encouraged to enter the protocol definition in hello.x and to generate and view the hello_svc.c file. At this juncture it is sufficient to note that the hello_svc.c file contains the code for the server process. Once invoked, the server process will remain in memory. When notified by a client process, it will execute the print_hello_1_svc function.
The system must report a fault in terms of the impact it has on the ability of the device to provide service. Typically, loss of service is expected when:

- A PIO or DMA error is detected.
- Data corruption is detected.
- The device is locked or hung (for example, when a command never completes).
- A condition has occurred that the driver does not handle because it was regarded as impossible when the driver was designed.

If the device state, returned by ddi_get_devstate(9F), indicates that the device is not usable, the driver should reject all new and outstanding I/O requests and return (if possible) an appropriate error code (for example, EIO). For a STREAMS driver, M_ERROR or M_HANGUP, as appropriate, should be put upstream to indicate that the driver is not usable. The state of the device should be checked at each major entry point, optionally before committing resources to an operation, and after reporting a fault. If at any stage the device is found to be unusable, the driver should perform any cleanup actions that are required (for example, releasing resources) and return in a timely way. It should not attempt any retry or recovery action, nor does it need to report a fault; the state is not a fault, and it is already known to the framework and management agents. It should mark the current request and any other outstanding or queued requests as complete, again with an error indication if possible. The ioctl() entry point presents a problem in this respect: ioctl operations that imply I/O to the device (for example, formatting a disk) should fail if the device is unusable, while others (such as recovering error status) should continue to work. The state check might therefore need to be on a per-command basis. Alternatively, you can implement those operations that work in any state through another entry point or minor device mode, although this might be constrained by issues of compatibility with existing applications.
Note that close() should always complete successfully, even if the device is unusable. If the device is unusable, the interrupt handler should return DDI_INTR_UNCLAIMED for all subsequent interrupts. If interrupts continue to be generated, the eventual result is that the interrupt is disabled. The following function notifies the system that your driver has discovered a device fault:

    void ddi_dev_report_fault(dev_info_t *dip, ddi_fault_impact_t impact,
            ddi_fault_location_t location, const char *message);

The impact parameter indicates the impact of the fault on the device's ability to provide normal service, and is used by the fault management components of the system to determine the appropriate action to take in response to the fault. This action can cause a change in the device state. A service-lost fault causes the device state to be changed to DOWN, and a service-degraded fault causes the device state to be changed to DEGRADED. A device should be reported as faulty if:

- A PIO error is detected.
- Corrupted data is detected.
- The device has locked up.

Drivers should avoid reporting the same fault repeatedly, if possible. In particular, it is redundant (and undesirable) for drivers to report any errors if the device is already in an unusable state (see ddi_get_devstate(9F)). If a hardware fault is detected during the attach process, the driver must report the fault by using ddi_dev_report_fault(9F) as well as by returning DDI_FAILURE.
BlueSSLService alternatives and similar libraries

Based on the "Socket" category. Alternatively, view BlueSSLService alternatives based on common mentions on social networks and blogs.

- Starscream — Websockets in swift for iOS and OSX
- Socket.IO — Socket.IO client for iOS/OS X.
- SwiftSocket — The easy way to use sockets on Apple platforms
- SwiftWebSocket — A high performance WebSocket client library for swift.
- BlueSocket — Socket framework for Swift using the Swift Package Manager. Works on iOS, macOS, and Linux.
- Socks — 🔌 Non-blocking TCP socket layer, with event-driven server and client.
- SocketIO-Kit — Socket.io iOS and OSX Client compatible with v1.0 and later
- WebSocket — WebSocket implementation for use by Client and Server
- RxWebSocket — Reactive WebSockets
- SwiftDSSocket — DispatchSource based socket framework written in pure Swift
- DNWebSocket

README

BlueSSLService

SSL/TLS Add-in framework for BlueSocket in Swift using the Swift Package Manager. Works on supported Apple platforms (using Secure Transport) and on Linux (using OpenSSL).

Prerequisites

Swift

- Swift Open Source swift-4.0.0-RELEASE toolchain (Minimum REQUIRED for latest release)
- Swift Open Source swift-4.2-RELEASE toolchain (Recommended)
- Swift toolchain included in Xcode Version 10.0 (10A255) or higher.

macOS

- macOS 10.11.6 (El Capitan) or higher.
- Xcode Version 9.0 (9A325) or higher using one of the above toolchains.
- Xcode Version 10.0 (10A255) or higher using the included toolchain (Recommended).
- Secure Transport is provided by macOS.
iOS

- iOS 10.0 or higher
- Xcode Version 9.0 (9A325) or higher using one of the above toolchains.
- Xcode Version 10.0 (10A255) or higher using the included toolchain (Recommended).
- Secure Transport is provided by iOS.

Linux

- Ubuntu 16.04 (or 16.10, but only tested on 16.04) and 18.04.
- One of the Swift Open Source toolchains listed above.
- OpenSSL is provided by the distribution. Note: 1.0.x, 1.1.x and later releases of OpenSSL are supported.
- The appropriate libssl-dev package is required to be installed when building.

Other Platforms

- BlueSSLService is NOT supported on watchOS since POSIX/BSD/Darwin sockets are not supported on the actual device, although they are supported in the simulator.
- BlueSSLService should work on tvOS but has NOT been tested.

Note: See Package.swift for details.

Build

To build SSLService from the command line:

    % cd <path-to-clone>
    % swift build

Testing

To run the supplied unit tests for SSLService from the command line:

    % cd <path-to-clone>
    % swift build
    % swift test

Using BlueSSLService

Before starting

The first thing you need to do is import both the Socket and SSLService frameworks. This is done by the following:

    import Socket
    import SSLService

Creating the Configuration

Both clients and servers require at a minimum the following configuration items:

- CA Certificate (either caCertificateFile or caCertificateDirPath)
- Application certificate (certificateFilePath)
- Private Key file (keyFilePath)

or

- Certificate Chain File (chainFilePath)

or, if using self-signed certificates:

- Application certificate (certificateFilePath)
- Private Key file (keyFilePath)

or, if running on Linux (for now),

- A string containing a PEM formatted certificate

or, if running on macOS:

- Certificate Chain File (chainFilePath) in PKCS12 format

or,

- No certificate at all.

BlueSSLService provides six ways to create a Configuration supporting the scenarios above. Only the last version is supported on Apple platforms. On Linux, ALL versions are supported.
This is due to the limits imposed on the current implementation of Apple Secure Transport.

- init() — This API allows for the creation of a default configuration. This is equivalent to calling the next initializer without changing any parameters.

- init(withCipherSuite cipherSuite: String? = nil, clientAllowsSelfSignedCertificates: Bool = true) — This API allows for the creation of a configuration that does not contain a backing certificate or certificate chain. You can optionally provide a cipherSuite and decide whether to allow, when in client mode, use of self-signed certificates by the server.

- init(withCACertificatePath caCertificateFilePath: String?, usingCertificateFile certificateFilePath: String?, withKeyFile keyFilePath: String? = nil, usingSelfSignedCerts selfSigned: Bool = true, cipherSuite: String? = nil) — This API allows you to create a configuration using a self-contained Certificate Authority (CA) file. The second parameter is the path to the Certificate file to be used by the application to establish the connection. The next parameter is the path to the Private Key file used by the application, corresponding to the Public Key in the Certificate. If you're using self-signed certificates, set the last parameter to true.

- init(withCACertificateDirectory caCertificateDirPath: String?, usingCertificateFile certificateFilePath: String?, withKeyFile keyFilePath: String? = nil, usingSelfSignedCerts selfSigned: Bool = true, cipherSuite: String? = nil) — This API allows you to create a configuration using a directory of Certificate Authority (CA) files. These CA certificates must be hashed using the Certificate Tool provided by OpenSSL. The following parameters are identical to the previous API.

- init(withPEMCertificateString certificateString: String, usingSelfSignedCerts selfSigned: Bool = true, cipherSuite: String? = nil) — This API is used when supplying a PEM formatted certificate presented as a String. NOTE: At present, this API is only available on Linux.
- init(withChainFilePath chainFilePath: String? = nil, withPassword password: String? = nil, usingSelfSignedCerts selfSigned: Bool = true, clientAllowsSelfSignedCertificates: Bool = false, cipherSuite: String? = nil) — This API allows you to create a configuration using a single Certificate Chain File (see note 2 below). Add an optional password (if required) using the second parameter. Set the third parameter to true if the certificates you are using are self-signed, otherwise set it to false. If configuring a client and you want that client to be able to connect to servers using self-signed certificates, set the fourth parameter to true.

Note 1: All Certificate and Private Key files must be PEM format. If supplying a certificate via a String, it must be PEM formatted.

Note 2: If using a certificate chain file, the certificates must be in PEM format and must be sorted starting with the subject's certificate (actual client or server certificate), followed by intermediate CA certificates if applicable, and ending at the highest level (root) CA.

Note 3: For the first two versions of the API, if your Private Key is included in your certificate file, you can omit this parameter and the API will use the same file name as specified for the certificate file.

Note 4: If you desire to customize the cipher suite used, you can do so by specifying the cipherSuite parameter when using one of the above initializers. If not specified, the default value is set to DEFAULT on Linux. On macOS, setting of this parameter is currently not supported and attempting to set it will result in unpredictable results. See the example below.

Note 5: If you're running on macOS, you must use the last form of init for the Configuration and provide a certificate chain file in PKCS12 format, supplying a password if needed.
Example

The following illustrates creating a configuration (on Linux) using the second form of the API above, using a self-signed certificate file as the key file and not supplying a certificate chain file. It also illustrates setting the cipher suite to ALL from the default:

    import SSLService

    ...

    myConfig.cipherSuite = "ALL"

    ...

Note: This example takes advantage of the default parameters available on the SSLService.Configuration.init function. Also, changing of the cipher suite on macOS is currently not supported.

Creating and using the SSLService

The following API is used to create the SSLService:

- init?(usingConfiguration config: Configuration) throws — This will create an instance of the SSLService using a previously created Configuration.

Once the SSLService is created, it can be applied to a Socket instance that's just been created. This needs to be done before using the Socket. The following code snippet illustrates how to do this (again using Linux). Note: Exception handling omitted for brevity.

    import Socket
    import SSLService

    ...

    // Create the configuration...

    // Create the socket...
    var socket = try Socket.create()

    guard let socket = socket else {
        fatalError("Could not create socket.")
    }

    // Create and attach the SSLService to the socket...
    //  - Note: if you're going to be using the same
    //          configuration over and over, it'd be
    //          better to create it in the beginning
    //          as a `let` constant.
    socket.delegate = try SSLService(usingConfiguration: myConfig)

    // Start listening...
    try socket.listen(on: 1337)

The example above creates an SSL server socket. Replacing the socket.listen function with a socket.connect would result in an SSL client being created, as illustrated below:

    // Connect to the server...
    try socket.connect(to: "someplace.org", port: 1337)

SSLService handles all the negotiation and setup for the secure transfer of data.
The determining factor for whether a Socket is set up as a server or client socket is the API used to initiate a connection. listen() will cause the Socket to be set up as a server socket. Calling connect() results in a client setup.

Extending Connection Verification

SSLService provides a callback mechanism should you need to specify additional verification logic. After creating the instance of SSLService, you can set the instance variable verifyCallback. This instance variable has the following signature:

    public var verifyCallback: ((_ service: SSLService) -> (Bool, String?))? = nil

Setting this callback is not required. It defaults to nil unless set. The first parameter passed to your callback is the instance of SSLService that has this callback. This will allow you to access the public members of the SSLService instance in order to do additional verification. Upon completion, your callback should return a tuple. The first value is a Bool indicating the success or failure of the routine. The second value is an optional String used to provide a description in the case where verification failed. In the event of callback failure, an exception will be thrown by the internal verification function.

Important Note: To effectively use this callback requires knowledge of the platform's underlying secure transport service: Apple Secure Transport on supported Apple platforms and OpenSSL on Linux.

Skipping Connection Verification

If desired, SSLService can skip the connection verification. To accomplish this, set the property skipVerification to true after creating the SSLService instance. However, if the verifyCallback property (described above) is set, that callback will be called regardless of this setting. The default for this property is false. It is NOT recommended that you skip the connection verification in a production environment unless you are providing verification via the verifyCallback.

Community

We love to talk server-side Swift and Kitura.
Join our Slack to meet the team!

License

This library is licensed under Apache 2.0. Full license text is available in LICENSE.

Note that all licence references and agreements mentioned in the BlueSSLService README section above are relevant to that project's source code only.
Understanding Theme UI: 6 - The Hacks

In this post I'm going to explain some of my Theme UI "hacks"... to be honest they're not really hacks, but you won't necessarily find these in the docs since they mostly relate to standard CSS selectors, and it wasn't immediately obvious to me that the sx prop can be used in this way. If you're new to Theme UI I'd suggest having a read of the first five posts in this series to bring you up to speed.

Variants - Buttons

There are some obvious uses for variants that are explained in the docs, but one method I use a fair bit isn't covered by the docs, so here's one way you can use variants inside the theme object. Below is a method I use for styling button variants, and as the docs mention, the default variant is theme.buttons.primary. You'll see in the source below that if the <Button /> component is used without a variant prop it'll default to the styles defined in theme.buttons.primary.

Src 👇

    <Button>Primary</Button>
    <Button variant="secondary">Secondary</Button>
    <Button variant="ghost">Ghost</Button>

Output 👇

Theme 👇

    // path-to-theme/index.js
    export default {
      colors: {
        text: '#FFFFFF',
        muted: '#8b87ea',
        primary: '#f056c7',
        secondary: '#c39eff',
        background: '#131127',
      },
      buttons: {
        primary: {
          backgroundColor: 'primary',
          borderRadius: 0,
          color: 'text',
          cursor: 'pointer',
          minWidth: 120,
          px: 3,
          py: 2,
        },
        secondary: {
          variant: 'buttons.primary',
          color: 'background',
          backgroundColor: 'secondary',
        },
        ghost: {
          variant: 'buttons.primary',
          color: 'muted',
          backgroundColor: 'background',
        },
      },
    }

To create alternative button variants, I define them in the same buttons object and give each one a name, e.g. "secondary" and "ghost". Like in normal CSS, you'll want the additional button variants to extend the default class, in this case it's called "primary". This is so you don't have to re-define or duplicate CSS properties for the padding, min-width etc., and the way to do this in Theme UI is to use the variant "prop" inside the theme object.
Looking at the object for secondary, you'll notice it has a variant of buttons.primary. This means it'll extend all the CSS properties from the default button and will then apply / overwrite new CSS properties for its color and backgroundColor. You can use this approach for as many button variants as you require, but do always "extend" using the variant before applying new styles.

Variants - typography

This one is a bit gnarly so strap in. The concept here is that the variant prop used on the component can be used to point to a specific set of styles in the theme. This can be any theme key; in the below example it's styles, and as seen in the button example above, you can also extend from a theme object using the variant key.

Src 👇

    <Heading as='h3' variant='styles.h3'>Heading h3</Heading>
    <Heading as='h4' variant='styles.h4'>Heading h4</Heading>

Output 👇

Heading h3

Heading h4

Theme 👇

    // path-to-theme/index.js
    export default {
      fonts: {
        heading: 'Inconsolata, monospace',
      },
      fontSizes: [12, 16, 18],
      text: {
        heading: {
          fontFamily: 'heading',
          fontSize: 2,
        },
      },
      styles: {
        h3: {
          variant: 'text.heading',
          color: 'secondary',
        },
        h4: {
          variant: 'text.heading',
          color: 'text',
        },
      },
    }

First thing to note with typography is that you'll most likely want to use the as prop to determine the HTML dom node, but... this doesn't automatically mean HTML dom nodes map to styles. As an aside, it's quite possible you'll want to style h(n) tags differently on different pages, and de-coupling as from variant is quite handy for that, albeit a bit complicated to grasp at first. With that out of the way we can move on to variants. You'll see in the above <Heading as='h3' variant='styles.h3'/> I map the variant to styles.h3, and if you look at the theme object for styles.h3, it first extends the styles defined in text.heading and then applies a new CSS property for color.
These styles are from my theme: gatsby-theme-terminal. The typography treatments are quite simple, but hopefully you can see from the above how to map from component usage to a theme key using the variant prop, and from there how to extend from another theme key. If you're coming to Theme UI from .scss it's a bit like @extend, and if you're coming from css-modules it's a bit like composes.

FYI: The reason I point typography to styles is because the styles key is what Theme UI uses when styling HTML DOM nodes found in markdown (.md) or MDX (.mdx).

CSS selectors

Every now and then you might run into an issue where you'll need to target a sibling or child of a Theme UI component. I had this recently when I used the Reach UI menu-button. For the purposes of an example I've removed a lot of the code, but the TLDR is that you can style any sibling or child from the sx prop using normal CSS selectors.

To target a child by id you can do this 👇

```jsx
<Box
  sx={{
    '#menu--1': {
      borderColor: 'primary',
      borderStyle: 'solid',
      borderWidth: '1px',
    },
  }}
>
  <div id="menu--1">...</div>
</Box>
```

To target a child by class you can do this 👇

```jsx
<Box
  sx={{
    '.menu': {
      borderColor: 'primary',
      borderStyle: 'solid',
      borderWidth: '1px',
    },
  }}
>
  <div className="menu">...</div>
</Box>
```

To target a child by data- attribute you can do this 👇

```jsx
<Box
  sx={{
    '[data-menu]': {
      borderColor: 'primary',
      borderStyle: 'solid',
      borderWidth: '1px',
    },
  }}
>
  <div data-menu>...</div>
</Box>
```

To target an adjacent sibling by class you can do this 👇

```jsx
<Box
  sx={{
    '+ .menu': {
      borderColor: 'primary',
      borderStyle: 'solid',
      borderWidth: '1px',
    },
  }}
/>
<div className="menu">...</div>
```

To target a general sibling by class you can do this 👇

```jsx
<Box
  sx={{
    '~ .menu': {
      borderColor: 'primary',
      borderStyle: 'solid',
      borderWidth: '1px',
    },
  }}
/>
<div className="menu">...</div>
<div className="menu menu-items">...</div>
```

I've written a post about how to use "style objects" for use with styled-components, but the same approach works with Theme UI.
You can read more about "style objects" in this post: styled-components Style Objects.

Svg paths

One "hack" I used extensively in BumHub was to target the Svg's <path> tags via a className to set their fill to colors defined in the theme. This way, if you change any color values for use around your site, all your Svgs will update the same as any other HTML DOM nodes. The colors seen below are inherited from the theme used in this blog.

```jsx
import React, { FunctionComponent } from 'react'
import { Box } from 'theme-ui'

export const LogoIcon: FunctionComponent = () => {
  return (
    <Box as="svg">
      <g>
        <g className="logo-detail">
          <path d="..." />
          <path d="..." />
          <path d="..." />
          <path d="..." />
          <path d="..." />
        </g>
        <path className="logo-outline" d="..." />
      </g>
    </Box>
  )
}
```

css keyframes

This is a lesser understood part of CSS, and arguably even more so with Theme UI as it's not mentioned in the docs anywhere. However, it is possible to animate using CSS keyframes by importing keyframes from @emotion/react.

To help demonstrate how keyframes work, here's a very simple loading component called <MrKeyframes />, and you can find the src here.

```jsx
import React from 'react'
import { Box, Grid } from 'theme-ui'
import { keyframes } from '@emotion/react'

export const MrKeyframes = () => {
  const size = '8px'
  const dots = new Array(10).fill(null)

  const animation = keyframes({
    '0%': { opacity: 1 },
    '20%': { opacity: 0 },
    '100%': { opacity: 1 },
  })

  return (
    <Grid
      sx={{
        gap: 1,
        p: 5,
        textAlign: 'center',
        justifyContent: 'center',
      }}
    >
      Loading
      <Grid
        sx={{
          gridAutoFlow: 'column',
          gap: 2,
        }}
      >
        {dots.map((dot, index) => (
          <Box
            key={index}
            sx={{
              animationDelay: `${index / 10}s`,
              animationDuration: '1.2s',
              animationTimingFunction: 'linear',
              animationIterationCount: 'infinite',
              animationName: animation.toString(),
              backgroundColor: 'primary',
              borderRadius: `${size}`,
              height: `${size}`,
              width: `${size}`,
              opacity: 0,
            }}
          />
        ))}
      </Grid>
    </Grid>
  )
}
```

functional values

I've talked a lot about how Theme UI maps CSS properties to specific theme objects, e.g. color and background-color automatically map to
colors... but if you need to access a value from your theme and map it to a different CSS property, you can do so by using functional values. The idea here is that you pass the theme object in via an inline function, and by using template literals you can construct any CSS value you need.

Src 👇

```jsx
<Box
  sx={{
    boxShadow: (theme) => `0 0 7px 3px ${theme.colors.secondary}`,
    backgroundColor: 'surface',
    color: 'secondary',
    p: 3,
  }}
>
  I'm a Box
</Box>
```

I'm sure I've implemented a number of other CSS methods using Theme UI on various projects, but I can't think of any more right now. I'll endeavour to update this post as and when any new ones come to mind.

That just about wraps up this series on Theme UI. If you have any questions, please feel free to find me on Twitter.
https://paulie.dev/posts/2021/02/theme-ui-alpha-6/
XML - Managing Data Exchange/Google earth

KML Introduction

(most content here is directly quoted from the KML wikipedia article)

KML (Keyhole Markup Language) is an XML-based markup language for managing the display of three-dimensional geospatial data in the programs Google Earth, Google Maps, Google Mobile, ArcGIS Explorer and World Wind. (The word Keyhole is an earlier name for the software that became Google Earth; the software was produced in turn by Keyhole, Inc, which was acquired by Google in 2004. The term "Keyhole" actually honors the KH-11 reconnaissance satellites, the original eye-in-the-sky military reconnaissance system now some 30 years old.)

The KML file specifies a set of features (placemarks, images, polygons, 3D models, textual descriptions, etc.) for display in Google Earth, Maps and Mobile. Each place always has a longitude and a latitude. Other data can make the view more specific, such as tilt, heading and altitude, which together define a "camera view". KML shares some of the same structural grammar as Geography Markup Language (GML). Some KML information cannot be viewed in Google Maps or Mobile.

KML files are very often distributed as KMZ files, which are zipped KML files with a .kmz extension. When a KMZ file is unzipped, a single "doc.kml" is found along with any overlay and icon images referenced in the KML.

Example KML document:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <description>New York City</description>
    <name>New York City</name>
    <Point>
      <coordinates>-74.006393,40.714172,0</coordinates>
    </Point>
  </Placemark>
</kml>
```

The MIME type associated with KML is application/vnd.google-earth.kml+xml. The MIME type associated with KMZ is application/vnd.google-earth.kmz.

Basic KML Document Types

For an XML document to recognize KML-specific tags, you must declare the KML namespace (listed below).
```xml
<kml xmlns="http://www.opengis.net/kml/2.2">
```

You will see this declaration in all the example files listed. In order to use the examples provided in this chapter, you will need to copy and paste the text into any text editor. Next you will save the file as a .kml. This can be done by choosing "save as" and naming the file with a .kml extension (you might have to surround the name in quotes, i.e. "test.kml").

Placemarks

Placemarks simply make a clickable pinpoint inside Google Earth at an exact location based on coordinates. This can be useful for marking a point of interest or a beginning and ending destination of a trip. The example KML document in the introduction uses the Placemark tag. If you want to move the placemark to a different location, all you would change are the coordinates.

Paths

Paths are a series of connected coordinates that can be edited with line styles for a bolder appearance inside Google Earth. The height and color can be adjusted for a more exaggerated appearance and better clarity. The following example is a path from Atlanta, Georgia to Nashville, Tennessee. The code may look a bit complicated, but it is mostly just styling/formatting tags, with the 4 actual coordinates near the end. So if you wanted to use the same style wall but just make a different path, all you would do is change the coordinates.
```xml
<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Document>
    <name>Paths</name>
    <description>Path from Atlanta to Nashville</description>
    <Style id="yellowLineGreenPoly">
      <LineStyle>
        <color>7f00ffff</color>
        <width>4</width>
      </LineStyle>
      <PolyStyle>
        <color>7f00ff00</color>
      </PolyStyle>
    </Style>
    <Placemark>
      <name>Atlanta to Nashville</name>
      <description>Wall structured path</description>
      <styleUrl>#yellowLineGreenPoly</styleUrl>
      <LineString>
        <extrude>1</extrude>
        <tessellate>1</tessellate>
        <altitudeMode>absolute</altitudeMode>
        <coordinates>
          -84.40204442007513,33.75488573910702,83269
          -84.37837132006098,33.82567285375923,83269
          -84.79700041857893,35.30711817667424,83269
          -86.79210094043326,36.15389499208452,83269
        </coordinates>
      </LineString>
    </Placemark>
  </Document>
</kml>
```

Overlays

Overlays are graphics that can be placed over an area in Google Earth marked by coordinates. These graphics can show how an area looked at a different point in time or during a special event (like a volcanic eruption). This overlay example comes from Google's KML samples webpage and shows what Mt. Etna looked like during an actual eruption.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Folder>
    <name>Ground Overlays</name>
    <description>Examples of ground overlays</description>
    <GroundOverlay>
      <name>Large-scale overlay on terrain</name>
      <description>Overlay shows Mount Etna erupting on July 13th, 2001.</description>
      <Icon>
        <href></href>
      </Icon>
      <LatLonBox>
        <north>37.91904192681665</north>
        <south>37.46543388598137</south>
        <east>15.35832653742206</east>
        <west>14.60128369746704</west>
        <rotation>-0.1556640799496235</rotation>
      </LatLonBox>
    </GroundOverlay>
  </Folder>
</kml>
```

You can see from the code that it takes the image etna.jpg and places it over the coordinates listed.

Polygons

Polygons are a neat feature of Google Earth that allow 3-D shapes to be molded anywhere in Google Earth.
These shapes can be useful for making neat presentations or just showing the world how a structure actually looks. This example is a polygon of Turner Field (the Atlanta Braves' home stadium) in Georgia. There is no styling on the polygon, to keep the code simple.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>Turner Field</name>
    <Polygon>
      <extrude>1</extrude>
      <altitudeMode>relativeToGround</altitudeMode>
      <outerBoundaryIs>
        <LinearRing>
          <coordinates>
            -84.39024224888713,33.73459764262901,28
            -84.38961532726215,33.73451197628319,28
            -84.38830478530726,33.7350571795205,28
            -84.38811742696677,33.73579651137399,28
            -84.38856034410841,33.73618350237595,28
            -84.38930790023139,33.73647497375488,28
            -84.38997872537549,33.73655338302832,28
            -84.39051294303495,33.73605785090994,28
            -84.39056804786146,33.73528763589146,28
            -84.39024224888713,33.73459764262901,28
          </coordinates>
        </LinearRing>
      </outerBoundaryIs>
    </Polygon>
  </Placemark>
</kml>
```

As you can see from the code, the polygon's 3-D shape is made up from the latitude and longitude coordinates, with the height determined by the third value in each coordinate tuple (28 in this case).

Google Earth

Google Earth is the name of Google's free software that is responsible for handling these KML documents. It is a virtual world created by a collage of satellite images, where a user can manipulate the Earth in any way to see its landscapes, oceans, and cities. You can find more information and download the software here.

User Interface

The user interface is fairly straightforward and is extremely easy for computer-illiterate users to just jump right in and start exploring. You can completely ignore the toolbars and buttons if you want and just click and grab the Earth to shake, spin, or roll it as you please. To zoom in, simply right-click and pull down or up depending on the speed at which you prefer to "fall" toward the surface.
If you would like help finding a location, you can type it (in the form of City, State) into the "Fly to..." search box. Google Earth will then spin around and zoom in to the location entered. If you want to take a virtual vacation, Google has some preset locations saved in the window below "Fly to...", labeled "Sightseeing." Simply click on one of these locations and be taken there, where pictures, articles, and comments can all be viewed.

Points of Interest

Points of interest that Google has already marked for users are shown with different colored dots and icons. These can be clicked on for a variety of information, ranging from a simple comment to a panoramic photograph of that exact location. Advanced users interested in the code make-up of such a feature can right-click on any of Google's marks and choose "Copy." This will copy all the code for that feature, which you can then paste into a text document to see all of the tags and references.

External links

- KML Documentation
- Developer Knowledge Base: KML in Google Earth
- KML Developer Support group
- KMLImporter importing placemarks into NASA World Wind
- Use hierarchical maps (Mindmaps) to create and manage KML files and convert Excel data to KML.
- Google Earth Connectivity Add-on for ArchiCAD 9

Wikipedia in other languages

- w:ar:كيه إم إل (Arabic)
- w:de:Keyhole Markup Language (German)
- w:es:KML (Spanish)
- w:it:Keyhole Markup Language (Italian)
- w:hu:Keyhole Markup Language (Hungarian)
- w:nl:Keyhole Markup Language (Dutch)
- w:pl:Keyhole Markup Language (Polish)
- w:ru:KML (Russian)
https://en.m.wikibooks.org/wiki/XML_-_Managing_Data_Exchange/Google_earth
Hi, I managed to simulate a parenting behavior in the Xpresso node editor. You can see it in the illustration video below: it works well. As you can see, the child follows the parent even though it is not parented in the hierarchy. My problem is that I want it in script form (executes only once) rather than in the Xpresso editor (executes every time, live).

Here is my code so far. It is the same logic I used in the node editor.

```python
import c4d
from c4d import gui

# Main function
def main():
    parent = doc.SearchObject("parent")
    child = doc.SearchObject("child")

    # Since there is no direct parent, this is the default world matrix
    child_parent_wrd_mat = child.GetUpMg()
    parent_wrd_mat = parent.GetMg()
    child_offset_mat = ~parent.GetMg() * child.GetMg()
    overall_mat = child_parent_wrd_mat * parent_wrd_mat * child_offset_mat

    # Rotating the parent
    parent[c4d.ID_BASEOBJECT_REL_ROTATION, c4d.VECTOR_Y] += 0.5

    # This should also move the child based on the parent's rotation, but it does not.
    # The same thing happens if I rotate the parent before retrieving the matrices.
    # Again, I'm not sure how to implement it in a script, only as nodes.
    child.SetMg(overall_mat)
    c4d.EventAdd()

# Execute main()
if __name__ == '__main__':
    main()
```

You can also check the C4D file with the Xpresso parent behavior here:

Regards,
Ben

That the current code doesn't work is obvious (after you change the "parent", you do not retrieve its matrix again, so the change is not reflected in the matrix you apply to the "child"). That the code doesn't work either when you change the parent first is due to the matrices that you multiply. overall_mat contains parent_wrd_mat, which is the parent's world matrix.
It also contains child_offset_mat, which contains the parent's inverted world matrix. Multiplying a matrix with its own inverse results in the identity matrix, which amounts to no transformation at all. So essentially you have multiplied away the parent's contribution to the overall_mat.

What you need to do is to make the explicit transformation "in the middle":

```python
import c4d
from c4d import gui

def main():
    parent = doc.SearchObject("parent")
    child = doc.SearchObject("child")

    # Remember the transformation into the parent system (before the change)
    transformToParentSystem = ~parent.GetMg()

    # Transforming the parent
    parent[c4d.ID_BASEOBJECT_REL_ROTATION, c4d.VECTOR_X] += c4d.utils.DegToRad(45.0)
    parent[c4d.ID_BASEOBJECT_REL_POSITION, c4d.VECTOR_X] += 123
    parent[c4d.ID_BASEOBJECT_REL_POSITION, c4d.VECTOR_Y] += -10
    parent[c4d.ID_BASEOBJECT_REL_POSITION, c4d.VECTOR_Z] += 2

    # Remember the transformation out of the parent system (after the change)
    transformFromParentSystem = parent.GetMg()

    overall_mat = transformFromParentSystem * transformToParentSystem * child.GetMg()
    child.SetMg(overall_mat)
    c4d.EventAdd()

if __name__ == '__main__':
    main()
```

Here, you first remember the transformation into the parent system in transformToParentSystem. Then you can make the desired transformation to the parent object. Then you remember the transformation out of the parent system in transformFromParentSystem. These two remembered matrices are now not the inverse of each other, as you have changed the parent in between!

Now, the overall matrix for the child is composed from: its own matrix, which is then transformed into the parent's system (before the change), which is then transformed out of the parent's system (after the change).

@Cairyn Thanks for the detailed explanation. It works as expected.

Apologies, I might have spoken too soon. Sorry to trouble you again, but I'm having a problem using the script if the child has a hierarchical parent.
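The matrix algebra behind this "sandwich" can be demonstrated outside Cinema 4D with plain Python. The sketch below uses 2x2 rotation matrices instead of c4d.Matrix objects, so the names and helpers are illustrative only, but the composition is the same: new child = parent_after * inverse(parent_before) * child_before.

```python
# Demonstrating the "transform in the middle" idea with plain 2x2 rotation
# matrices (no Cinema 4D required). This is only an illustration of the
# math used by the script above, not runnable inside C4D as-is.
import math

def rot(a):
    """2x2 rotation matrix for angle a (in radians)."""
    return [[math.cos(a), -math.sin(a)], [math.sin(a), math.cos(a)]]

def mul(A, B):
    """Matrix product A * B for 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv(R):
    """The inverse of a rotation matrix is its transpose."""
    return [[R[0][0], R[1][0]], [R[0][1], R[1][1]]]

parent_before = rot(math.radians(30))   # stands in for parent.GetMg() before the change
child_before = rot(math.radians(10))    # stands in for child.GetMg()

parent_after = rot(math.radians(75))    # the parent rotated by a further 45 degrees

# child' = parent_after * ~parent_before * child  (same order as the script)
child_after = mul(mul(parent_after, inv(parent_before)), child_before)

# The child has been carried along by the parent's extra 45 degrees:
expected = rot(math.radians(55))
```

Because inverse(parent_before) and parent_after are no longer inverses of each other, their product is exactly the parent's delta, which is what carries the child along.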
You can see the problem here: with a successful parenting script, the red and blue splines should match. But as you can see, they do not. You can also check the revised illustration file here:

I have revised the code to include the child's hierarchical parent, but it's not working as expected.

```python
import c4d
from c4d import gui

def main():
    parent = doc.SearchObject("COG_con")
    child = doc.SearchObject("wrist_IK_L_con")

    transformToParentSystem = ~parent.GetMg()
    transform_To_Pseudo_Parent_System = ~child.GetUpMg()

    # Transforming the parent
    parent[c4d.ID_BASEOBJECT_REL_POSITION, c4d.VECTOR_X] = 10
    parent[c4d.ID_BASEOBJECT_REL_ROTATION, c4d.VECTOR_X] = 0.5

    transformFromParentSystem = parent.GetMg()
    transform_From_Pseudo_Parent_System = child.GetUpMg()
    pseudo_parent_local_space = ~parent.GetMg() * child.GetUpMg()

    overall_mat = transform_From_Pseudo_Parent_System * transform_To_Pseudo_Parent_System * transformFromParentSystem * transformToParentSystem * child.GetMg()
    child.SetMg(overall_mat)
    c4d.EventAdd()

if __name__ == '__main__':
    main()
```

@bentraje The script is fine (the original one I gave you). You do not need all the parent Mgs that you inserted, because GetMg() is already giving you the global matrix, which includes the object's own local matrix as well as all the parent matrices. Your problem is something completely different: you have a PSR constraint on the child's parent. Switch that off, and my original script works fine. The reason is: (think about it first, as an exercise, then return here...) TL;DR: the logic in your scene cannot work. I don't know what you want to achieve here, so I can't advise any further.

Thanks for the response.

RE: Switch that off, and my original script works fine.
Yes, I forgot to mention the PSR constraint. Unfortunately, I can't switch it off as it is part of the rig (i.e. a space switching feature).

I also tried the following:
- Turn off the PSR constraint.
- Execute the script. It works as expected.
- Turn the PSR constraint back on. And the object is thrown out of position again.

RE: transformed twice.
That makes sense. Thanks for the rundown.
I was able to come up with a solution to bypass/negate it. It seems to work as expected, at least for now in my use case. Wdythink? I'm feeling I'm missing something out.

RE: I don't know what you want to achieve
I'm creating a script that can mirror/flip a pose. Think of a run cycle that needs basically 8 poses to read as one full cycle. With the mirror/flip pose I can create only 4 poses and mirror the rest. I have the mirroring done; my problem is only when the mirror plane is rotated. That's why I need the "parenting" to happen only once (i.e. at script time), since the parenting at run time is handled by various Xpresso/PSR/expressions.

@bentraje said in Parent Behavior on a "Script" Form:

RE: Switch that off, and my original script works fine.
Yes, I forgot to mention the PSR constraint. Unfortunately, I can't switch it off as it is part of the rig (i.e. a space switching feature).

I didn't mean "switch it off forever" but "switch it off to see what I mean".

I also tried the following: Turn off PSR constraint. Execute script. Works as expected. Turn on PSR constraint. And the object is thrown out of position again.

Yes, because you still have changed your parent and you still take the parent's child as constraint. The moment you switch the constraint on again, you apply the transformation a second time, this time indirectly through the constraint (or rather, the constraint's target's parent).

RE: transformed twice.

Here's what I added at the end of the script:

Yes... if you apply the PSR tag first, by using ExecutePasses, SetMg will take that change to the parent into account already, which cancels out the double transformation. (You don't need the first line, by the way; setting the child's matrix once will do.)

In the context of a script, this is okay to do. Personally, I find it a bit convoluted, but I don't feel like digging for another solution.
No, it's okay for what it does. I'd have to spend a few hours going through the use case to fully understand the mirroring process with active constraints, so just go ahead with your current approach.

RE: I don't know what you want to achieve
I'm creating a script that can mirror/flip a pose.

Ah. Don't the built-in mirror functions suffice? I thought there were some (but I don't animate so often, so maybe I'm just mistaken).

Thanks for the confirmation. I'll just continue with the revised script until I find a bug (lol).

RE: Ah. Don't the built-in mirror functions suffice?
Not really. Correct me if I'm wrong, but the built-in mirror tool 1) mirrors only the objects themselves and not the mirror plane, and 2) creates an additional copy when it mirrors. It might be good when you are modelling and rigging, but not for animating.

Again, thank you for the detailed responses. Appreciate it a lot. Have a great day ahead!
https://plugincafe.maxon.net/topic/12665/parent-behavior-on-a-script-form
Microsoft Visual C++ can compile code very quickly if it is set up properly. There are a few basic things to look at: precompiled headers, #pragma once in header files, multi-process compilation (/MP), and parallel project builds. Before we go into the details of each of these items, let's see how we can enable some build time statistics. Please note that all my examples and notes relate to Visual Studio 2008 Professional Edition.

In VC++, you can get the build time by going into Tools > Options > Projects and Solutions > VC++ Project Settings > Build Timing. Set this value to true. This will print the complete build time to the output window. Enable this before you try to improve the build time, so that you can measure the performance correctly.

In VC++, you can get the list of files included into each compilation unit (CPP) by enabling "Show Includes" in the project properties, by going into C/C++ > Advanced > Show Includes. Set this value to true. This will print the included files with the compiler output.

It is often desirable to avoid recompiling a set of header files, especially when they introduce many lines of code and the primary source files that #include them are relatively small. The precompiled-header options are /Yc (Create Precompiled Header File) and /Yu (Use Precompiled Header File). Use /Yc to create a precompiled header. Select /Yu to use an existing precompiled header in the current compilation. These settings can be changed by going into Project Properties > Configuration Properties > Precompiled Headers. The idea here is to use one CPP file as the source for creating the precompiled header, while the rest of the CPP files use the created header file.

First, you need a header file that every source file in your project will include. In VC++ this is typically called "stdafx.h". If you do not have one, create it, along with the corresponding CPP file that contains a single line to include "stdafx.h". Change the rest of the CPP files in your project to include the "stdafx.h" file.
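As an illustration, the conventional file pair might look like the sketch below. The file names are the VC++ defaults; the headers listed are placeholders for whichever large, rarely-changing headers your project actually uses.

```cpp
// stdafx.h -- the precompiled header. Put big, rarely-changing headers
// here (STL, Windows SDK), but nothing from your own project, or the
// whole project rebuilds whenever those project headers change.
#pragma once
#include <windows.h>
#include <string>
#include <vector>
#include <iostream>

// stdafx.cpp -- compiled with /Yc"stdafx.h" to create the .pch file.
#include "stdafx.h"

// any_other.cpp -- compiled with /Yu"stdafx.h"; the include must come first.
#include "stdafx.h"
```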
This include has to be the first include statement (the first non-comment thing the file does). If you forget to include your precompiled header file, you will get the following error message:

```
fatal error C1010: unexpected end of file while looking for precompiled header directive
```

Now, change the project properties to generate / use precompiled headers. Make sure you select "All Configurations" so that your fixes affect both debug and release builds of your project. Open the project properties, go to the C/C++ tab, precompiled headers category. Select "Use precompiled header file" and type in the name of your precompiled header file (stdafx.h). Then select the "stdafx.cpp" properties, go to the C/C++ tab, precompiled headers category, and with "All Configurations" selected, select "Create precompiled header file" and type in the name of your precompiled header file.

The precompiled header file should include the big header files that slow down your builds. Possible candidates are STL header files. The rule of thumb is not to include any header files from your own project, since if you modify those header files, the whole project will be rebuilt. All set now. Try out the build, gather the build statistics, and compare with the original.

In the C and C++ programming languages, #pragma once is a non-standard but widely supported preprocessor directive designed to cause the current source file to be included only once in a single compilation. Using #pragma once instead of include guards will typically increase compilation speed, since it is a higher-level mechanism: the compiler itself can compare filenames without having to invoke the C preprocessor to scan the header for #ifndef and #endif. If the header contains only a guard, the header has to be fully loaded every time it is included, so the preprocessor can scan it for the #ifndef and #endif statements.
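A header carrying both protections might look like the sketch below; the file and macro names are made up for illustration.

```cpp
// widget.h -- hypothetical header combining #pragma once (the fast path on
// compilers that optimize for it) with a classic include guard (portable
// to compilers that do not support the pragma).
#pragma once
#ifndef WIDGET_H
#define WIDGET_H

class Widget {
public:
    int value() const { return 42; }
};

#endif // WIDGET_H
```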
To improve the time and to support all compilers, it is recommended to use both header guards and #pragma once statements in the header files. For an example, take a header file that you use / include many times from your project. Remove the #pragma once from it (if it exists) and compile the project with "Show Includes" enabled, as discussed previously. Check the number of times the included file gets loaded when the project gets compiled.

Visual Studio 2008 can take advantage of systems that have multiple processors, or multiple-core processors. A separate build process is created for each available processor. The /MP option can be given to the project settings in the IDE by going into project properties C/C++ > Command Line > Additional Options. The option takes an optional argument to specify the number of processors / cores. If you omit this argument, the compiler retrieves the number of effective processors on your computer from the operating system and creates a process for each processor. This option does not work if you have used /Yc (Create Precompiled Header File) or Show Includes. The compiler will warn you, but you can simply ignore the warning; otherwise, omit the /MP option for /Yc-enabled source files (stdafx.cpp) and disable the Show Includes option after figuring out the include file fiasco. With this option alone, if you are running your compiler on a quad-core system, you can gain a 3-4 times compile time improvement.

The number of projects that can build concurrently in the IDE is equal to the value of the "maximum number of parallel project builds" property. For example, if you build a solution that comprises several projects while this property is set to 2, then up to two projects will build concurrently at a time. To enable this, go into Tools > Options > Projects and Solutions > Build and Run, and set the "maximum number of parallel project builds" to the number of processors / cores.
http://www.codeproject.com/Articles/304848/How-to-Improve-VCplusplus-Project-Build-Time?fid=1674260&df=90&mpp=10&sort=Position&spc=None&tid=4112813
CC-MAIN-2016-18
refinedweb
1,021
62.07
Linux implements a special virtual filesystem called /proc that stores information about the kernel, kernel data structures, and the state of each process and its associated threads. Figure 2.23 shows the default output of the procinfo command. As would be expected, there is a variety of command-line options for procinfo (check the manual page, $ man 8 procinfo, for specifics). Additionally, while most of the files in /proc are in a special format, many can be displayed by using the command-line cat utility. [13]

[13] Do not be put off by the fact that the majority of the files in /proc show 0 bytes when a long listing is done; keep in mind this is not a true filesystem.

Figure 2.23 Typical procinfo output.

```
linux$ procinfo
Linux 2.4.3-12enterprise (root@porky) (gcc 2.96 20000731 ) #1 2CPU [linux]

Memory:    Total      Used      Free    Shared   Buffers    Cached
Mem:      512928    510436      2492        84     65996    265208
Swap:    1068284       544   1067740

Bootup: Thu Dec 27 12:31:23 2001   Load average: 0.00 0.00 0.00 >1/85 10791

user  :      0:12:34.61   0.0%   page in :  7194848
nice  :      0:00:15.34   0.0%   page out:  1714280
system:      0:16:18.81   0.0%   swap in :        1
idle  : 21d 20:49:43.68  99.9%   swap out:        0
uptime: 10d 22:39:26.21          context : 31669318

irq  0:  94556622 timer        irq  8:         2 rtc
irq  1:      2523 keyboard     irq 12:     15009 PS/2 Mouse
irq  2:         0 cascade [4]  irq 26:  17046596 e100
irq  3:         4              irq 28:        30 aic7xxx
irq  4:   6223833 serial       irq 29:        30 aic7xxx
irq  6:         3              irq 30:    155995 aic7xxx
irq  7:         3              irq 31:    918432 aic7xxx
```

In the /proc filesystem are a variety of data files and subdirectories. A typical /proc filesystem is shown in Figure 2.24.

Figure 2.24 Directory listing of a /proc file system.
```
linux$ ls /proc
1      1083   20706  4    684   9228  dma          loadavg     stat
1025   1084   20719  494  7     9229  driver       locks       swaps
1030   1085   20796  499  704   9230  execdomains  mdstat      sys
10457  1086   20797  5    718   9231  fb           meminfo     sysvipc
10458  19947  20809  511  752   9232  filesystems  misc        tty
10459  2      3      526  758   9233  fs           modules     uptime
1057   20268  32463  6    759   9234  ide          mounts      version
10717  20547  32464  641  765   9235  interrupts   mtrr
10720  20638  32466  653  778   9236  iomem        net
10721  20652  32468  655  780   997   ioports      partitions
10725  20680  32469  656  795   bus   irq          pci
10726  20695  32471  657  807   cmdline  kcore     scsi
10731  20696  32473  658  907   cpuinfo  kmsg      self
10736  20704  32474  669  9227  devices  ksyms     slabinfo
```

Numeric entries, such as 1 or 1025, are process subdirectories for existing processes and contain information specific to the process. Nonnumeric entries, excluding the self entry, hold kernel-related information. At this point, a full presentation of the kernel-related entries in /proc would be a bit premature, as many of them reflect constructs (such as shared memory) that are covered in detail in later chapters of the text. The remaining discussion focuses on the process-related entries in /proc.

The /proc/self file is a pointer (symbolic link) to the ID of the current process. Program 2.10 uses the system call readlink (see Table 2.25) to obtain the current process ID from /proc/self.

Program 2.10 Reading the /proc/self file.

File: p2.10.cxx

```cpp
/*
    Determining Process ID by reading the contents of the
    symbolic link /proc/self
*/
#define _GNU_SOURCE
#include <iostream>
#include <cstdlib>
#include <sys/types.h>
#include <unistd.h>
using namespace std;
const int size = 20;
int main( ){
    pid_t proc_PID, get_PID;
    char buffer[size];
    get_PID = getpid( );
    ssize_t len = readlink("/proc/self", buffer, size - 1);
    buffer[len] = '\0';          // readlink does not null-terminate
    proc_PID = atoi(buffer);
    cout << "getpid     : " << get_PID  << endl;
    cout << "/proc/self : " << proc_PID << endl;
    return 0;
}
```

Table 2.25. Summary of the readlink System Call.
The readlink system call reads the symbolic link referenced by path and stores this data in the location referenced by buf. The bufsiz argument specifies the number of characters to be processed and is most often set to be the size of the location referenced by the buf argument. The readlink system call does not append a null character to its output. If this system call fails, it returns -1 and sets errno; otherwise, it returns the number of characters read. In the case of error, the values that errno can take on are listed in Table 2.26.

A wide array of data on each process is kept by the operating system. This data is found in the /proc directory in a decimal number subdirectory named for the process's ID. Each process subdirectory includes

Table 2.26. readlink Error Messages.

As noted, the cmdline file has the argument list for the process. This same data is passed to the function main as argv. The data is stored as a single character string with a null character separating each entry. On the command line, the tr utility can be used to translate the null characters into newlines to make the contents of the file easier to read. For example, the command-line sequence

linux$ cat /proc/self/cmdline | tr "\000" "\n"
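The same null-to-newline splitting of a cmdline file can be done in a few lines of C++. The helper functions below are an illustrative sketch of our own, not code from the book; the names split_null_separated and read_proc_file are made up for this example:

```cpp
#include <fstream>
#include <iostream>
#include <iterator>
#include <string>
#include <vector>
#include <cassert>
using namespace std;

/* Split a null-separated buffer (the format of /proc/PID/cmdline)
   into the individual argument strings.                           */
vector<string> split_null_separated(const string &data) {
    vector<string> args;
    string::size_type start = 0, end;
    while ((end = data.find('\0', start)) != string::npos) {
        args.push_back(data.substr(start, end - start));
        start = end + 1;
    }
    return args;
}

/* Read the whole contents of a /proc file into a string.
   Note that the 0-byte size reported by ls is irrelevant here;
   the data is generated by the kernel when the file is read.   */
string read_proc_file(const char *path) {
    ifstream in(path, ios::binary);
    return string((istreambuf_iterator<char>(in)),
                  istreambuf_iterator<char>());
}
```

Calling split_null_separated(read_proc_file("/proc/self/cmdline")) yields the argument list of the current process, one entry per argv element.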
https://flylib.com/books/en/1.23.1/the_proc_filesystem.html
On Wed, 6 May 2015 14:06:16 -0300
Eduardo Habkost <address@hidden> wrote:

> On Wed, May 06, 2015 at 06:23:05PM +0200, Michael Mueller wrote:
> > On Wed, 6 May 2015 08:23:32 -0300
> > Eduardo Habkost <address@hidden> wrote:
> [...]
> > > > > >     cpudef_init();
> > > > > >
> > > > > >     if (cpu_model && cpu_desc_avail() && is_help_option(cpu_model)) {
> > > > > >         list_cpus(stdout, &fprintf, cpu_model);
> > > > > >         exit(0);
> > > > > >     }
> > > > > >
> > > > > > That is because the output does not solely depend on static definitions
> > > > > > but also on runtime context. Here the host machine type this instance of
> > > > > > QEMU is running on, at least for the KVM case.
> > > > >
> > > > > Is this a required feature? I would prefer to have the main() code
> > > > > simple even if it means not having runnable information in "-cpu ?" by
> > > > > now (about possible ways to implement this without cpu_desc_avail(),
> > > > > see below).
> > > >
> > > > I think it is more than a desired feature because one might end up with
> > > > a failed CPU object instantiation although the help screen claims the
> > > > CPU model to be valid.
> > >
> > > I think you are more likely to confuse users by not showing information
> > > on "-cpu ?" when -machine is not present. I believe most people use
> > > "-cpu ?" with no other arguments, to see what the QEMU binary is capable
> > > of.
> >
> > I don't disagree with that, both cases are to some extent confusing...
> > But the accelerator makes a big difference and an attentive user should
> > really be aware of that.
> >
> > Also that TCG is the default:
> >
> > $ ./s390x-softmmu/qemu-system-s390x -cpu ?
> > s390 host
> >
> > And I don't see a way to make a user believe that all the defined CPU
> > models are available to a TCG user in the S390 case where most of the
> > CPU facilities are not implemented.
>
> Well, we could simply add a "KVM required" note (maybe just an asterisk beside
> the CPU model description). But maybe we have a reasonable alternative below:
>
> > >
> > > Anyway, whatever we decide to do, I believe we should start with
> > > something simple to get things working, and after that we can look for
> > > ways to improve the help output with "runnable" info.
> >
> > I don't see how to solve this without cpu_desc_avail() or some other
> > comparable mechanism, the aliases e.g. are also dynamic...
>
> What bothers me in cpu_desc_avail() is that it depends on global state that is
> non-trivial (one needs to follow the whole KVM initialization path to find out
> if cpu_desc_avail() will be true or false).
>
> We could instead simply skip the cpu_list() call unconditionally on s390.
> e.g.:
>
> target-s390x/cpu.h:
>     /* Delete the existing cpu_list macro */
>
> cpus.c:
>     int list_cpus(FILE *f, fprintf_function cpu_fprintf, const char *optarg)
>     {
>     #if defined(cpu_list)
>         cpu_list(f, cpu_fprintf);
>         return 1;
>     #else
>         return 0;
>     #endif
>     }
>
> vl.c:
>     if (cpu_model && is_help_option(cpu_model)) {
>         /* zero list_cpus() return value means "-cpu ?" will be
>          * handled later by machine initialization code */
>         if (list_cpus(stdout, &fprintf, cpu_model)) {
>             exit(0);
>         }
>     }

That approach will do the job as well. I will prepare a patch for the next
version. Thanks!

>
> [...]
> > >
> > > About "-cpu ?": do we really want it to depend on -machine processing?
> > > Today, help output shows what the QEMU binary is capable of, not just
> > > what the host system and -machine option are capable of.
> >
> > I think we have to take it into account because the available CPU models
> > might deviate substantially as in the case for S390 for KVM and TCG.
>
> That's true, on s390 the set of available CPU models is very different on both
> cases. It breaks assumptions in the existing "-cpu ?" handling code in main().
>
> >
> > > If we decide to change that assumption, let's do it in a generic way and
> > > not as an arch-specific hack. The options I see are:
> >
> > welcome
> >
> > >
> > > 1) Continue with the current policy where "-cpu ?" does not depend on
> > >    -machine arguments, and show all CPU models on "-cpu ?".
> > > 2) Deciding that, yes, it is OK to make "-cpu ?" depend on -machine
> > >    arguments, and move the list_cpus() call after machine initialization
> > >    inside generic main() code for all arches.
> > > 2.1) We could delay the list_cpus() call inside main() on all cases.
> > > 2.2) We could delay the list_cpus() call inside main() only if
> > >      an explicit -machine option is present.
> > >
> > > I prefer (1) and my second choice would be (2.2), but the main point is
> > > that none of the options above require making s390 special and
> > > introducing cpu_desc_avail().
> >
> > My take here is 2.1 because omitting option -machine is a decision to accept
> > some defaults for machine type and accelerator type already.
>
> The problem with 2.1 is that some machine init functions require that
> additional command-line parameters are set and will abort (e.g. mips
> machines). So we can't do that unconditionally for all architectures.
>
> The proposal above is like 2.1, but conditional: it will delay "-cpu ?"
> handling only on architectures that don't define cpu_list().

perfect.

Michael
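The conditional dispatch Eduardo sketches can be reduced to a tiny standalone C fragment. Everything below is an illustrative stand-in, not QEMU's actual code (the real list_cpus() also takes a fprintf_function and an option string); the point is only the preprocessor pattern:

```c
#include <assert.h>
#include <stdio.h>

/* Pretend this target defines the cpu_list hook, as x86 does; on s390
 * the macro would simply be deleted and list_cpus() would return 0.   */
#define cpu_list demo_cpu_list

static void demo_cpu_list(FILE *f)
{
    fprintf(f, "qemu32\nqemu64\n");   /* a static model list */
}

/* Returns 1 if the listing could be printed immediately, 0 if "-cpu ?"
 * must be deferred to the machine initialization code.                 */
int list_cpus(FILE *f)
{
#if defined(cpu_list)
    cpu_list(f);
    return 1;
#else
    return 0;
#endif
}
```

With the macro defined, list_cpus() prints and returns 1, so main() can exit early; with it deleted, the zero return tells main() to leave "-cpu ?" to machine init.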
http://lists.gnu.org/archive/html/qemu-devel/2015-05/msg01086.html
#include "Coin_C_defines.h"

Version.
Default Cbc_Model constructor.
Cbc_Model Destructor.
Read an mps file from the given filename.
Write an mps file from the given filename.
Integer information.
Copy in integer information.
Drop integer information.
Resizes rim part of model.
Deletes rows.
Add rows.
Deletes columns.
Add columns.
Drops names - makes lengthnames 0 and names empty.
Copies in names.
Number of rows.
Number of columns.
Primal tolerance to use.
Number of rows.
Dual tolerance to use.
Number of rows.
Number of rows.
Number of rows.
Dual objective limit.
Number of rows.
Number of rows.
Fills in array with problem name.
Sets problem name. Must have at end.
Number of iterations.
Number of rows.
Maximum number of iterations.
Number of rows.
Maximum number of nodes.
Number of rows.
Number of rows.
Number of rows.
Maximum time in seconds (from when set called).
Number of rows.
Returns true if hit maximum iterations (or time).
Status of problem: 0 - optimal 1 - primal infeasible 2 - dual infeasible 3 - stopped on iterations etc 4 - stopped due to errors.
Set problem status.
Secondary status of problem - may get extended 0 - none 1 - primal infeasible because dual limit reached 2 - scaled problem optimal - unscaled has primal infeasibilities 3 - scaled problem optimal - unscaled has dual infeasibilities 4 - scaled problem optimal - unscaled has both dual and primal infeasibilities.
Number of rows.
Direction of optimization (1 - minimize, -1 - maximize, 0 - ignore).
Number of rows.
Primal row solution.
Primal column solution.
Dual row solution.
Reduced costs.
Row lower.
Row upper.
Column Lower.
Column Upper.
Number of elements in matrix.
Column starts in matrix.
Row indices in matrix.
Column vector lengths in matrix.
Element values in matrix.
Infeasibility/unbounded ray (NULL returned if none/wrong). Up to user to use delete [] on these arrays.
Number of rows.
See if status array exists (partly for OsiClp).
Return address of status array (char[numberRows+numberColumns]).
Copy in status vector.
User pointer for whatever reason.
Number of rows.
Unset Callback function.
Amount of print out: 0 - none 1 - just final 2 - just factorizations 3 - as 2 plus a bit more 4 - verbose above that 8,16,32 etc just for selective debug.
Length of names (0 means no names).
Fill in array (at least lengthNames+1 long) with a row name.
Fill in array (at least lengthNames+1 long) with a column name.
Sets or unsets scaling: 0 - off, 1 - equilibrium, 2 - geometric, 3 - auto, 4 - dynamic (later).
Gets scalingFlag.
Crash - at present just aimed at dual, returns -2 if dual preferred and crash basis created -1 if dual preferred and all slack basis preferred 0 if basis going in was not all slack 1 if primal preferred and all slack basis preferred 2 if primal preferred and crash basis created. If gap between bounds <= "gap", variables can be flipped. If "pivot" is 0 No pivoting (so will just be choice of algorithm) 1 Simple pivoting e.g. gub 2 Mini iterations
If problem is primal feasible.
If problem is dual feasible.
Dual bound.
If problem is primal feasible.
Infeasibility cost.
If problem is primal feasible.
Perturbation: 50 - switch on perturbation 100 - auto perturb if takes too long (1.0e-6 largest nonzero) 101 - we are perturbed 102 - don't try perturbing again default is 100 others are for playing.
If problem is primal feasible.
Current (or last) algorithm.
Set algorithm.
Sum of dual infeasibilities.
Number of dual infeasibilities.
Sum of primal infeasibilities.
Number of primal infeasibilities.
Save model to file, returns 0 if success. This is designed for use outside algorithms so does not save iterating arrays etc. It does not save any messaging information. Does not save scaling values. It does not know about all types of virtual functions.
Restore model from file, returns 0 if success, deletes current model.
Just check solution (for external use) - sets sum of infeasibilities etc.
Number of rows.
Number of columns.
Number of iterations.
Are there numerical difficulties?
Is optimality proven?
Is primal infeasibility proven?
Is dual infeasibility proven?
Is the given primal objective limit reached?
Is the given dual objective limit reached?
Iteration limit reached?
Direction of optimization (1 - minimize, -1 - maximize, 0 - ignore).
Primal row solution.
Primal column solution.
Number of rows.
Dual row solution.
Reduced costs.
Row lower.
Row upper.
Column Lower.
Column Upper.
Print the model.
Determine whether the variable at location i is integer restricted.
Return CPU time.
Number of nodes explored in B&B tree.
Return a copy of this model.
Set this variable to be continuous.
Add SOS constraints to the model using dense matrix.
Add SOS constraints to the model using row-order matrix.
Delete all object information.
Print the solution.
Dual initial solve.
Primal initial solve.
Dual algorithm - see ClpSimplexDual.hpp for method.
Primal algorithm - see ClpSimplexPrimal.hpp for method.
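Several of the accessors above (number of elements, column starts, row indices, column vector lengths, element values) describe a column-ordered sparse matrix. The following standalone snippet (our own code, not part of the Cbc C interface itself) shows how a small 2x3 constraint matrix is packed in that layout:

```c
/* Column-ordered (CSC) packing of the 2x3 matrix
 *     | 1 0 2 |
 *     | 0 3 4 |
 * one "start" offset per column plus a final sentinel, and a row index
 * and value per nonzero. Standalone illustration only.                 */
#include <assert.h>

enum { NUM_COLS = 3, NUM_ELEMENTS = 4 };

static const int    starts[NUM_COLS + 1]       = { 0, 1, 2, 4 };
static const int    row_indices[NUM_ELEMENTS]  = { 0, 1, 0, 1 };
static const double elements[NUM_ELEMENTS]     = { 1.0, 3.0, 2.0, 4.0 };

/* Length of column j, as the "column vector lengths" accessor reports it. */
int column_length(int j)
{
    return starts[j + 1] - starts[j];
}
```

The same three arrays (starts, indices, elements) are what the matrix accessors listed above return for a loaded model.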
http://www.coin-or.org/Doxygen/CoinAll/_cbc___c___interface_8h.html
Server Rendering with React and React Router
Tyler McGinnis
Mar 12

import React, { Component } from 'react'

class App extends Component {
  render() {
    return (
      <div>
        Hello World
      </div>
    )
  }
}

export default App

Now, when you visit localhost:3000 you should see "Hello World". That "Hello World" was initially rendered on the server, then when it got to the client and the bundle.js file loaded, React took over. Cool. Also, anticlimactic. Let's mix things up a bit so we can really see how this works. What if instead of rendering "Hello World", we wanted App to render Hello {this.props.data}? That's a simple enough change inside of App.js

class App extends Component {
  render() {
    return (
      <div>
        Hello {this.props.data}
      </div>
    )
  }
}

const markup = renderToString(
  <App data={data} />
)

res.send(`
  <!DOCTYPE html>
  <html>
    <head>
      <title>SSR with RR</title>
      <script src="/bundle.js" defer></script>

Try it out in your browser. Head to localhost:3000/popular/javascript. You'll notice that the most popular JavaScript repos are being requested. You can change the language to any language. Next, let's update browser/index.js since that's where we're rendering App.

import React from 'react'
import { hydrate } from 'react-dom'
import App from '../shared/App'
import { BrowserRouter } from 'react-router-dom'

👌👌👌. I promise you we're so close. One. More. Problem.

This was originally published at TylerMcGinnis.com and is part of their React Router course.

These posts that detail the setup are great, especially with constantly changing APIs like webpack version 4 and react-dom's hydrate versus render. I really love getting server-side rendering to work, it's just so magical to be able to use the app's component code on both the client and server side. And it's great for sending over pre-rendered markup, which eliminates much of the initial wait time while the app is fetching data.
However, it's important to note that the performance gain is mostly client-side. Enabling server-side rendering creates much more work for the server. The server becomes responsible (at least for the first load) for fetching the data to hydrate the initial state, in addition to rendering the markup. The performance cost and complexity are even more pronounced for web apps with non-Node.js backends. Most server-side frameworks have packages that let you render JavaScript on the backend, but they are typically bindings to the V8 JavaScript engine, which is a significant overhead to consider.

It would be interesting to see a post that describes a good strategy for splitting the routes in a react-router-powered React app between those that need server rendering and those that do not (like public articles/posts that should be indexed versus a private profile administration page behind an auth wall).

Agree. Great comment.

Great :D
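The data handoff that both the article and the comments discuss can be sketched without any framework. The renderPage function and the window.__INITIAL_DATA__ global below are our own illustrative names, not the article's code: the server embeds the pre-fetched data in the HTML it sends, and the client reads it back as initial props before calling hydrate():

```javascript
// Server side: embed pre-fetched data in the page so the client can
// hydrate without refetching. Framework-free illustrative sketch.
function renderPage(markup, data) {
  return `<!DOCTYPE html>
<html>
  <head>
    <title>SSR with RR</title>
    <script>window.__INITIAL_DATA__ = ${JSON.stringify(data)}</script>
    <script src="/bundle.js" defer></script>
  </head>
  <body>
    <div id="app">${markup}</div>
  </body>
</html>`
}

// The client side would then read window.__INITIAL_DATA__ as the
// initial props before calling hydrate() on the #app element.
```

Note that because the markup and the serialized data travel in the same response, the first paint never has to wait for a second round trip.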
https://dev.to/tylermcginnis/server-rendering-with-react-and-react-router--48i2
Weather station based on ESP32 and MicroPython

In one of the previous posts I briefly described sending data to Google Sheets from an ESP32 board using MicroPython. As I mentioned earlier, the code is available on GitHub. Here are the main features:

- Measuring temperature and humidity with a DHT22 sensor.
- Sending data to a Google Sheet.
- Authentication via the Google OAuth 2.0 service to get access to the sheet.
- Configuring the device via a web browser.

The Google Sheet doesn't need to be publicly available on the Internet. The device doesn't require any middleman such as PushingBox or IFTTT. In this post, let's focus a bit on technical details.

Hardware components

The device is based on an ESP32 board. For the sake of simplicity, let's use an ESP32 development board and a mini breadboard. Here is a full list of components:

- ESP32 development board
- Breadboard
- DHT22 sensor
- Switch
- Resistor 10 KOhm
- Resistor 22 KOhm

We'll also need a USB cable for communicating with the board, and also some wires. Normally, most ESP32 development boards don't require an additional power supply. They can be just powered by USB. That's pretty much it.

Circuit

The circuit is pretty simple. We can see that the DHT22 sensor is connected to the D23 pin of the ESP32 board. R3 is a pull-up resistor. Note that your DHT22 sensor may already have a pull-up resistor, so that you don't need to add the resistor R3 (for example, DHT22 sensors from most sensor kits already have it). The switch S1 is used to turn on the configuration mode. In this mode, the device sets up a Wi-Fi access point, and starts a web server which offers to update the device's configuration. Here is a breadboard view:

Project structure

The code repository can be cloned as usual with the git clone command below. Note that you'll need to check out version 1.0.0 because the master branch contains further updates and improvements that are not covered in this post (feel free to check them out as well!).
git clone
cd esp32-weather-google-sheets
git checkout 1.0.0

The source code is located in the src directory:

- src/config.py contains a Config class which holds the device's configuration.
- src/main.py is the main script which runs on the device when it starts.
- src/ntp.py contains an NTP client which is based on ntptime.py
- src/settings.py implements the configuration mode; it contains a web form for updating the settings, and a handler for HTTP requests.
- src/util.py provides several useful functions for rebooting, starting a Wi-Fi access point and so on.
- src/weather.py is responsible for measuring temperature and humidity with the DHT22 sensor.
- src/http/server.py provides a simple HTTP server.
- src/google/auth.py implements authentication with the Google OAuth2 service.
- src/google/sheets.py provides a Spreadsheet class which allows inserting rows into a Google sheet.
- The src/rsa directory contains an implementation of RSA signing which is based on the python-rsa package.

Besides the source code, the project also contains the following:

- esp32-20190529-v1.11.bin is a MicroPython 1.11 firmware. Newer or older versions may also work.
- main.conf holds the device's configuration, which may be edited before uploading the file to the device, or later via the configuration mode.
- The scripts directory provides a number of helpful scripts for flashing the firmware, uploading code and so on.

We'll also need the following tools:

- esptool for deploying MicroPython to our ESP32 development board
- mpfshell for uploading files to the board
- minicom for connecting to the board for debugging purposes
- openssl and the python-rsa package for converting cryptographic keys
- Python 3 for running esptool and mpfshell

The instructions below have been only tested on Ubuntu 16.10.

Google authentication with OAuth2

To write data to a Google sheet, the project calls the Google Sheets API. To make a successful call to the API, we need an OAuth2 token provided by the Google OAuth2 service. Let's discuss how it can be done.
First, we need to create a project and a service account in the Google IAM console. Follow these instructions from Google. Once it's done, you're going to have an email for your service account like your-service-account@your-project.iam.gserviceaccount.com.

Then, we need to create a key for the service account. Download the key and save it to a google_key.json file in the root of the project. The downloaded key is an RSA private key. The key is used by the ServiceAccount class from src/google/auth.py to obtain an OAuth2 token from the Google OAuth2 service. The ServiceAccount.token() method calls JWTBuilder.build(), which builds a JWT and signs it with the private key. The JWTBuilder class needs the current time to put into the JWT. The class uses an NTP client to get the current time:

def build(self):
    time = ntp.time()
    print('jwt: time: %d' % time)
    self._claim['iat'] = time
    self._claim['exp'] = time + self._expiration
    encoded_header = encode_dict_to_base64(self._header)
    encoded_claim = encode_dict_to_base64(self._claim)
    to_be_signed = '%s.%s' % (encoded_header, encoded_claim)
    signature = pkcs1.sign(to_be_signed.encode('utf8'), self._key, 'SHA-256')
    encoded_signature = encode_bytes_to_safe_base64(signature)
    return '%s.%s' % (to_be_signed, encoded_signature)

The ServiceAccount class then sends the JWT to the Google OAuth2 Authorization Server, which returns a token for accessing the Sheets API.
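The header/claim encoding that build() performs can be reproduced with just the standard library. The sketch below is our own (the signing step is omitted, so the result is an unsigned two-part token, not a valid Google credential); it shows how the two base64url-encoded JSON parts are joined:

```python
import base64
import json
import time

def encode_dict_to_base64(d):
    """URL-safe base64 of a JSON-encoded dict, '=' padding stripped,
    as the JWT format requires."""
    raw = json.dumps(d, separators=(',', ':')).encode('utf8')
    return base64.urlsafe_b64encode(raw).rstrip(b'=').decode('ascii')

def build_unsigned_jwt(email, scope, expiration=3600):
    """Build header.claim only; the real device appends an
    RSA-SHA256 signature as a third dot-separated part."""
    header = {'alg': 'RS256', 'typ': 'JWT'}
    now = int(time.time())
    claim = {'iss': email, 'scope': scope,
             'iat': now, 'exp': now + expiration}
    return '%s.%s' % (encode_dict_to_base64(header),
                      encode_dict_to_base64(claim))
```

Decoding either part with base64.urlsafe_b64decode (after restoring the padding) yields the original JSON back, which is handy when debugging what the device actually sends.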
def token(self):
    builder = JWTBuilder()
    builder.service_account(self._email)
    builder.scope(self._scope)
    builder.key(self._key)
    jwt = builder.build()
    type = 'urn%3Aietf%3Aparams%3Aoauth%3Agrant-type%3Ajwt-bearer'
    body = 'grant_type=%s&assertion=%s' % (type, jwt)
    headers = {'Content-Type': 'application/x-www-form-urlencoded'}
    response = requests.post('', data=body, headers=headers)
    if not response:
        print('token: no response received')
    return response.json()['access_token']

The JWTBuilder class uses the implementation of the RSA algorithm from the src/rsa directory, which contains a truncated version of the python-rsa package adapted for MicroPython on ESP32. This implementation supports only RSA signing. Unfortunately, it doesn't even support loading RSA keys from PKCS1, since that would require porting the pyasn1 package to ESP32, which may be too big for tiny ESP32 microcontrollers. Instead, the truncated RSA implementation can load a private key from a JSON file which contains the q, e, d, p and n numbers. Therefore, the downloaded keys need to be converted to the format which is recognized by the truncated RSA implementation. It can be done with the scripts/extract_key.sh script:

cd scripts
sh extract_key.sh ../google_key.json ../key.json

The key.json file needs to be put on the device, which is described below. The original key and the key.json file should be kept secret.

Creating a Google sheet

It's time to create and configure a Google sheet which we'll use to store temperature and humidity. Simply create a new sheet, and then share it with the service account your-service-account@your-project.iam.gserviceaccount.com which was created earlier. The sheet doesn't need to be publicly available on the Internet. In the near future, we'll need the ID of the created sheet.
The ID looks like a hash, and can be found in the URL of the sheet: {sheet_id}/edit#gid=0

Calling the Google Sheets API

src/google/sheet.py contains a Spreadsheet class which calls the following Sheets API endpoint to insert a row into the specified Google sheet:

{sheet_id}/values/${range}:append?insertDataOption=INSERT_ROWS&valueInputOption=RAW

A row contains three values:

- Column A contains a current timestamp which is obtained from an NTP server.
- Column B contains temperature.
- Column C contains humidity.

The class is configured with an instance of ServiceAccount which provides an OAuth2 token for accessing the Google Sheets API.

Configuring the device

The main.conf file holds a configuration for the device. It's just a JSON file which looks like the following:

{
  "error_handling": "ignore",
  "ssid": "",
  "password": "",
  "config_mode_switch_pin": 22,
  "dht22_pin": 23,
  "measurement_interval": "1h",
  "google_service_account_email": "",
  "google_sheet_id": ""
}

The configuration is loaded by the Config class from src/config.py. Let's discuss what the parameters above mean.

error_handling defines how the device handles errors which may occur in the main loop at the end of src/main.py. The parameter can be set to ignore, reboot or stop. If error_handling is set to reboot, then the device is going to reboot if an exception was thrown. If the parameter is set to stop, then the device is going to exit from the main loop and stop. If the parameter is set to ignore, then all exceptions are going to be ignored, and the main loop should never stop. Exceptions may occur for various reasons, for example networking issues or some intermittent problems in our code or in MicroPython libraries. The ignore value sounds like the best choice if we want the device to be fault tolerant. The default value for the error_handling parameter is ignore.

The ssid and password parameters contain the SSID and password for a Wi-Fi network.
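Loading such a JSON configuration with defaults takes only a few lines of standard-library Python. The class below is a minimal stand-in of our own for illustration; the project's actual Config class has more features (such as loading the private RSA key):

```python
import json

DEFAULTS = {'error_handling': 'ignore'}

class Config:
    """Minimal illustrative stand-in for the project's Config class:
    merges values from a JSON file (or a dict) over the defaults."""

    def __init__(self, filename=None, data=None):
        self._values = dict(DEFAULTS)
        if filename is not None:
            with open(filename) as f:
                data = json.load(f)
        if data:
            self._values.update(data)

    def get(self, key):
        # Return None for unknown keys instead of raising,
        # which keeps the callers in main.py simple.
        return self._values.get(key)
```

With this shape, config.get('error_handling') always yields a usable value even when main.conf omits the key, which is why ignore can act as the default strategy.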
The config_mode_switch_pin and dht22_pin parameters contain the pin numbers which are used to connect the DHT22 sensor and the switch S1.

The measurement_interval parameter defines how often the device should measure temperature and humidity. The format is Xh Ym Zs. For example, 1h 2m 3s means one hour, two minutes and three seconds.

The google_service_account_email and google_sheet_id parameters define the service account email and the ID of the spreadsheet.

There are two ways to configure the device. The first way is to edit the main.conf file before uploading it to the device, which is discussed below. The second way is to use the configuration mode. Once the code is uploaded to the device, we can press the S1 switch, which reboots the device and turns on the configuration mode. In this mode, the device starts a Wi-Fi access point which is called esp32-weather-google-sheets. The password is helloesp32. The device then runs an HTTP server that provides a web form to set the parameters described above.

Bringing all together

main.py brings all the components together. First, it loads the configuration from main.conf. Next, it creates an instance of the ServiceAccount class which is going to be used to obtain an OAuth2 token for writing data to a sheet. Then, it creates an instance of Spreadsheet which is used to write data to the sheet. And finally, it initializes the DHT22 sensor.

sa = ServiceAccount()
sa.email(config.get('google_service_account_email'))
sa.scope('')
sa.private_rsa_key(config.private_rsa_key())

spreadsheet = Spreadsheet()
spreadsheet.set_service_account(sa)
spreadsheet.set_id(config.get('google_sheet_id'))
spreadsheet.set_range('A:A')

weather = Weather(config.get('dht22_pin'),
                  config.get('measurement_interval'),
                  weather_handler)

Then, main.py creates a handler which takes temperature and humidity and writes them to a sheet.
class WeatherHandler:

    def __init__(self, spreadsheet):
        self.spreadsheet = spreadsheet

    def handle(self, t, h):
        print('temperature = %.2f' % t)
        print('humidity = %.2f' % h)
        self.spreadsheet.append_values([t, h])

weather_handler = WeatherHandler(spreadsheet)

Next, it initializes the switch which turns on the configuration mode. When the switch changes its state, the board reboots immediately and starts an access point with an HTTP server. If the configuration mode is disabled, the device tries to connect to Wi-Fi.

config_mode_switch = Pin(config.get('config_mode_switch_pin'), Pin.IN)
config_mode_switch.irq(lambda pin: util.reboot())

if config_mode_switch.value() == 1:
    from http.server import HttpServer
    from settings import ConnectionHandler
    print('enabled configuration mode')
    access_point = util.start_access_point(ACCESS_POINT_SSID,
                                           ACCESS_POINT_PASSWORD)
    handler = ConnectionHandler(config)
    ip = access_point.ifconfig()[0]
    HttpServer(ip, 80, handler).start()
    util.reboot()

util.connect_to_wifi(config.get('ssid'), config.get('password'))

In the end, main.py starts the main loop, which checks whether it's time to measure temperature and humidity, and handles exceptions according to the specified error-handling strategy.

while True:
    try:
        weather.check()
    except:
        if config.get('error_handling') == 'reboot':
            print('achtung! something wrong happened! rebooting ...')
            util.reboot()
        elif config.get('error_handling') == 'stop':
            raise
        else:
            print('achtung! something wrong happened! but ignoring ...')

    time.sleep(1)  # in seconds
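The Xh Ym Zs format used by measurement_interval can be parsed into a number of seconds with a few lines of standard-library Python. This is a sketch of the idea (our own parse_interval, not the project's actual parser):

```python
import re

_UNITS = {'h': 3600, 'm': 60, 's': 1}

def parse_interval(text):
    """Parse an 'Xh Ym Zs' string (any subset of the parts, in any
    order) into a total number of seconds. Illustrative sketch."""
    total = 0
    for value, unit in re.findall(r'(\d+)\s*([hms])', text):
        total += int(value) * _UNITS[unit]
    return total
```

The Weather class can then compare time elapsed since the last measurement against this number of seconds on every pass through the main loop.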
It can be done with a command like the following, or by just running scripts/erase.sh:

esptool.py --chip esp32 --port /dev/ttyUSB0 erase_flash

The command above is going to print out something like the following:

Erasing flash (this may take a while)...
Chip erase completed successfully in 7.4s
Hard resetting via RTS pin...

Note that before erasing, flashing or verifying firmware, the board may need to be switched to the flash mode. It can normally be done by pressing the RST button while holding the EN button. However, some ESP32 devices allow deploying firmware without pressing any button on the board, which is quite convenient. esptool may require root access, so you may need to run the command above with sudo. Or, you can add your user to the dialout group. The details can be found here.

Then, we can deploy the MicroPython firmware to our ESP32 board. Run a command like the one below, or just call scripts/flash.sh:

esptool.py \
    --chip esp32 \
    --port /dev/ttyUSB0 \
    --baud 460800 \
    write_flash -z 0x1000 esp32-20190529-v1.11.bin

In case of successful deployment, it's going to print out something like this:

Compressed 1146864 bytes to 717504...
Wrote 1146864 bytes (717504 compressed) at 0x00001000 in 16.6 seconds (effective 552.7 kbit/s)...
Hash of data verified.
Leaving...
Hard resetting via RTS pin...

To make sure everything went fine, the deployed firmware can be verified with the following command, which is also available in scripts/verify.sh:

esptool.py \
    --chip esp32 \
    --port /dev/ttyUSB0 \
    --baud 460800 \
    verify_flash 0x1000 esp32-20190529-v1.11.bin

If the verification succeeds, the output is going to look like the following:

Verifying 0x117ff0 (1146864) bytes @ 0x00001000 in flash against esp32-20190529-v1.11.bin...
-- verify OK (digest matched)
Hard resetting via RTS pin...

Uploading code to ESP32

Once we have MicroPython on our ESP32 board, we can upload the code. There are several ways to do that, but we're going to use mpfshell.
The following command uploads the code from the src directory, the private RSA key and main.conf to the board:

mpfshell \
    -n -c \
    "open ttyUSB0; put main.conf; put key.json; \
     lcd src; mput .*\.py; \
     md rsa; cd rsa; lcd rsa; mput .*\.py; \
     cd ..; md google; cd google; lcd ..; lcd google; mput .*\.py"

It can also be done by calling scripts/upload.sh. If everything goes well, mpfshell is going to print something like the following:

Connected to esp32

put util.py
put weather.py
put ntp.py
put config.py
put settings.py
put main.py
put machine_size.py
put __init__.py
put transform.py
put _compat.py
put core.py
put pkcs1.py
put common.py
put key.py
put __init__.py
put client.py
put core.py
put server.py
put auth.py
put __init__.py
put sheet.py

Conclusion

In this project, Google Sheets is used as a simple database to store temperature and humidity. The data can be easily accessed and shared with others; furthermore, we can draw charts and perform basic data analysis using the features of Google Spreadsheets. This is a pretty simple weather station, and there is room for improvement. The project uses only two pins of the ESP32, but the chip provides many more, which can be used to add more features to the device. For example, here is a couple of sensors which may extend this simple weather station:

Other pins can be used to control other devices. For example, the weather station can open a window and turn on a ventilator when the CO2 level is high, or it can turn on an AC when it's getting too hot.

References

Originally published at on July 21, 2019.
https://artem-smotrakov.medium.com/weather-station-based-on-esp32-and-micropython-7df014ca2b83
A namespace wraps a global system resource in an abstraction that makes it appear to the processes within the namespace that they have their own isolated instance of the resource. Namespaces are used for a variety of purposes, with the most notable being the implementation of containers, a technique for lightweight virtualization. This is the second part in a series of articles that looks in some detail at namespaces and the namespaces API. The first article in this series provided an overview of namespaces. This article looks at the namespaces API in some detail and shows the API in action in a number of example programs. The namespace API consists of three system calls—clone(), unshare(), and setns()—and a number of /proc files. In this article, we'll look at all of these system calls and some of the /proc files. In order to specify a namespace type on which to operate, the three system calls make use of the CLONE_NEW* constants listed in the previous article: CLONE_NEWIPC, CLONE_NEWNS, CLONE_NEWNET, CLONE_NEWPID, CLONE_NEWUSER, and CLONE_NEWUTS. One way of creating a namespace is via the use of clone(), a system call that creates a new process. For our purposes, clone() has the following prototype:

int clone(int (*child_func)(void *), void *child_stack, int flags, void *arg);

Essentially, clone() is a more general version of the traditional UNIX fork() system call whose functionality can be controlled via the flags argument. In all, there are more than twenty different CLONE_* flags that control various aspects of the operation of clone(), including whether the parent and child process share resources such as virtual memory, open file descriptors, and signal dispositions. If one of the CLONE_NEW* bits is specified in the call, then a new namespace of the corresponding type is created, and the new process is made a member of that namespace; multiple CLONE_NEW* bits can be specified in flags.
Our example program (demo_uts_namespaces.c) uses clone() with the CLONE_NEWUTS flag to create a UTS namespace. As we saw last week, UTS namespaces isolate two system identifiers—the hostname and the NIS domain name—that are set using the sethostname() and setdomainname() system calls and returned by the uname() system call. You can find the full source of the program here. Below, we'll focus on just some of the key pieces of the program (and for brevity, we'll omit the error-checking code that is present in the full version of the program).

The example program takes one command-line argument. When run, it creates a child that executes in a new UTS namespace. Inside that namespace, the child changes the hostname to the string given as the program's command-line argument.

The first significant piece of the main program is the clone() call that creates the child process:

    child_pid = clone(childFunc,
                      child_stack + STACK_SIZE,   /* Points to start of
                                                     downwardly growing stack */
                      CLONE_NEWUTS | SIGCHLD, argv[1]);
    printf("PID of child created by clone() is %ld\n", (long) child_pid);

The new child will begin execution in the user-defined function childFunc(); that function will receive the final clone() argument (argv[1]) as its argument. Since CLONE_NEWUTS is specified as part of the flags argument, the child will execute in a newly created UTS namespace.

The main program then sleeps for a moment. This is a (crude) way of giving the child time to change the hostname in its UTS namespace.
The program then uses uname() to retrieve the hostname in the parent's UTS namespace, and displays that hostname:

    sleep(1);           /* Give child time to change its hostname */

    uname(&uts);
    printf("uts.nodename in parent: %s\n", uts.nodename);

Meanwhile, the childFunc() function executed by the child created by clone() first changes the hostname to the value supplied in its argument, and then retrieves and displays the modified hostname:

    sethostname(arg, strlen(arg));

    uname(&uts);
    printf("uts.nodename in child:  %s\n", uts.nodename);

Before terminating, the child sleeps for a while. This has the effect of keeping the child's UTS namespace open, and gives us a chance to conduct some of the experiments that we show later. Running the program demonstrates that the parent and child processes have independent UTS namespaces:

    $ su                            # Need privilege to create a UTS namespace
    # uname -n
    antero
    # ./demo_uts_namespaces bizarro
    PID of child created by clone() is 27514
    uts.nodename in child:  bizarro
    uts.nodename in parent: antero

As with most other namespaces (user namespaces are the exception), creating a UTS namespace requires privilege (specifically, CAP_SYS_ADMIN). This is necessary to avoid scenarios where set-user-ID applications could be fooled into doing the wrong thing because the system has an unexpected hostname.

Another possibility is that a set-user-ID application might be using the hostname as part of the name of a lock file. If an unprivileged user could run the application in a UTS namespace with an arbitrary hostname, this would open the application to various attacks. Most simply, this would nullify the effect of the lock file, triggering misbehavior in instances of the application that run in different UTS namespaces. Alternatively, a malicious user could run a set-user-ID application in a UTS namespace with a hostname that causes creation of the lock file to overwrite an important file. (Hostname strings can contain arbitrary characters, including slashes.)
Each process has a /proc/PID/ns directory that contains one file for each type of namespace. Starting in Linux 3.8, each of these files is a special symbolic link that provides a kind of handle for performing certain operations on the associated namespace for the process.

    $ ls -l /proc/$$/ns         # $$ is replaced by shell's PID
    total 0
    lrwxrwxrwx. 1 mtk mtk 0 Jan  8 04:12 ipc -> ipc:[4026531839]
    lrwxrwxrwx. 1 mtk mtk 0 Jan  8 04:12 mnt -> mnt:[4026531840]
    lrwxrwxrwx. 1 mtk mtk 0 Jan  8 04:12 net -> net:[4026531956]
    lrwxrwxrwx. 1 mtk mtk 0 Jan  8 04:12 pid -> pid:[4026531836]
    lrwxrwxrwx. 1 mtk mtk 0 Jan  8 04:12 user -> user:[4026531837]
    lrwxrwxrwx. 1 mtk mtk 0 Jan  8 04:12 uts -> uts:[4026531838]

One use of these symbolic links is to discover whether two processes are in the same namespace. The kernel does some magic to ensure that if two processes are in the same namespace, then the inode numbers reported for the corresponding symbolic links in /proc/PID/ns will be the same. The inode numbers can be obtained using the stat() system call (in the st_ino field of the returned structure).

However, the kernel also constructs each of the /proc/PID/ns symbolic links so that it points to a name consisting of a string that identifies the namespace type, followed by the inode number. We can examine this name using either the ls -l or the readlink command.

Let's return to the shell session above where we ran the demo_uts_namespaces program.
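The same-namespace check described above can be scripted directly from the shell. The sketch below (assuming a Linux 3.8 or later system with /proc mounted) compares the UTS namespace link of the current shell with that of an ordinary child shell; since a plain fork/exec child inherits its parent's namespaces, the two links should resolve to the same name:

```shell
# Read the UTS namespace handle of the current shell.
parent_ns=$(readlink /proc/$$/ns/uts)

# Read the handle of a child shell; inside the single quotes, $$ expands
# to the child's own PID. The child inherits the parent's namespaces.
child_ns=$(sh -c 'readlink "/proc/$$/ns/uts"')

if [ "$parent_ns" = "$child_ns" ]; then
    echo "same UTS namespace: $parent_ns"
else
    echo "different UTS namespaces"
fi
```

A child created with clone() and CLONE_NEWUTS, by contrast, would show a different uts:[...] value, as in the readlink output later in the article.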
Looking at the /proc/PID/ns symbolic links for the parent and child process provides an alternative method of checking whether the two processes are in the same or different UTS namespaces:

    ^Z                                  # Stop parent and child
    [1]+  Stopped          ./demo_uts_namespaces bizarro
    # jobs -l                           # Show PID of parent process
    [1]+ 27513 Stopped     ./demo_uts_namespaces bizarro
    # readlink /proc/27513/ns/uts       # Show parent UTS namespace
    uts:[4026531838]
    # readlink /proc/27514/ns/uts       # Show child UTS namespace
    uts:[4026532338]

As can be seen, the content of the /proc/PID/ns/uts symbolic links differs, indicating that the two processes are in different UTS namespaces.

The /proc/PID/ns symbolic links also serve other purposes. If we open one of these files, then the namespace will continue to exist as long as the file descriptor remains open, even if all processes in the namespace terminate. The same effect can also be obtained by bind mounting one of the symbolic links to another location in the file system:

    # touch ~/uts                            # Create mount point
    # mount --bind /proc/27514/ns/uts ~/uts

Before Linux 3.8, the files in /proc/PID/ns were hard links rather than special symbolic links of the form described above. In addition, only the ipc, net, and uts files were present.

Keeping a namespace open when it contains no processes is of course only useful if we intend to later add processes to it. That is the task of the setns() system call, which allows the calling process to join an existing namespace:

    int setns(int fd, int nstype);

More precisely, setns() disassociates the calling process from one instance of a particular namespace type and reassociates the process with another instance of the same namespace type. The fd argument specifies the namespace to join; it is a file descriptor that refers to one of the symbolic links in a /proc/PID/ns directory. That file descriptor can be obtained either by opening one of those symbolic links directly or by opening a file that was bind mounted to one of the links.
The nstype argument allows the caller to check the type of namespace that fd refers to. If this argument is specified as zero, no check is performed. This can be useful if the caller already knows the namespace type, or does not care about the type. The example program that we discuss in a moment (ns_exec.c) falls into the latter category: it is designed to work with any namespace type. Specifying nstype instead as one of the CLONE_NEW* constants causes the kernel to verify that fd is a file descriptor for the corresponding namespace type. This can be useful if, for example, the caller was passed the file descriptor via a UNIX domain socket and needs to verify what type of namespace it refers to.

Using setns() and execve() (or one of the other exec() functions) allows us to construct a simple but useful tool: a program that joins a specified namespace and then executes a command in that namespace.

Our program (ns_exec.c, whose full source can be found here) takes two or more command-line arguments. The first argument is the pathname of a /proc/PID/ns/* symbolic link (or a file that is bind mounted to one of those symbolic links). The remaining arguments are the name of a program to be executed inside the namespace that corresponds to that symbolic link and optional command-line arguments to be given to that program. The key steps in the program are the following:

    fd = open(argv[1], O_RDONLY);   /* Get descriptor for namespace */

    setns(fd, 0);                   /* Join that namespace */

    execvp(argv[2], &argv[2]);      /* Execute a command in namespace */

An interesting program to execute inside a namespace is, of course, a shell.
We can use the bind mount for the UTS namespace that we created earlier in conjunction with the ns_exec program to execute a shell in the new UTS namespace created by our invocation of demo_uts_namespaces:

    # ./ns_exec ~/uts /bin/bash     # ~/uts is bound to /proc/27514/ns/uts
    My PID is: 28788

We can then verify that the shell is in the same UTS namespace as the child process created by demo_uts_namespaces, both by inspecting the hostname and by comparing the inode numbers of the /proc/PID/ns/uts files:

    # hostname
    bizarro
    # readlink /proc/27514/ns/uts
    uts:[4026532338]
    # readlink /proc/$$/ns/uts      # $$ is replaced by shell's PID
    uts:[4026532338]

In earlier kernel versions, it was not possible to use setns() to join mount, PID, and user namespaces, but, starting with Linux 3.8, setns() now supports joining all namespace types.

The final system call in the namespaces API is unshare():

    int unshare(int flags);

The unshare() system call provides functionality similar to clone(), but operates on the calling process: it creates the new namespaces specified by the CLONE_NEW* bits in its flags argument and makes the caller a member of the namespaces. (As with clone(), unshare() provides functionality beyond working with namespaces that we'll ignore here.) The main purpose of unshare() is to isolate namespace (and other) side effects without having to create a new process or thread (as is done by clone()). Leaving aside the other effects of the clone() system call, a call of the form:

    clone(..., CLONE_NEWXXX, ....);

is roughly equivalent, in namespace terms, to the sequence:

    if (fork() == 0)
        unshare(CLONE_NEWXXX);      /* Executed in the child process */

One use of the unshare() system call is in the implementation of the unshare command, which allows the user to execute a command in a separate namespace from the shell.
The general form of this command is:

    unshare [options] program [arguments]

The options are command-line flags that specify the namespaces to unshare before executing program with the specified arguments. The key steps in the implementation of the unshare command are straightforward:

    /* Code to initialize 'flags' according to command-line options omitted */

    unshare(flags);

    /* Now execute 'program' with 'arguments'; 'optind' is the index
       of the next command-line argument after options */

    execvp(argv[optind], &argv[optind]);

A simple implementation of the unshare command (unshare.c) can be found here.

In the following shell session, we use our unshare.c program to execute a shell in a separate mount namespace. As we noted in last week's article, mount namespaces isolate the set of filesystem mount points seen by a group of processes, allowing processes in different mount namespaces to have different views of the filesystem hierarchy.

    # echo $$                             # Show PID of shell
    8490
    # cat /proc/8490/mounts | grep mq     # Show one of the mounts in namespace
    mqueue /dev/mqueue mqueue rw,seclabel,relatime 0 0
    # readlink /proc/8490/ns/mnt          # Show mount namespace ID
    mnt:[4026531840]
    # ./unshare -m /bin/bash              # Start new shell in separate mount namespace
    # readlink /proc/$$/ns/mnt            # Show mount namespace ID
    mnt:[4026532325]

Comparing the output of the two readlink commands shows that the two shells are in separate mount namespaces. Altering the set of mount points in one of the namespaces and checking whether that change is visible in the other namespace provides another way of demonstrating that the two programs are in separate namespaces:

    # umount /dev/mqueue                  # Remove a mount point in this shell
    # cat /proc/$$/mounts | grep mq       # Verify that mount point is gone
    # cat /proc/8490/mounts | grep mq     # Is it still present in the other namespace?
    mqueue /dev/mqueue mqueue rw,seclabel,relatime 0 0

As can be seen from the output of the last two commands, the /dev/mqueue mount point has disappeared in one mount namespace, but continues to exist in the other.

In this article we've looked at the fundamental pieces of the namespace API and how they are employed together. In the follow-on articles, we'll look in more depth at some other namespaces, in particular, the PID and user namespaces; user namespaces open up a range of new possibilities for applications to use kernel interfaces that were formerly restricted to privileged applications. (2013-01-15: updated the concluding remarks to reflect the fact that there will be more than one following article.)

Brief items

This is the second high-profile compromise of a Moin-based wiki reported recently; anybody running such a site should be sure they are current with their security patches.

Richard Biener has posted another report on the status of GCC 4.8.0 development. Noting that the code had "stabilized itself over the holidays," stage 3 of the development cycle is over, so "GCC trunk is now in release branch mode, thus only regression fixes and documentation changes are allowed now."

Version 3.6.3 of GNU Radio has been released. This release adds "major new capabilities and many bug fixes, while maintaining strict source compatibility with user code already written for the 3.6 API." Enhancements include asynchronous message passing, new blocks for interacting with the operating system's networking stack, and the ability to write signal processing blocks in Python.

Newsletters and articles

Groklaw reports on an invitation from the United States Patent and Trademark Office (USPTO) for software developers to join two roundtable discussions aimed at "enhancing" the quality of software patents.
Both events are in February: one in New York City and one in Silicon Valley. As Groklaw points out, the events are space-limited and proprietary software vendors are sure to attend. "Large companies with patent portfolios they treasure and don't want to lose can't represent the interests of individual developers or the FOSS community, those most seriously damaged by toxic software patents." (Thanks to Davide Del Vento)

Taryn Fox has written a blog entry examining GNOME 3's global application menu, including a few outstanding problems, and a proposal for how they could be addressed. "Ideally, the App Menu will contain all of a given app's functionality. This is the assumption new GNOME apps (like Documents) are building on, and the one certain existing apps (like Empathy) are adopting."

At his blog, Mozilla's Luke Crouch explores whether or not HTML applications deployed on "web runtime" environments offer a better experience than those running in a traditional browser. The answer is evidently no: "If you're making an HTML5 app, consider - do you want to make a native desktop application? Why or why not? Then consider if the same reasoning is true for the native mobile application."

Page editor: Nathan Willis
Here is a breakdown of the project we will be tackling:

- A masonry grid of images, shown as collections. The collector, and a description, is attributed to each image. This is what a masonry grid looks like:
- An offline app showing the grid of images. The app will be built with Vue, a fast JavaScript framework for small- and large-scale apps.
- Because PWA images need to be effectively optimized to enhance smooth user experience, we will store and deliver them via Cloudinary, an end-to-end media management service.
- Native app-like behavior when launched on supported mobile browsers.

Let's get right to it!

Setting up Vue with PWA Features

A service worker is a background worker that runs independently in the browser. It doesn't make use of the main thread during execution. In fact, it's unaware of the DOM. Just JavaScript. Utilizing the service worker simplifies the process of making an app run offline. Even though setting it up is simple, things can go really bad when it's not done right. For this reason, a lot of community-driven utility tools exist to help scaffold a service worker with all the recommended configurations. Vue is no exception.

Vue CLI has a community template that comes configured with a service worker. To create a new Vue app with this template, make sure you have the Vue CLI installed:

    npm install -g vue-cli

Then run the following to initialize an app:

    vue init pwa offline-gallery

The major difference is in the build/webpack.prod.conf.js file. Here is what one of the plugin configurations looks like:

    // service worker caching
    new SWPrecacheWebpackPlugin({
      cacheId: 'my-vue-app',
      filename: 'service-worker.js',
      staticFileGlobs: ['dist/**/*.{js,html,css}'],
      minify: true,
      stripPrefix: 'dist/'
    })

The plugin generates a service worker file when we run the build command. The generated service worker caches all the files that match the glob expression in staticFileGlobs. As you can see, it is matching all the files in the dist folder.
This folder is also generated after running the build command. We will see it in action after building the example app.

Masonry Card Component

Each of the cards will have an image, the image collector and the image description. Create a src/components/Card.vue file with the following template:

    <template>
      <div class="card">
        <div class="card-content">
          <img :src="collection.imageUrl">
          <h4>{{collection.collector}}</h4>
          <p>{{collection.description}}</p>
        </div>
      </div>
    </template>

The card expects a collection property from whatever parent it will have in the near future. To indicate that, add a Vue object with the props property:

    <template>
      ...
    </template>
    <script>
    export default {
      props: ['collection'],
      name: 'card'
    }
    </script>

Then add a basic style to make the card pretty, with some hover animations:

    <template>
      ...
    </template>
    <script>
      ...
    </script>
    <style>
    .card {
      background: #F5F5F5;
      padding: 10px;
      margin: 0 0 1em;
      width: 100%;
      cursor: pointer;
      transition: all 100ms ease-in-out;
    }
    .card:hover {
      transform: translateY(-0.5em);
      background: #EBEBEB;
    }
    img {
      display: block;
      width: 100%;
    }
    </style>

Rendering Cards with Images Stored in Cloudinary

Cloudinary is a web service that provides an end-to-end solution for managing media. Storage, delivery, transformation, optimization and more are all provided as one service by Cloudinary. Cloudinary provides an upload API and widget. But I already have some cool images stored on my Cloudinary server, so we can focus on delivering, transforming and optimizing them.

Create an array of JSON data in src/db.json with the content found here. This is a truncated version of the file:

    [
      {
        "imageId": "jorge-vasconez-364878_me6ao9",
        "collector": "John Brian",
        "description": "Yikes invaluably thorough hello more some that neglectfully on badger crud inside mallard thus crud wildebeest pending much because therefore hippopotamus disbanded much."
      },
      {
        "imageId": "wynand-van-poortvliet-364366_gsvyby",
        "collector": "Nnaemeka Ogbonnaya",
        "description": "Inimically kookaburra furrowed impala jeering porcupine flaunting across following raccoon that woolly less gosh weirdly more fiendishly ahead magnificent calmly manta wow racy brought rabbit otter quiet wretched less brusquely wow inflexible abandoned jeepers."
      },
      {
        "imageId": "josef-reckziegel-361544_qwxzuw",
        "collector": "Ola Oluwa",
        "description": "A together cowered the spacious much darn sorely punctiliously hence much less belched goodness however poutingly wow darn fed thought stretched this affectingly more outside waved mad ostrich erect however cuckoo thought."
      },
      ...
    ]

The imageId field is the public_id of the image as assigned by the Cloudinary server, while collector and description are some random name and text respectively. Next, import this data and consume it in your src/App.vue file:

    import data from './db.json';

    export default {
      name: 'app',
      data() {
        return {
          collections: []
        }
      },
      created() {
        this.collections = data.map(this.transform);
      }
    }

We added a property collections and we set its value to the JSON data. We are calling a transform method on each of the items in the array using the map method.

Delivering and Transforming with Cloudinary

You can't display an image using its Cloudinary ID. We need to give Cloudinary the ID so it can generate a valid URL for us. First, install Cloudinary:

    npm install --save cloudinary-core

Import the SDK and configure it with your cloud name (as seen on the Cloudinary dashboard):

    import cloudinary from 'cloudinary-core';
    import data from './db.json';

    export default {
      name: 'app',
      data() {
        return {
          cloudinary: null,
          collections: []
        }
      },
      created() {
        this.cloudinary = cloudinary.Cloudinary.new({
          cloud_name: 'christekh'
        })
        this.collections = data.map(this.transform);
      }
    }

The new method creates a Cloudinary instance that you can use to deliver and transform images.
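It can help to see what the SDK's url() call ultimately produces: a Cloudinary delivery URL has a predictable shape. The helper below is a hypothetical sketch, not part of cloudinary-core (the function name and the minimal option handling are ours), showing how a public ID and a transformation object map onto a URL:

```javascript
// Hypothetical illustration of the delivery URL shape; the real SDK
// supports many more options and edge cases than this sketch does.
function cloudinaryUrl(cloudName, publicId, { width, crop, quality } = {}) {
  const parts = [];
  if (width) parts.push(`w_${width}`);
  if (crop) parts.push(`c_${crop}`);
  if (quality) parts.push(`q_${quality}`);
  // Transformations become a comma-separated path segment before the ID.
  const transform = parts.length ? parts.join(",") + "/" : "";
  return `https://res.cloudinary.com/${cloudName}/image/upload/${transform}${publicId}`;
}
```

For example, cloudinaryUrl("demo", "sample", { width: 300, crop: "fit" }) yields a URL whose path contains the w_300,c_fit transformation segment, which is the same kind of URL the SDK hands back from url().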
The url and image methods take the image public ID and return a URL to the image, or the URL wrapped in an image tag, respectively:

    import cloudinary from 'cloudinary-core';
    import data from './db.json';
    import Card from './components/Card';

    export default {
      name: 'app',
      data() {
        return {
          cloudinary: null,
          collections: []
        }
      },
      created() {
        this.cloudinary = cloudinary.Cloudinary.new({
          cloud_name: 'christekh'
        })
        this.collections = data.map(this.transform);
      },
      methods: {
        transform(collection) {
          const imageUrl = this.cloudinary.url(collection.imageId);
          return Object.assign(collection, { imageUrl });
        }
      }
    }

The transform method adds an imageUrl property to each of the image collections. The property is set to the URL received from the url method. The images will be returned as is. No reduction in dimension or size. We need to use the Cloudinary transformation feature to customize the image:

    methods: {
      transform(collection) {
        const imageUrl = this.cloudinary.url(collection.imageId, {
          width: 300,
          crop: "fit"
        });
        return Object.assign(collection, { imageUrl });
      }
    },

The url and image methods take a second argument, as seen above. This argument is an object, and it is where you can customize your image properties and looks.

To display the cards in the browser, import the card component, declare it as a component in the Vue object, then add it to the template:

    <template>
      <div id="app">
        <header>
          <span>Offline Masonry Gallery</span>
        </header>
        <main>
          <div class="wrapper">
            <div class="cards">
              <card v-for="collection in collections" :collection="collection" :key="collection.imageId"></card>
            </div>
          </div>
        </main>
      </div>
    </template>
    <script>
    ...
    import Card from './components/Card';

    export default {
      name: 'app',
      data() {
        ...
      },
      created() {
        ...
      },
      methods: {
        ...
      },
      components: {
        Card
      }
    }
    </script>

We iterate over each card and list all the cards in the .cards element. Right now we just have a boring single-column grid. Let's write some simple masonry styles.

Masonry Grid

To achieve the masonry grid, you need to add styles to both cards (parent) and card (child).
Adding the column-count and column-gap properties to the parent kicks things off:

    .cards {
      column-count: 1;
      column-gap: 1em;
    }

We are close. Notice how the top cards seem cut off. Setting the display property of the child element to inline-block fixes this:

    .card {
      display: inline-block;
    }

If you consider adding animations to the cards, be careful, as you may experience flickers while using the transform property. Assume you have this simple transition on .card:

    .card {
      transition: all 100ms ease-in-out;
    }
    .card:hover {
      transform: translateY(-0.5em);
      background: #EBEBEB;
    }

Setting perspective and backface-visibility on the element fixes that:

    .card {
      -webkit-perspective: 1000;
      -webkit-backface-visibility: hidden;
      transition: all 100ms ease-in-out;
    }

You also can account for screen sizes and make the grid responsive:

    @media only screen and (min-width: 500px) {
      .cards {
        column-count: 2;
      }
    }
    @media only screen and (min-width: 700px) {
      .cards {
        column-count: 3;
      }
    }
    @media only screen and (min-width: 900px) {
      .cards {
        column-count: 4;
      }
    }
    @media only screen and (min-width: 1100px) {
      .cards {
        column-count: 5;
      }
    }

Optimizing Images

Cloudinary is already doing a great job by optimizing the size of the images after scaling them. You can optimize these images further, without losing quality, while making your app much faster. Set the quality property to auto while transforming the images. Cloudinary will find a perfect balance of size and quality for your app:

    transform(collection) {
      const imageUrl =
        // Optimize
        this.cloudinary.url(collection.imageId, {
          width: 300,
          crop: "fit",
          quality: 'auto'
        });
      return Object.assign(collection, { imageUrl });
    }

This is a picture showing the impact: the first image was optimized from 31kb to 8kb, the second from 16kb to 6kb, and so on. Almost 1/4 of the initial size; about a 75 percent reduction. That's a huge gain.
Another screenshot of the app shows no loss in the quality of the images.

Making the App Work Offline

This is the most interesting aspect of this tutorial. Right now if we were to deploy, then go offline, we would get an error message. If you're using Chrome, you will see the popular dinosaur game. Remember, we already have the service worker configured. Now all we need to do is generate the service worker file when we run the build command. To do so, run the following in your terminal:

    npm run build

Next, serve the generated build files (found in the dist folder). There are lots of options for serving files on localhost, but my favorite remains serve:

    # install serve
    npm install -g serve

    # serve
    serve dist

This will launch the app on localhost at port 5000. You would still see the page running as before. Open the developer tools, click the Application tab and select Service Workers. You should see a registered service worker. The huge red box highlights the status of the registered service worker. As you can see, the status shows it's active. Now let's attempt going offline by clicking the check box in the small red box. Reload the page and you should see our app runs offline.

The app runs, but the images are gone. Don't panic; there is a reasonable explanation for that. Take another look at the service worker config:

    new SWPrecacheWebpackPlugin({
      cacheId: 'my-vue-app',
      filename: 'service-worker.js',
      staticFileGlobs: ['dist/**/*.{js,html,css}'],
      minify: true,
      stripPrefix: 'dist/'
    })

The staticFileGlobs property is an array of local files we need to cache, and we didn't tell the service worker to cache remote images from Cloudinary. To cache remotely stored assets and resources, you need to make use of a different property called runtimeCaching.
It's an array and takes objects that contain the URL pattern to be cached, as well as the caching strategy:

    new SWPrecacheWebpackPlugin({
      cacheId: 'my-vue-app',
      filename: 'service-worker.js',
      staticFileGlobs: ['dist/**/*.{js,html,css}'],
      runtimeCaching: [
        {
          urlPattern: /^https:\/\/res\.cloudinary\.com\//,
          handler: 'cacheFirst'
        }
      ],
      minify: true,
      stripPrefix: 'dist/'
    })

Notice the URL pattern: we are using https rather than http. Service workers, for security reasons, only work over HTTPS, with localhost as an exception. Therefore, make sure all your assets and resources are served over HTTPS. Cloudinary by default serves images over HTTP, so we need to update our transformation so that images are served over HTTPS:

    const imageUrl = this.cloudinary.url(collection.imageId, {
      width: 300,
      crop: "fit",
      quality: 'auto',
      secure: true
    });

Setting the secure property to true does the trick. Now we can rebuild the app, then try serving it offline:

    # Build
    npm run build

    # Serve
    serve dist

Unregister the service worker from the developer tools, go offline, then reload. Now you have an offline app. You can launch the app on your phone, activate airplane mode, reload the page and see the app running offline.

Conclusion

When your app is optimized and caters for users experiencing poor connectivity or no internet access, there is a high tendency of retaining users, because you're keeping them engaged at all times. This is what a PWA does for you. Keep in mind that a PWA must be characterized by optimized content. Cloudinary takes care of that for you, as we saw in the article. You can create a free account to get started.

This post originally appeared on VueJS Developers.
Thread.Sleep Method (Int32)

millisecondsTimeout: The number of milliseconds for which the thread is blocked. Specify zero (0) to indicate that this thread should be suspended to allow other waiting threads to execute. Specify Infinite to block the thread indefinitely.

The thread will not be scheduled for execution by the operating system for the amount of time specified. This method changes the state of the thread to include WaitSleepJoin. This method does not perform standard COM and SendMessage pumping.

The following example uses the Sleep method to block the application's main thread.

    using System;
    using System.Threading;

    class Example
    {
        static void Main()
        {
            for (int i = 0; i < 5; i++)
            {
                Console.WriteLine("Sleep for 2 seconds.");
                Thread.Sleep(2000);
            }
            Console.WriteLine("Main thread exits.");
        }
    }
    /* This example produces the following output:

    Sleep for 2 seconds.
    Sleep for 2 seconds.
    Sleep for 2 seconds.
    Sleep for 2 seconds.
    Sleep for 2 seconds.
    Main thread exits.
    */
This quickstart shows you how to create a small App Engine app that displays a short message. You can also follow step-by-step guidance for this task directly in Cloud Shell Editor.

Additional prerequisites

Initialize your App Engine app with your project and choose its region:

    gcloud app create --project=[YOUR_PROJECT_ID]

When prompted, select the region where you want to locate your App Engine application.

Install the following prerequisites:

Run the following command to install the gcloud component that includes the App Engine extension for Python:

    gcloud components install app-engine-python

Prepare your environment for Python development. It is recommended that you have the latest version of Python, pip, and other related tools installed on your system. For instructions, refer to the Python Development Environment Setup Guide.

Download the Hello World app

We've created a simple Hello World app for Python. To run the Hello World app on your local computer:

Mac OS / Linux

- Create an isolated Python environment:

      python3 -m venv env
      source env/bin/activate

Windows

Use PowerShell to run your Python packages.

- Locate your installation of PowerShell.
- Right-click on the shortcut to PowerShell and start it as an administrator.
- Create an isolated Python environment:

      python -m venv env
      .\env\Scripts\activate

- Navigate to your project directory and install dependencies.

Deploy the Python app to the App Engine flexible environment. If you encountered any errors deploying your application, check the troubleshooting tips.

The app is a basic one-file Flask app:

    from flask import Flask

    app = Flask(__name__)


    @app.route('/')
    def hello():
        """Return a friendly HTTP greeting."""
        return 'Hello World!'


    if __name__ == '__main__':
        # This is used when running locally only. When deploying to Google App
        # Engine, a webserver process such as Gunicorn will serve the app.
        app.run(host='127.0.0.1', port=8080, debug=True)

app.yaml

The app.yaml file describes an app's deployment configuration.
The entrypoint tells App Engine how to start the app. This app uses gunicorn to serve the Python app as an alternative to Flask's development server (used when running locally). The $PORT variable is set by App Engine when it starts the app.

requirements.txt

requirements.txt defines the libraries that will be installed both locally and when deploying to App Engine:

    Flask==2.0.2
    gunicorn==20.1.0
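Because Flask apps are WSGI applications, the hello() handler's request/response cycle can be exercised without a web server at all. The snippet below is an illustrative stand-in (not part of the quickstart, and deliberately Flask-free) that implements the equivalent hello-world handler as a bare WSGI callable using only the standard library, plus a tiny in-process driver:

```python
from wsgiref.util import setup_testing_defaults

def application(environ, start_response):
    """A stdlib-only WSGI app equivalent to the Flask hello() handler."""
    body = b'Hello World!'
    start_response('200 OK', [
        ('Content-Type', 'text/plain; charset=utf-8'),
        ('Content-Length', str(len(body))),
    ])
    return [body]

def call_app(app, path='/'):
    """Invoke a WSGI app in-process and return (status, body)."""
    environ = {}
    setup_testing_defaults(environ)   # fills in a minimal CGI-style environ
    environ['PATH_INFO'] = path
    captured = {}

    def start_response(status, headers):
        captured['status'] = status

    body = b''.join(app(environ, start_response))
    return captured['status'], body
```

Calling call_app(application) returns ('200 OK', b'Hello World!'). In production, gunicorn (named in the entrypoint) plays the role of call_app, driving the WSGI callable for each incoming request.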
https://cloud.google.com/appengine/docs/flexible/python/quickstart?hl=lt&skip_cache=true
CC-MAIN-2021-49
refinedweb
380
50.43
Introduction to Material UI Icons in React Hello, Bonjour and Guten Tag to you. We have said it many times before and we won't get tired of repeating that there are no small things when developing a project. The reasoning behind this belief is that in the modern Digital Age you don't just sell your product. You also sell experience, convenience and emotion, because the modern market is oversaturated with possibilities for a potential customer. That's why every little thing counts. That being said, the focus of today's article – Material UI Icons – might very easily seem unimportant, but it is most definitely not. Material UI Icons, particularly in connection with React, are there to help you perfect your project, whether it is a site, an app or anything else, and thus perfect your potential client's experience, which can lead to the metaphorical scales of their choice weighing in your direction. Don't get us wrong, we don't say that they alone can do it for you. We are talking about them as part of a whole complex of small things. And we will surely talk about each and every one of them in their own time. But today we focus on React Material UI Icons and only on them, in order to do them proper justice. That means looking at: What they are; How you can use them with React; How to import a React Material-UI icon for your project. So, without further ado, let's get to the first and second points of the above-mentioned list. What Are Material UI Icons and How to Use Them To put it bluntly, Material UI Icons are a pack of ready-made icons, each of which represents a command, file, device, or directory. They can also be used to represent frequent operations, like saving, and are usually placed in application bars and/or toolbars. Such icons are used to create an easy shortcut to any action on the site or in the app, and to replace long word descriptions with an icon that is easily understandable for a user.
Material UI Icons mainly consist of two components: SvgIcon and Icon. The SvgIcon component takes an SVG path element as its child and converts it into a React component. This React component allows you to customize the icon's style and its reaction to a click. It should also be mentioned that the size of said icon should be 24×24 pixels, but we are getting ahead of ourselves. The second component, the Icon component, is there to display an icon from any ligature-enabled icon font. "What does it all have to do with React?" – you might ask. And the answer is quite simple: you can also use them when creating a project with React's help, which is well and good, because it allows you to keep this task in your line of focus without a need to switch. And there are no real pitfalls, as the pack is ready-made and ready to use. It should be said, though, that Material UI Icons are not the be-all and end-all of UI icons, as there are plenty of other packs on the market. So, why choose them? In our opinion, you should choose them because they are slick, stylish, minimalistic, and supported by all major platforms, as well as browsers. But the best part is that they were created by Google. And this behemoth of a corporation knows a thing or two about creating site components. So, there you have it. Now, let's take a closer look at the process of creating and using said icons in your project. How to import a React Material UI icon for your project So, let's say you are creating a website for your awesome project and you want to make it more colorful, vibrant and, dare we say it, more internationally accessible. That's where Material UI Icons can come to your rescue, as they tick all of the above-mentioned boxes. So, first of all, here's a little guide to how you can add the ready-made Material UI Icons into your project. Step 1. Installing the Material UI framework.
The first and foremost thing to do is to install the Material UI framework in order to be able to work with all of its components. To do so, run one of the following commands, depending on whether you use npm or yarn, in your project:

npm install @material-ui/core
yarn add @material-ui/core

Step 2. Installing Material UI Icons. The next step here would be to install the icons themselves into the project's catalogue. Once again, there are two ways to do it: through yarn or through npm:

npm install @material-ui/icons
yarn add @material-ui/icons

These components use the Material UI SvgIcon component we have mentioned above to render the SVG path for each and every icon. This, in turn, creates a peer dependency on the next Material-UI release. Step 3. Importing Material UI Icons. After the installation of the Material UI Icons into your project's catalogue, your next step would be to import them by using one of the following two methods: Method #1. This option is safer than the second one, although it somewhat restricts the creative potential of the developer's experience:

import AccessAlarmIcon from '@material-ui/icons/AccessAlarm';
import ThreeDRotation from '@material-ui/icons/ThreeDRotation';

Method #2. This option is less safe but, on the other hand, allows an experienced developer more creative freedom:

import { AccessAlarm, ThreeDRotation } from '@material-ui/icons';

By using one of these methods, we've imported the Access Alarm and 3D Rotation icons into our project, and you will be able to see them in their default variation next time you boot up your project. But keep in mind that all of the icons in the Material UI framework have five different variations: Filled variation (the default option); Outlined variation; Rounded variation; Twotone variation; And Sharp variation. So, if you want to use any of these variations, you would need to append the theme name to the icon name (for example, AccessAlarmOutlined).
Also keep in mind that while Material Design icons use "snake_case" naming, @material-ui/icons use "PascalCase" for the naming. Step 4. Adding CSS to Material UI Icons to change styles. Let's assume that your project has its own YouTube channel and you would like to add a link to it to your site. Adding the full link would look rather unfitting on any site, so using a Material UI icon of YouTube would be a fit here. And let's also assume that for stylistic reasons you want it to be in red and white, just like the original logo. In that situation your next step would be to add CSS to your icon to make it appear the way you need. Your next move would be as follows:

import React, { Component } from 'react'
import { AppBar, Toolbar } from '@material-ui/core'
import YouTubeIcon from '@material-ui/icons/YouTube';

export class Maticon1 extends Component {
  render() {
    return (
      <div>
        <AppBar className="mrg" position="static">
          <Toolbar>
            <div style={{ 'paddingLeft': "600px" }}>Material UI Social media Icons</div>
          </Toolbar>
        </AppBar>
        <YouTubeIcon style={{ 'color': "red" }} /><br></br>
      </div>
    )
  }
}

export default Maticon1

In this example, Maticon1 is the component where we add social media icons. After that, don't forget to add a reference to this component in the App.js file by doing the following:

import React from 'react';
import logo from './logo.svg';
import './App.css';
import Maticon1 from './Maticon1'

function App() {
  return (
    <div className="App">
      <Maticon1 />
    </div>
  );
}

export default App;

And the next time you run your project you will see a beautiful small Material UI icon of YouTube in red and white. But what if you need an icon that is not in the default set of Material UI Icons? Well, in that case the SvgIcon wrapper we've already mentioned above will come to your rescue. It allows you to create custom SVG icons by extending the native <svg> element. Bear in mind that all the SVG elements should be scaled for a 24×24 pixel viewport.
This way the resulting icon would be available to use as a child for other Material UI components that themselves use icons, and would be available for customization with the viewBox attribute. You would also be free to apply any of the theme colors by using the color prop, as by default all the components inherit the current color. The code for customization would look the following way:

function HomeIcon(props) {
  return (
    <SvgIcon {...props}>
      <path d="M10 20v-6h4v6h5v-8h3L12 3 2 12h3v8z" />
    </SvgIcon>
  );
}

And the code for color setting would look the following way:

<div className={classes.root}>
  <HomeIcon />
  <HomeIcon color="primary" />
  <HomeIcon color="secondary" />
  <HomeIcon color="action" />
  <HomeIcon color="disabled" />
  <HomeIcon style={{ color: green[500] }} />
</div>

After adding these lines, your icons will appear in the corresponding colors. That is how you install and customize your Material UI Icons. Feel free to wield this power to decorate your project with all sorts of beautiful icons and somewhat secretly enrich your end-user's experience with it. Conclusions to Have Coming to the end of this article, we hope that we've shown you that even the littlest of things are noteworthy in their own right, using the Material UI Icons example. But the best part about this whole ordeal is the fact that Material UI Icons are not even that small, as they help you convey the idea of your project. Just like a mosaic that consists of lots and lots of small pieces that create a sight to see when working together. It should also be mentioned once again that they are exceptionally easy to integrate, customize and shape to your needs, making Material UI Icons a must-try and, eventually, a go-to for any project development. And that is all for today. We wish you a very happy day, filled with small pleasant things to enjoy. And, as always, feel free to read up on any other of our articles. See you in the next article.
You might also like these articles Top 20 Best React Website Templates for React Developers [Free and Premium] Best 10 IDEs for React.js for 2021 React vs. Vue: What Is Easier? What Is Trending? Detailed Guide With All +/- [2021] The post How to Use Material UI Icons In React appeared first on Flatlogic Blog.
https://online-code-generator.com/tag/material-ui-icons-2/
CC-MAIN-2022-40
refinedweb
1,773
58.72
Python Garbage Collection As explained earlier, Python deletes objects that are no longer referenced in the program to free up memory space. This process in which Python frees blocks of memory that are no longer used is called Garbage Collection. The Python Garbage Collector (GC) runs during the program execution and is triggered when an object's reference count drops to zero. The reference count increases if an object is assigned a new name or is placed in a container, like a tuple or dictionary. Similarly, the reference count decreases when the reference to an object is reassigned, when the object's reference goes out of scope, or when an object is deleted. The memory is a heap that contains objects and other data structures used in the program. The allocation and de-allocation of this heap space is controlled by the Python memory manager through the use of API functions. Python Objects in Memory Every variable in Python refers to an object. Objects can either be simple (containing numbers, strings, etc.) or containers (dictionaries, lists, or user defined classes). Furthermore, Python is a dynamically typed language, which means that we do not need to declare the variables or their types before using them in a program. For example: >>> x = 5 >>> print(x) 5 >>> del x >>> print(x) Traceback (most recent call last): File "<mem_manage>", line 1, in <module> print(x) NameError: name 'x' is not defined If you look at the first 2 lines of the above program, object x is known. When we delete the object x and try to use it, we get an error stating that the variable x is not defined. You can see that the garbage collection in Python is fully automated and the programmer does not need to worry about it, unlike in languages like C. Modifying the Garbage Collector The Python garbage collector has three generations in which objects are classified. A new object, at the starting point of its life cycle, belongs to the first generation of the garbage collector.
As an object survives garbage collection, it is moved up to the next generation. Each of the 3 generations of the garbage collector has a threshold. Specifically, when the number of allocations minus the number of de-allocations exceeds the generation's threshold, that generation will run garbage collection. Earlier generations are also garbage collected more often than the higher generations. This is because newer objects are more likely to be discarded than old objects. The gc module includes functions to change the threshold values, trigger a garbage collection process manually, disable the garbage collection process, etc. We can check the threshold values of the different generations of the garbage collector using the get_threshold() method: import gc print(gc.get_threshold()) Sample Output: (700, 10, 10) As you see, here we have a threshold of 700 for the first generation, and 10 for each of the other two generations. We can alter the threshold values for triggering the garbage collection process using the set_threshold() method of the gc module: gc.set_threshold(900, 15, 15) In the above example, we have increased the threshold value for all 3 generations. Increasing the threshold values will decrease the frequency of running the garbage collector. Normally, we need not think too much about Python's garbage collection as developers, but this may be useful when optimizing the Python runtime for your target system. One of the key benefits is that Python's garbage collection mechanism handles a lot of low-level details for the developer automatically.
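To see how close each generation currently is to its threshold, the gc module also exposes the per-generation allocation counters; a small sketch:

```python
import gc

# gc.get_count() returns the current collection counters for the three
# generations; gc.get_threshold() returns the corresponding limits.
counts = gc.get_count()
thresholds = gc.get_threshold()

print("counts:", counts)          # e.g. (493, 5, 1) -- exact values vary per run
print("thresholds:", thresholds)  # (700, 10, 10) by default
```

Comparing the two tuples shows how many more allocations each generation can absorb before a collection is triggered on it.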
This classical reference counting mechanism is very effective, except that it fails to work when the program has reference cycles. A reference cycle happens when one or more objects reference each other, and hence the reference count never reaches zero. Let's consider an example. >>> def create_cycle(): ... list = [8, 9, 10] ... list.append(list) ... return list ... >>> create_cycle() [8, 9, 10, [...]] The above code creates a reference cycle, where the object list refers to itself. Hence, the memory for the object list will not be freed automatically when the function returns. The reference cycle problem can't be solved by reference counting. However, it can be solved by manually triggering the cyclic garbage collector in your Python application. To do so, we can use the gc.collect() function of the gc module. import gc n = gc.collect() print("Number of unreachable objects collected by GC:", n) The gc.collect() function returns the number of objects it has collected and de-allocated. There are two ways to perform manual garbage collection: time-based or event-based garbage collection. Time-based garbage collection is pretty simple: the gc.collect() function is called after a fixed time interval. Event-based garbage collection calls the gc.collect() function after an event occurs (e.g. when the application exits or remains idle for a specific time period). Let's see how manual garbage collection works by creating a few reference cycles. import sys, gc def create_cycle(): list = [8, 9, 10] list.append(list) def main(): print("Creating garbage...") for i in range(8): create_cycle() print("Collecting...") n = gc.collect() print("Number of unreachable objects collected by GC:", n) print("Uncollectable garbage:", gc.garbage) if __name__ == "__main__": main() sys.exit() The output is as below: Creating garbage... Collecting...
Number of unreachable objects collected by GC: 8 Uncollectable garbage: [] The script above creates a list object that is referred to by a variable, creatively named list. The first element of the list object refers to itself. The reference count of the list object is always greater than zero even if it is deleted or goes out of scope in the program. Hence, the list object is not garbage collected due to the circular reference. The garbage collector mechanism in Python will automatically check for, and collect, circular references periodically. In the above code, as the reference count is at least 1 and can never reach 0, we have forcefully garbage collected the objects by calling gc.collect(). However, remember not to force garbage collection frequently. The reason is that even after freeing the memory, the GC takes time to evaluate an object's eligibility to be garbage collected, taking up processor time and resources. Also, remember to manually manage the garbage collector only after your app has started completely. Conclusion In this article, we discussed how memory management in Python is handled automatically by using reference counting and garbage collection strategies. Without garbage collection, implementing a successful memory management mechanism in Python is impossible. Also, programmers need not worry about deleting allocated memory, as it is taken care of by the Python memory manager. This leads to fewer memory leaks and better performance.
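As a concrete sketch of the event-based variant mentioned above: the standard atexit module can register a final collection to run when the application exits (the function name here is our own invention):

```python
import atexit
import gc

def collect_on_exit():
    # One last sweep at shutdown, so cycles created late in the program's
    # life are still reclaimed before the interpreter goes away.
    freed = gc.collect()
    return freed

# atexit.register returns the callable it was given, so this line could
# also be written as a decorator on the function definition.
atexit.register(collect_on_exit)
```

The same function could just as well be hooked to any other application event, such as an idle timer firing.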
https://stackabuse.com/basics-of-memory-management-in-python/
CC-MAIN-2019-43
refinedweb
1,171
56.05
#define GL_GLEXT_PROTOTYPES #include <GL/gl.h> #include <GL/glext.h> Will the updated Warp3D Nova and OpenGL ES 2.0 libraries from Enhancer 2.0 improve the overall compatibility of GL4ES? Will there be a new version of GL4ES? But if you need it right now, I can just upload the latest libgl4es.a for you. Yes please. Do you also have updated builds of libGLU_gl4es.a, libSDL_gl4es.a and libSDL2_gl4es.a? There is: http... Thx! Some Boing-Cake on the way ;) Are those special gl4es libs automatically picked up when i compile an ogles2 target? Especially interested in SDL1/2. edit: oh wait, i think i'm mixing things up again... Made new full SDK, grab it ...
https://www.amigans.net/modules/xforum/viewtopic.php?post_id=124072
CC-MAIN-2021-31
refinedweb
117
72.53
I thought it would be nice to be able to produce a CSV file by doing something like this: string ordersCsv = orderRepository.GetAll().Select(o => new { OrderId = o.OrderId, Email = o.Email, OrderStatus = o.OrderStatus.Name, CreatedDate = o.CreatedDate, Total = o.Basket.Total }).AsCsv(); So here's an extension method to do just that: public static string AsCsv<T>(this IEnumerable<T> items) where T : class { var csvBuilder = new StringBuilder(); var properties = typeof(T).GetProperties(); foreach (T item in items) { string line = properties.Select(p => p.GetValue(item, null).ToCsvValue()).ToArray().Join(","); csvBuilder.AppendLine(line); } return csvBuilder.ToString(); } private static string ToCsvValue<T>(this T item) { if (item is string) { return "\"{0}\"".With(item.ToString().Replace("\"", "\\\"")); } double dummy; if (double.TryParse(item.ToString(), out dummy)) { return "{0}".With(item); } return "\"{0}\"".With(item); } It works with anything that implements IEnumerable<T>; that includes the results of LINQ-to-SQL queries, arrays, List<T> and pretty much any kind of collection.
Here's its unit test: [TestFixture] public class EnumerableExtensionsTests { [Test] public void GetCsv_ShouldRenderCorrectCsv() { IEnumerable<Thing> things = new List<Thing>() { new Thing { Id = 12, Name = "Thing one", Date = new DateTime(2008, 4, 20), Child = new Child { Name = "Max" } }, new Thing { Id = 13, Name = "Thing two", Date = new DateTime(2008, 5, 20), Child = new Child { Name = "Robbie" } } }; string csv = things.Select(t => new { Id = t.Id, Name = t.Name, Date = t.Date, Child = t.Child.Name }).AsCsv(); Assert.That(csv, Is.EqualTo(expectedCsv)); } const string expectedCsv = @"12,""Thing one"",""20/04/2008 00:00:00"",""Max"" 13,""Thing two"",""20/05/2008 00:00:00"",""Robbie"" "; public class Thing { public int Id { get; set; } public string Name { get; set; } public DateTime Date { get; set; } public Child Child { get; set; } } public class Child { public string Name { get; set; } } } 11 comments: Cool stuff. One worry on the date format though. i think it would be better to use a portable format (i.e - insensitive to mmdd/ddmm and to timezones) so: if (item is DateTime) { return string.Format("{0:u}", item); } Hi Ken, Thanks, that's a good suggestion. There are other things that I really should do like escape commas properly and new lines. It's not really the most robust CSV implementation at the moment. Hi Mike, when I try to compile, I get an error "string does not contain a definition for 'With'..." at "return "\"{0}\"".With(item) ... any ideas? Luis. Hi Luisfx, Sorry 'With' is an extension method on string in place of the string.format statement. Just change the line to: return string.Format("\"{0}\"", item.ToString().Replace("\"", "\\\"")); And it should work fine. Mike, You haven't thought about the scenario where the thing you are converting has a string that has a comma in it! Woops, I just saw your comment! Hi Mike I'm also getting the error "No overload for method 'Join' takes '1' arguments", any idea why?
Thanks Hi, I get the same error: "No overload for method 'Join' takes '1' arguments", Hi Anonymous, I can't remember now, but I may have written a little extension method for join. It's a simple matter to use string.Join() instead. if you want to add a line with the names of the linq properties do something like: if (UseHeader) { csvBuilder.Append(string.Join(",", (from a in properties select a.Name).ToArray())); } put this above the line that iterates over the IEnumerable Changed your ToCsvValue function to handle nulls, and got rid of the custom Extension methods: private static string ToCsvValue<T>(this T item) { if (item is string) { return string.Format("\"{0}\"", item.ToString().Replace("\"", "\\\"")); } double dummy; if (item == null) return ""; if (double.TryParse(item.ToString(), out dummy)) return string.Format("{0}", item); return string.Format("\"{0}\"", item); } Also, regarding Scott's post on getting the headers: Change Append() to AppendLine() to avoid losing the first row of data!
http://mikehadlow.blogspot.com/2008/06/linq-to-csv.html?showComment=1215533520000
CC-MAIN-2019-09
refinedweb
645
67.35
[ ] Robert Muir commented on SOLR-3204: ----------------------------------- {quote} It's 5 classes it looks like - and it's Apache licensed and there is no release - can't we simply suck this into our code base with a readme and JIRA about removing it and switching to a release when one occurs? Get the whole dependency thing right out of Maven's claws. {quote} Yes, while this is the case and, as Emmanuel states, the purpose of his patch, it's not a scalable solution in general. It's also expensive to fork software... there is always the danger that we then become out-of-sync and never sync back up. (feel free to hit me, for Uwe's example of snowball, which is a perfect example) What about the other cases that fall into this same 'maven namespace' category, e.g. the UIMA case for example? How to fix the general problem? That's why I think that the only proper way to fix the bug is to attack it at its heart: the fact that for maven project A to depend upon project B that is not in maven, maven project A must "publish" some "fake maven release" of project B.
http://mail-archives.apache.org/mod_mbox/lucene-dev/201203.mbox/%3C1025147714.34506.1331135097718.JavaMail.tomcat@hel.zones.apache.org%3E
CC-MAIN-2017-09
refinedweb
199
75.44
I am going to implement an interface but I don't want to override all of its methods. What should I do to avoid having methods without a body in my code? You just have to have an empty code body in your class. The whole idea of an interface is to force you to implement the methods. Chris Interfaces can be used for inheritance between unrelated classes that are not part of the same hierarchy. Using an interface you can specify what a class should do, but not how. A class can implement multiple interfaces. An interface can extend one or more interfaces using the keyword extends. All data members of an interface are public, static and final by default. An interface method is implicitly public. You really need to override its methods... if you don't want any code in a method's body, then just leave it blank. I've been doing some reading on GUIs recently, and they involve interfaces when implementing the listener and the source of the event. How is this relevant to your original question I hear you ask :p well in the aforementioned case you can bypass writing all the methods of the interface in your listener class if you use an 'adapter'. This is itself a class that houses all the methods of the relevant interface with blank bodies; you then have your listener extend said adapter and over-ride the methods of the interface you do want to actually use. The same process can be used in more general terms: Code Java:

public interface MyInterface {
    void method1();
    void method2();
    void method3();
}
There fore allowing it to take advantage of polymorphism and what not. From this point on, you can keep on creating classes that extend the this adapter and again not have to implement all the methods of the interface that you don't need.
http://www.javaprogrammingforums.com/%20object-oriented-programming/3915-interface-printingthethread.html
CC-MAIN-2013-48
refinedweb
385
58.52
Graph coloring problem is to assign colors to certain elements of a graph subject to certain constraints. Vertex coloring is the most common graph coloring… Read More » Hi All, Here is my interview experience with Amazon for internship. Hope it helps: Round 1: Online round with 20 objective questions on (Questions related… Read More » Write a program that compiles and runs both in C and C++, but produces different results when compiled by C and C++ compilers. There can… Read More » Although C++ is designed to have backward compatibility with C, there can be many C programs that would produce a compiler error when compiled with a… Read More » What are advantages of DBMS over traditional file based systems? Ans: Database management systems were developed to handle the following difficulties of typical file-processing systems… Read More » What is the difference between declaration and definition of a variable/function? Ans: Declaration of a variable/function simply declares that the variable/function exists somewhere in the… Read More » What will be the maximum sum of 44, 42, 40, …… ? (A) 502 (B) 504 (C) 506 (D) 500 Answer: (C) Explanation: This is… Read More » Complete the sentence: Universalism is to particularism as diffuseness is to _________________ (A) specificity (B) neutrality (C) generality (D) adaptation Answer: (A) Explanation: Diffuseness means… Read More » Consider the FDs given in above question. The relation R is (A) in 1NF, but not in 2NF. (B) in 2NF, but not in 3NF.… Read More » In an IPv4 datagram, the M bit is 0, the value of HLEN is 10, the value of total length is 400 and the fragment… Read More » Consider the following sequence of micro-operations. MBR ← PC MAR ← X PC ← Y Memory ← MBR Which one of the following is a… Read More » What is the logical translation of the following statement?
“None of my friends are perfect.” (A) A (B) B (C) C (D) D Answer: (D)… Read More » The transport layer protocols used for real time multimedia, file transfer, DNS and email, respectively are: (A) TCP, UDP, UDP and TCP (B) UDP, TCP,… Read More » Suppose p is the number of cars per minute passing through a certain road junction between 5 PM and 6 PM, and p has a… Read More » #include <stdio.h> extern int var = 0; int main() { var = 10; printf("%d ", var); return 0;… Read More » Given a number ‘n’, how to check if n is a Fibonacci number. First few Fibonacci numbers are 0, 1, 1, 2, 3, 5, 8,… Read More »
https://www.geeksforgeeks.org/easy/240/
CC-MAIN-2019-39
refinedweb
534
52.02
If the module cannot be found, an ImportError is raised. Any other exceptions raised are simply propagated up, aborting the import process. The find_spec() method of meta path finders is called with two or three arguments.

Changed in version 3.4: The find_spec() method of meta path finders replaced find_module(), which is now deprecated. While it will continue to work without change, the import machinery will try it only if the finder does not implement find_spec().

The module is added to sys.modules before the loader executes the module code. This is crucial because the module code may (directly or indirectly) import itself; adding it to sys.modules beforehand prevents unbounded recursion in the worst case and multiple loading in the best.

- If loading fails, the failing module – and only the failing module – gets removed from sys.modules. Any module already in the sys.modules cache is left alone.
- If the loader does not define create_module(), the module object is created by the import machinery itself, before the loader executes the module code, to prevent unbounded recursion or multiple loading.
- If loading fails, the loader must remove any modules it has inserted into sys.modules, but it must remove only the failing module, and only if the loader itself has loaded it explicitly.

The import machinery automatically sets __path__ correctly for the namespace package.

As mentioned previously, Python comes with several default meta path finders. One of these, called the path based finder (PathFinder), implements the find_spec() protocol described previously. The path based finder delegates to path entry finders (PathEntryFinder), whose find_spec() method returns a spec, which is then used when loading the module. However, if find_spec() is implemented on the path entry finder, the legacy methods are ignored. If both find_loader() and find_module() exist on a path entry finder, the import system will always call find_loader() in preference to find_module().
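The meta path machinery described above is easy to exercise. The sketch below (module and class names invented for illustration) installs a finder at the front of sys.meta_path whose find_spec() rejects one module name and defers to the remaining finders for everything else:

```python
import sys
from importlib.abc import MetaPathFinder

class BlockFinder(MetaPathFinder):
    """Refuse to import one specific module name."""

    def __init__(self, blocked_name):
        self.blocked_name = blocked_name

    def find_spec(self, fullname, path, target=None):
        if fullname == self.blocked_name:
            # Raising here aborts the import: exceptions from finders
            # propagate up to the code that triggered the import.
            raise ImportError(f"import of {fullname!r} is blocked")
        return None  # None means: let the next finder on sys.meta_path try

# Finders are consulted in order, so insert at the front to run first.
sys.meta_path.insert(0, BlockFinder("some_forbidden_module"))
```

With the finder installed, importing some_forbidden_module raises ImportError, while ordinary imports are handled by the remaining finders and keep working.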
http://docs.python.org/3.4/reference/import.html
CC-MAIN-2014-10
refinedweb
260
50.23
Power of the color wheel Discussion in 'HTML' started by Travis Newbury, Sep 12, 2007.75 - Wes Groleau - Aug 21, 2003 disable mouse scroll wheel > prevent problem with dropdownnicholas, Dec 16, 2004, in forum: ASP .Net - Replies: - 1 - Views: - 5,045 - Kevin Spencer - Dec 16, 2004 def power, problem when raising power to decimals, Apr 16, 2008, in forum: Python - Replies: - 8 - Views: - 368 - Mark Dickinson - Apr 17, 2008 Changing font color from current font color to black colorKamaljeet Saini, Feb 13, 2009, in forum: Ruby - Replies: - 0 - Views: - 403 - Kamaljeet Saini - Feb 13, 2009 color wheel pickerAndrew Poulos, May 11, 2005, in forum: Javascript - Replies: - 3 - Views: - 345 - Richard Cornford - May 11, 2005
http://www.thecodingforums.com/threads/power-of-the-color-wheel.536782/
The Enable with Segment Button is a fast and easy way for new customers to start sending data to your app, as soon as they sign up. If you’ve already integrated with Segment as a partner, you can implement the button in your setup-flow now. If you haven’t integrated with us yet, you can learn more about joining our platform here. Without the Segment button, a customer who wants to activate your product via Segment would need to visit the Segment website, choose a project to send data from, go to the Segment dashboard, find your service amongst hundreds of other destinations and copy-paste API keys from your interface. We saw that many users were confused going through this process! This explainer video will show a customer's current confusion, and what an experience would look like with the new button. The Enable with Segment Button will improve your activation rate with Segment customers because it simplifies the activation process. Example Flow Install the Button Implementing the Segment button and improving your activation experience is easy. The bare minimum to get the button functional is to include your "Destination ID" as well as the settings we will use to configure the destination. Since both of these rely on Segment internals, below is a little tool to help you discover those values. Add HTML Add Settings The settings can vary a lot from destination to destination, so in order to give you the most flexibility, we've opted to use a raw JSON string for this attribute. (for example: data-settings='{"apiKey":"abc123"}') When you choose your destination above, we will give you a list of all the settings we offer to our customers to configure their destination. (required settings will be in bold) Your JSON object can use any of the keys specified above. Available Options Integration ID ( data-integration) The Integration ID will let us know which destination your user is interested in activating. 
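To make the attributes concrete, a hypothetical button element might look like the following (the destination ID, settings, and redirect URL are placeholder values for illustration, not real ones):

```html
<!-- Hypothetical example: substitute your own destination ID and settings -->
<button class="segment-enable-button"
        data-integration="54521fd925e721e32a72eabc"
        data-settings='{"apiKey":"abc123"}'
        data-redirect-url="https://example.com/setup/complete"
        data-size="medium">
  Enable with Segment
</button>
```

The data-settings value here reuses the apiKey example from above; the settings your destination actually requires may differ.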
Select your app from the dropdown in the Install the Button section to make sure you know what your Integration ID is. Set your Integration ID in the data-integration attribute on the button element. Settings ( data-settings) In order for a customer to successfully send data from their Segment data hub to your app, we’ll need the proper configuration for your customer. Select your app from the dropdown in the Install the Button section to make sure you know what configuration options are available for your destination. The settings will need to be encoded as a raw JSON string in the data-settings attribute. Redirect URL ( data-redirect-url) The redirect URL can be anything you want it to be. It’s where we’ll send customers after successfully starting the flow of data from Segment to your app. This could be a page for further setup and activation, or a dashboard where they’ll be able to start seeing data. Note that if a redirect URL is not specified, we will simply bring the user to the configuration page inside our own app so that they can set up Segment on their site, servers, or app. Similarly, if the user has not started sending data via Segment, we will ignore your redirect URL so that the customer can set up Segment. This prevents confusion about why Segment or the destination isn't working. Button Size ( data-size) The button has one visual style but can be easily displayed in three different sizes: large, small, and medium (the default). Set the size of your Segment Activation button via the data-size attribute. Advanced Usage If you need more flexibility, you can also use the raw JS API for the button. When the script above is loaded, it will create a segment global namespace. When the script loads, it will automatically initialize the button for any .segment-enable-button elements it finds on the page. If you don’t necessarily have your button ready on page load, you can also trigger this magic yourself using segment.EnableButton.init().
Otherwise, you can initialize new buttons yourself using segment.EnableButton as a constructor: Available Configuration Options: Most of the configuration options below correspond to the HTML options above, except being in camelCase instead of being hyphenated. Generating Settings with a Function In some cases, it might not be possible to know the settings ahead of time. In this case, we've allowed for settings to be passed as a function, which will be invoked when the user clicks the button, but before they are taken to Segment: You can also work asynchronously, simply add an argument to your function. This will be passed in as a callback function. (it only accepts a single argument, which is the settings object you've generated) When using a function, errors will need to be handled directly by your application. <script src="/docs/client/partners/enable-button/index.js" async></script> <script src="//static.segment.com/enable-button/v1/index.js" async></script> Examples of Button Placement If you have any questions or see anywhere we can improve our documentation, please let us know or kick off a conversation in the Segment Community!
https://segment.com/docs/guides/partners/enable-with-segment.md/
In the previous articles in this series (see part 1, part 2, part 3, and part 4), you learned how to use Pygame and Python to spawn a playable hero. The Level class gets called along with each new level. It requires some modification so that each time you create a new level, you can create and place several enemies:

class Level():
    def bad(lvl, eloc):
        if lvl == 1:
            # image file name truncated in the original; 'enemy.png' is assumed
            enemy = Enemy(eloc[0], eloc[1], 'enemy.png')
            enemy_list = pygame.sprite.Group()
            enemy_list.add(enemy)
        return enemy_list

with the enemy's location set before the main loop, for example: eloc = [300, 0]

Code so far

For your reference, here's the code so far (abridged; parts of the listing were lost):

#!/usr/bin/env python3

# in the player's update() method: lose health on enemy collision
        for enemy in hit_list:
            self.health -= 1
            print(self.health)

# in the Enemy class: self.counter = 0 is set in __init__
    def move(self):
        """ enemy movement """
        distance = 80
        speed = 8
        if self.counter >= 0 and self.counter <= distance:
            self.rect.x += speed
        elif self.counter >= distance and self.counter <= distance*2:
            self.rect.x -= speed
        else:
            self.counter = 0
        self.counter += 1

class Level():
    def bad(lvl, eloc):
        if lvl == 1:
            enemy = Enemy(eloc[0], eloc[1], 'enemy.png')
            enemy_list = pygame.sprite.Group()
            enemy_list.add(enemy)
        return enemy_list

# in the main loop
world.blit(backdrop, backdropbox)
player.update()
player_list.draw(world)
enemy_list.draw(world)
for e in enemy_list:
    e.move()
pygame.display.flip()
clock.tick(fps)
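The counter-based patrol logic in move() can be exercised on its own, without Pygame; the sketch below extracts it into a pure function (a hypothetical helper, not part of the article's code):

```python
def patrol_step(counter, x, distance=80, speed=8):
    """One tick of the enemy's back-and-forth patrol.

    Mirrors the counter logic of Enemy.move(): walk right for
    `distance` ticks, walk left for the next `distance` ticks,
    then reset the counter.
    """
    if 0 <= counter <= distance:
        x += speed
    elif distance <= counter <= distance * 2:
        x -= speed
    else:
        counter = 0
    counter += 1
    return counter, x

# One full cycle (161 ticks) brings the enemy almost back to its start.
counter, x = 0, 100
for _ in range(161):
    counter, x = patrol_step(counter, x)
print(counter, x)  # 161 108
```

Because counter == distance satisfies both conditions, the first branch wins on that tick, which is why the return leg ends 8 pixels past the starting x.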
https://opensource.com/comment/157166
Xcode Manipulation module

This is an Xcode Manipulation module maintained by the Unity mobile team. The module is bundled with Unity 5.0 and newer. The documentation of the version bundled with Unity is provided here. The open source version of the code is placed in the UnityEditor.iOS.Xcode.Custom namespace by default to avoid conflicts with the DLL bundled within Unity.

Contribution

Please name your branch something other than 'stable' or 'dev' for pull requests. The 'dev' branch is the preferred pull request target. Pull requests to the 'stable' branch will first be merged to the 'dev' branch. By making a pull request to this repository you accept to release your code under the MIT license (see the LICENSE file).
https://bitbucket.org/Unity-Technologies/xcodeapi/overview
Odoo Help

Odoo is the world's easiest all-in-one management software. It includes hundreds of business apps: CRM | e-Commerce | Accounting | Inventory | PoS | Project management | MRP | etc.

I need to know more about the Odoo 8 Reporting system (dashboards)

I'm trying to develop custom human resources evaluation dashboards using a custom module. I took the existing hr_evaluation dashboard as a starting point and tried to do such a development.

A- Python side:

I found that I have to create a report model, "hr_evaluation_report", that takes fields from the models involved in the dashboard (here the hr_evaluation_evaluation and hr_evaluation_interview models) and, if needed, some fields used to display computed results on the dashboard (here delay_date and overpass_delay), using _depends, as the following code demonstrates:

In this file I also found an init method. I need to know more about this init method:

1- Is it mandatory?
2- Does it serve only to perform a SQL request to make data available for the reporting model and then initialize it?
3- I see that it acts on view creation using tools.drop_view_if_exists, so what are all the purposes of this method?

B- View side:

In hr_evaluation_view.xml we can see that it:

1- Initializes the graph view
2- Makes filters and group-bys available
3- Adds a dashboard menuitem, and an action allowing to open the corresponding view.

But I can't see: How do I customize the measures selection box? Image showing the measures selection box

And a final question about this button sequence: Image showing the sequence button

Does this button sequence come by default (using inheritance maybe)?

Every suggestion, tutorial, addition, or hint will be welcome.
------------------------------------------------- EDIT --------------------------------------------------------------------------------------------------------------------

@Axel Mendoza: I developed my custom module for reporting, without the init method, and I set _auto=True, so Odoo will create the SQL table for me in the database. Here are the main module files:

1. hr_evaluation_report_extension.py

In this newly created model, I want to get the fields needed to retrieve data. The purpose is to have the goal-reaching percentage per employee, with the ability to group by employee, department, job, etc. The 'reach' field is the measurable one, because it's the only one with float type. So I needed to put 3 models in the depends list, but there are no relational fields between the three. Is that necessary for data consistency? In the example I followed, when using the init method, they did a LEFT JOIN to get the data. What do I need in case I don't use the init method?

from openerp.osv import fields, osv

class hr_evaluation_report_goals(osv.Model):
    _name = "hr.evaluation.report.goals"
    _description = "Goals reaching Statistics"
    _auto = True
    _columns = {
        'create_date': fields.datetime('Create Date', readonly=True),
        'plan_id': fields.many2one('hr_evaluation.plan', 'Plan', readonly=True),
        'state': fields.selection([
            ('draft', 'Draft'),
            ('wait', 'Plan In Progress'),
            ('progress', 'Final Validation'),
            ('done', 'Done'),
            ('cancel', 'Cancelled'),
        ], 'Status', readonly=True),
    }
    _depends = {
        'hr.employee': ['name_related', 'job_id', 'department_id', 'statut'],
        'survey.user_input': ['reach'],
        'hr_evaluation.evaluation': ['create_date', 'plan_id', 'state'],
    }

The problem is there is no data in the SQL table created by Odoo: Image showing the sql request result

2.
hr_evaluation_report_extension_view.xml

On the view side I put the name_related (employee name) field on the rows of the graph tag, create_date (employee evaluation create date) on the columns, and the field reach as the measure. Besides that, I created the search view for filters and group-bys.

<?xml version="1.0" encoding="utf-8"?>
<openerp>
  <data>
    <record id="view_goals_reaching_graph" model="ir.ui.view">
      <field name="name">hr.goals.reaching</field>
      <field name="model">hr.evaluation.report.goals</field>
      <field name="arch" type="xml">
        <graph string="Goals reaching analysis" type="pivot" stacked="True">
          <field name="name_related" type="row"/>
          <field name="create_date" type="col"/>
          <field name="reach" type="measure"/>
        </graph>
      </field>
    </record>
    <record id="view_goals_reaching_search" model="ir.ui.view">
      <field name="name">hr.goals.search</field>
      <field name="model">hr.evaluation.report.goals</field>
      <field name="arch" type="xml">
        <search string="Pourcentage atteinte globale des objectifs">
          <filter string="En cours" domain="[('state', '=' ,'wait')]" help="In progress Evaluations"/>
          <filter string="Attente appréciation DRH" domain="[('state','=','progress')]" help="Final Validation Evaluations"/>
          <filter string="Terminé" domain="[('state','=','done')]"/>
          <field name="name_related"/>
          <field name="plan_id"/>
          <group expand="0" string="Extended Filters...">
            <field name="state"/>
            <field name="create_date"/>
            <field name="department_id"/>
            <field name="job_id"/>
            <field name="statut"/>
          </group>
          <group expand="1" string="Group By">
            <filter string="Employé" name="employee" context="{'group_by':'name_related'}"/>
            <filter string="Campagne" context="{'group_by':'plan_id'}"/>
            <filter string="Direction" context="{'group_by':'department_id'}"/>
            <filter string="Fonction" context="{'group_by':'job_id'}"/>
            <filter string="statut" context="{'group_by':'statut'}"/>
            <separator/>
            <filter string="Month" context="{'group_by':'create_date:month'}" help="Mois de création"/>
          </group>
        </search>
      </field>
    </record>
    <record id="action_evaluation_goals_reaching" model="ir.actions.act_window">
      <field name="name">Atteinte des objectifs</field>
      <field name="res_model">hr.evaluation.report.goals</field>
      <field name="view_type">form</field>
      <field name="view_mode">graph</field>
      <field name="search_view_id" ref="view_goals_reaching_search"/>
    </record>
    <menuitem action="action_evaluation_goals_reaching" id="menu_evaluation_goals_reaching" parent="hr.menu_hr_reporting" sequence="3" groups="base.group_hr_manager"/>
  </data>
</openerp>

------------------------------------------------------------------- EDIT 2 -------------------------------------------------------------------------------------------------

Here is the report model:

class hr_evaluation_report_goals(osv.Model):
    _name = "hr.evaluation.report.goals"
    _description = "Goals reaching"
    _auto = False
    _columns = {
        'evaluation_plan_id': fields.many2one('hr_evaluation.plan', 'Appraisal Plan'),
    }
    _depends = {
        'hr.employee': ['id', 'name_related', 'job_id', 'department_id', 'statut', 'evaluation_plan_id'],
        'survey.user_input': ['reach'],
    }

Here is my init method:

def init(self, cr):
    tools.drop_view_if_exists(cr, 'hr_evaluation_report_goals')
    cr.execute("""
        create or replace view hr_evaluation_report_goals as (
            select
                min(e.id) as id,
                e.name_related,
                e.evaluation_plan_id,
                e.department_id,
                e.job_id,
                e.statut,
                s.reach
            from survey_user_input s
            INNER JOIN hr_employee e on (e.id = s.employee_id)
            GROUP BY e.name_related, e.evaluation_plan_id, e.department_id, e.job_id, e.statut
        )
    """)

I get this error:

ProgrammingError: column "s.reach" must appear in the GROUP BY clause or be used in an aggregate function
LINE 10: s.reach

How do I store my goals-reaching value in the SQL view, if it's not syntactically correct?

@Yassine TEIMI I will try to explain all your questions. First, that's not what's called dashboards or boards in Odoo; those are a group of several actions/views that are shown in columns for the same menu.
Your case is a graph view of the model hr.evaluation.report. That model will not create an SQL table due to _auto = False; it gets its data from an SQL view. That's why it needs to use the init method, which is called when installing or updating the module, to create the tables or views used by the model. The init method is only called if it's defined for the model, and it's needed if you define _auto = False, because that means Odoo will not create the table for you. The init method only receives the database cursor, and you can do whatever you need in that method. The statement tools.drop_view_if_exists is just a utility method that executes: cr.execute("DROP view IF EXISTS %s CASCADE" % (viewname,)), which could also be achieved in the view creation itself using "create or replace view". The purpose is to update the view whenever the module is updated.

In the view part, the measure select box will use all the fields defined as integer or float in the model. I've done some extensions to be able to use other kinds of fields to create a kind of cross-table using this type of graph pivot view, but that requires overriding the read_group method; by default Odoo only works with integer or float fields here.

The button sequence is part of the graph widget of Odoo; the first button group in the image is for the types of graph that you can switch to.

Hope this helps

***Update for completeness***

Two more details:

When Odoo creates the table for you (_auto = True by default) and you're not using an SQL view, you need to insert the data yourself in whatever way you prefer: directly inserting records via the UI form, SQL or ETL operations, Python model create calls, XML-RPC calls to the model create method, etc.

When you set _auto = False and create an SQL view to use as the model table, you need to return a unique id field value from your SQL view query; it's mandatory, since Odoo will perform searches for those ids before selecting the real data.
Thanks a lot for your explanations, they're helpful. Now I get the picture. I'll try to develop my extension and post the result here. I have one more question: when Odoo creates the SQL table for you, does it require anything in order to fill the created SQL table with data? When _auto is set to False, you perform the SQL request via the init method; but when it's not set to False, what does it require to get data correctly into the SQL table created in the database by Odoo? This is my main question. I'll add all the details in my question EDIT. Thanks Axel.

That was something I thought of writing to you but didn't. I posted an update explaining that and another detail that I had just left out.

Thanks a lot, I get the picture now. I think it will be too painful for me to automate data creation in the "real" SQL table created by Odoo, so I think I'll use _auto=False and the init method. What do you think is the best solution in my case?

Normally you should base your report on the model that contains the most low-level data, so you can navigate to the upper levels through many2one fields. For example, if you want to make a report on product sales, you need to base your report on the sale.order.line model, because it contains the low-level sale data and lets you use product_id, sale_id, or sale_id.partner_id to reach the data you need through many2one relations. If you don't have a low-level data model with the relations needed to access the data you want to display, that's a perfect scenario for creating an SQL view to retrieve all the data, with the special concern about the id field I mentioned in the answer.

Okay Axel, thanks for the explanations. I used the init method, and I got an SQL error; I updated my question (see EDIT 2) with the error details. Thanks.
You don't need an SQL view for your report; you could directly use the model survey.user_input, which contains all the data you need. Just add the necessary related fields, and you can delete the model hr.evaluation.report.goals that you are creating. That's the scenario I described before, where you can access all the data to display in the report by using a low-level model that has the many2one relations to fit your needs.

Ah okay! So I have to add all the relational fields from the hr.employee model to the survey.user_input model (the one I should base my report on), and override the survey.user_input create method to insert data into the relational field columns, and that's all; I'd then have all the necessary data on survey.user_input to base my report on. Correct me if I'm wrong please. Gracias!

I meant related fields, not relational fields...

Axel, for testing purposes I used the SQL request and it worked. I know there is no need to do so; using just the survey.user_input model with related fields from hr.employee would be enough (I'll wait to complete my discussion with you about that). So it worked, but I need to make some little adjustments, details here: Thanks a lot Axel for your help.

Looking at your query, you just need to have the employee_id inserted to be able to access the data from hr.employee; the rest of the data can be accessed using related fields that don't need to be inserted by you
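For reference, the ProgrammingError quoted above is standard SQL behavior rather than something Odoo-specific: every selected column must either appear in the GROUP BY clause or be wrapped in an aggregate. A minimal, self-contained sketch of the aggregated form (using SQLite and made-up data purely for illustration; table and column names mimic the thread):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE survey_user_input (employee_id INT, reach REAL)")
conn.executemany(
    "INSERT INTO survey_user_input VALUES (?, ?)",
    [(1, 50.0), (1, 70.0), (2, 90.0)],
)

# Selecting the raw `reach` column next to GROUP BY is what PostgreSQL
# rejects; aggregating it (AVG, SUM, MIN, ...) is the usual fix:
rows = conn.execute(
    "SELECT employee_id, AVG(reach) FROM survey_user_input "
    "GROUP BY employee_id ORDER BY employee_id"
).fetchall()
print(rows)  # [(1, 60.0), (2, 90.0)]
```

Applied to the init method in the question, that would mean selecting something like avg(s.reach) instead of the bare s.reach, or adding s.reach to the GROUP BY list, depending on the semantics wanted.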
https://www.odoo.com/forum/help-1/question/i-need-to-know-more-about-the-odoo-8-reporting-system-dashboards-94455
django-form-utils

django-form-utils provides a variety of small template filters that are useful for giving template authors more control over custom rendering of forms without needing to edit Python code: label, value_text, selected_values, optional, is_checkbox, and is_multiple.

Installation

Install from PyPI with easy_install or pip:

    pip install django-form-utils

django-form-utils is tested on Django 1.4 and later and Python 2.6, 2.7, and 3.3. It is known to be incompatible with Python 3.0, 3.1, and 3.2.

ImageWidget requires the Python Imaging Library. sorl-thumbnail or easy-thumbnails is optional, but without it full-size images will be displayed instead of thumbnails. The default thumbnail size is 200px x 200px.

If you set fieldsets on a BetterModelForm and don't set either the fields or exclude options on that form class, BetterModelForm will set fields to be the list of all fields present in your fieldsets definition. This avoids problems with forms that can't validate because not all fields are listed in a fieldset. If you manually set either fields or exclude, BetterModelForm assumes you know what you're doing and doesn't touch those definitions, even if they don't match the fields listed in your fieldsets.

A BetterForm or BetterModelForm will add a CSS class of "required" or "optional" automatically to the row_attrs of each BoundField depending on whether the field is required, and will also add a CSS class of "error" if the field has errors.
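The fields-from-fieldsets inference described above can be sketched in plain Python. This is an illustrative approximation of the documented behavior, not the library's actual implementation; the fieldsets follow the admin-style convention of (name, options) pairs:

```python
def fields_from_fieldsets(fieldsets):
    """Collect, in order, every field named in a fieldsets definition."""
    fields = []
    for name, options in fieldsets:
        for field in options.get("fields", ()):
            if field not in fields:
                fields.append(field)
    return fields

# Hypothetical fieldsets definition, for illustration only.
fieldsets = [
    ("main", {"fields": ["name", "email"], "legend": ""}),
    ("extra", {"fields": ["interests"], "classes": ["collapse"]}),
]
print(fields_from_fieldsets(fieldsets))  # ['name', 'email', 'interests']
```

In other words, a form that declares only fieldsets still ends up with a complete fields list, so validation covers every declared field.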
Rendering

A possible template for rendering a BetterForm:

{% if form.non_field_errors %}{{ form.non_field_errors }}{% endif %}
{% for fieldset in form.fieldsets %}
  <fieldset class="{{ fieldset.classes }}">
    {% if fieldset.legend %}
      <legend>{{ fieldset.legend }}</legend>
    {% endif %}
    {% if fieldset.description %}
      <p class="description">{{ fieldset.description }}</p>
    {% endif %}
    <ul>
      {% for field in fieldset %}
        {% if field.is_hidden %}
          {{ field }}
        {% else %}
          <li{{ field.row_attrs }}>
            {{ field.errors }}
            {{ field.label_tag }}
            {{ field }}
          </li>
        {% endif %}
      {% endfor %}
    </ul>
  </fieldset>
{% endfor %}

One can also access the fieldset directly if any special casing needs to be done, e.g.:

{% for field in form.fieldsets.main %}
...
{% endfor %}

django-form-utils also provides a convenience template filter, render. It is used like this:

{% load form_utils %}
{{ form|render }}

JQUERY_URL

AutoResizeTextarea requires the jQuery Javascript library. By default, django-form-utils links to the most recent minor version of jQuery 1.8. Note that a relative JQUERY_URL is relative to STATIC_URL.
https://bitbucket.org/carljm/django-form-utils/src
I’ve inherited a Django development project and am having A LOT of trouble getting it to work on my local Ubuntu 18.04 machine. The project was started in 2013 and has gone through bursts of development by various people. I need to copy it to my local machine before I start cleaning it up, upgrading it, and deleting all the old SQL dumps that seem to be scattered everywhere, amongst other obvious clutter. I need the working site as a reference, so I don’t want to start messing around with it on the current server. I am pretty new to Django and only recently completed one of those 4-hour YouTube tutorials. I think I could probably start coding a fresh site and would have some idea about deploying it onto a web server, but getting an existing site with out-of-date software to run on my local machine is a whole other story. I am trying to decide on the best way forward… - Do I keep trying to get it working on my local machine by installing old software and then upgrade it (a large amount of learning about old systems, and the result might just be an old and out-of-date instance of the project)? OR - Do I start from zero on a fresh install of Django and then rebuild it using the existing site only as a reference (it wouldn’t instantly be working and I might have to learn about some things I would otherwise not have to touch, but at least I’d end up with something that works and is up to date)? OR - Is there a simple fix that would get the existing site to run in a new virtual environment (start with what exists and modify it as necessary)? Option 3 is my ideal if it's possible. The project is hosted on Digital Ocean. ** Overall System Specs OS = Ubuntu 14.04.1 LTS Vhosts folder contains four folders - development, staging, production and a folder named after one of the previous developers. All vhosts appear to be accessible as sub-domains. I have copied all the files down from the www folder.
Python 2.7.6 - Current is 3.8.5 PostgreSQL 9.3.4 - Current is 12.3 Virtualenv 1.11.4 - Current is 20.0.28 ** Sub-domains/folders in the vhosts folder Development appears to be the most current version of the project. All others are so far behind as to not really be considered of value. There are two virtual environment folders inside the development folder. One of them appears to be a copy of the same virtual environment that is in all other vhost folders. I can’t get “pip freeze” or “pip list” to work on them so not sure what is going on there. The other is likely to be the one most recently used. I couldn’t get “pip freeze” to work on it but I could get “pip list” to work. Most relevant specs listed below. ** Virtual Environment specs Python 2.7.6 - Current is 3.8.5 Django 1.7 - Current is 3.0.8 psycopg2 2.4.5 - Current is 2.8.5 pip 1.5.4 - Current is 20.2 ** Database Structure There appears to be three databases named with a sequence similar to “Filename” “Filename1” and “Filename2” I have downloaded them all one at a time using the command “pg_dump -U <source_database_name> -f <destination_filename>.sql” Through a few headaches I managed to get the database to upload to my local Postgre installation. I modified the settings.py file in the django project to be able to access the database. When attempting to run the development server, the errors appeared to switch after this happened. Funnily enough, the database name in the settings.py file on Digital Ocean does not match any of the names of the databases yet the system runs. The username and password for the database do match. I don’t know what to make of that except to say that I had to create a database of same name to the one on Digital Ocean just to get the database dump to upload onto my local Postgre. ** Where the troubles are I have activated the virtual environment that is with the project files. I have also tried creating a new virtual environment. 
Either way, when I try “python manage.py runserver” I seem to be getting the following error:

django.core.exceptions.ImproperlyConfigured: Error loading psycopg2 module: No module named psycopg2

When I try installing psycopg2 I get the following error:

Defaulting to user installation because normal site-packages is not writeable
Requirement already satisfied: psycopg2 in /usr/lib/python3/dist-packages (2.8.5)

I’ve checked the site-packages folders in both the new and old virtual environment paths and they both seem to contain a psycopg2 folder. On my local Ubuntu 18 system I appear to have folders for Python 2.7, 3, 3.6, 3.7 and 3.8 in the “/usr/lib/” folder. I’ve checked for “dist-packages” folders in all of the Python version folders, and a psycopg2 folder only exists in 3. Just as a test I tried: “python3 manage.py runserver” and got the following error:

Traceback (most recent call last): File “manage.py”, line 8, in from django.core.management import execute_from_command_line ModuleNotFoundError: No module named ‘django’

I should point out that I can get all newly created Django projects to run with PostgreSQL as the defined database, and their servers run and serve the default page to a browser. It’s just this imported site I am having a lot of trouble with. If anyone can offer a solution it would be greatly appreciated.
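One quick diagnostic for the kind of mismatch described above (pip reporting psycopg2 under /usr/lib/python3/dist-packages while the project's virtualenv cannot import it) is to ask the interpreter itself where it lives and whether it is running inside a virtual environment:

```python
import sys

print("interpreter:", sys.executable)
print("prefix:", sys.prefix)

# In a venv/virtualenv, sys.prefix points inside the environment while
# sys.base_prefix (or real_prefix, for old virtualenv releases) points
# at the system Python.
base = getattr(sys, "base_prefix", sys.prefix)
in_venv = (sys.prefix != base) or hasattr(sys, "real_prefix")
print("running inside a virtual environment:", in_venv)
```

Running this with each of `python`, `python3`, and the venv's interpreter usually makes it obvious which environment a given `pip install` actually landed in.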
https://forum.djangoproject.com/t/copying-old-site-to-localhost-isnt-working/3737
[SOLVED] Scope of variable QML\JS

Hi everybody. I am from Russia and I don't know English well, so please don't be too hard on me for that. I am also just starting to learn QML, so please excuse the stupid questions. I have a main.qml page which contains just one Rectangle element and an Image element, which is used as a close button -

@
import QtQuick 1.1
import com.nokia.symbian 1.1
import "lib.js" as LibJs

Rectangle{
.....
Component.onCompleted: LibJs.initView("MainMenu.qml")
....
}
@

MainMenu.qml contains -

@
import QtQuick 1.1
import com.nokia.symbian 1.1
import "lib.js" as LibJs

Rectangle {
    height: 300
    anchors.verticalCenterOffset: 0
    anchors.verticalCenter: parent.verticalCenter
    rotation: 90
    Button {
        id: button1
        height: 40
        text: "Начать"
        anchors.horizontalCenter: parent.horizontalCenter
        onClicked: LibJs.initView("DeckMemory.qml")
    }
}
@

The file lib.js, which is imported into main.qml, contains this code -

@
var currentView;
function initView(file){
    try{
        currentView.destroy();
    }catch(e){}
    var c = Qt.createComponent(file);
    currentView = c.createObject(window,{});
}
@

Everything works OK except one thing: the variable currentView is undefined when I click the Button in MainMenu.qml. When I click it the next time it is defined and contains a Rectangle object, but that is not what I want, because the first loaded object isn't destroyed and I still see the menu when the game has started. Tell me if you need more information to understand my problem; as I said, I'm new to QML and I gave the info which I think is useful. I have a guess that it could be because lib.js is imported in all the files I use. But if that is so, how can I use one import, just in main.qml? If I import lib.js just in main.qml, functions which I call in other files do not work...
I'm a bit confused about what the underlying question is, but here is some information which I hope will help:

If you are importing the same js file from multiple QML files and want it to be "shared", you need to specify (at the top of the js file) ".pragma library". That will ensure that it has a single, shared context which is used no matter which QML file it is imported from. Note that a library js cannot access symbols from parent contexts, so "currentView" (whatever that is) might not be visible / available. You'll have to pass currentView as a function parameter instead.

Depending on what "currentView" is, there may be parenting / ownership issues. Is it a property of another QML element, or what? What is "window" - is it the id of another element?

Cheers, Chris.

chriadam - Thanks for this information, but I can't figure out how to use it to resolve my problem. If I use ".pragma library" as you said, I can't use qml objects in my js code. The problem now is that I don't understand how I can pass "currentView" as a function parameter. "currentView" must be a qml object which doesn't even have an id, because I can't set an "id" with the createObject() method. So the question is: how can I access a dynamically created qml object, which hasn't got an "id", from a qml document?

- "currentView" - a js variable which contains a qml object (I'm trying to create a simple game, so currentView contains things like the game menu, game field, results screen, etc.; it is not an array, so each time it contains one of these things)
- "window" - the id of the PageStackWindow element located in main.qml (in my example code in the first post I didn't show it, because I thought it was not important)

Maybe I should post all my code here, since it's really difficult to explain my problem well in a language which is foreign to me.

P.S. Maybe there is a better way to manage templates of the application; if there is, please give me a link to the material.
Before c++\qt(qml) I was working with PHP\JS\HTML\CSS for about two years, and it's difficult for me to understand how I can manage templates without an "include" function. As I thought, there is a very simple and convenient way to do template management in qml: using the PageStack element gives me all the freedom I needed. I can imagine how bad my English is, so I didn't expect a quick answer. Thanks for your help chriadam, your comment was helpful for me in the process of understanding QML.
https://forum.qt.io/topic/15112/solved-scope-of-variable-qml-js
The official blog for Windows Server Essentials and Small Business Server support and product group communications.

[Today's post comes to us courtesy of Wayne McIntyre]

Under certain circumstances, you might be unable to manage DHCP and get a screen like the one shown below. Among a number of causes, this could be the result of having a record for the server's IP address in your hosts file which resolves to a name that does not exist in Active Directory. Since DHCP does a reverse lookup on the binding IP (your internal server's IP) to discover the computer object in AD to authorize it for DHCP services, the return of an invalid hostname will cause the authorization to fail.

You can verify this by performing a ping test as follows, to confirm the bad resolution of the server's IP to its name. In this example, we will assume that the server's internal IP is 192.168.16.2 and the server name is SERVER.CONTOSO.LOCAL.

C:\Windows\System32\drivers\etc>ping -a 192.168.16.2
Pinging badhosts.record.com [192.168.16.2] with 32 bytes of data:

As you can see from the output, the reverse name resolution for the server's IP does not match the server's name and thus won't match the AD object.

Resolution: To resolve this, simply delete the record out of your hosts file, which is found in %windir%\system32\drivers\etc. The default hosts file would look like this:

#

If you need to create an alternate record for your server's local IP, we recommend that you first read about how SBS 2008 automatically creates DNS records for internal name resolution using your external namespace when you run the Internet Address Management Wizard, as this will most likely cover most of the scenarios. Furthermore, you can also access an SBS 2008 server using the host name "sites" (I.E.: \\sites or). For more information on the IAMW and how it configures the SBS server, please check the following links:

thank you
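The faulty condition described above can also be checked mechanically. Purely as an illustration (this is not a Microsoft tool; the function name is made up), a small script can scan hosts-file text for entries that map the server's IP to a name other than the expected one:

```javascript
// Illustrative sketch: find hosts-file entries that resolve the server's IP
// to an unexpected name -- the condition that breaks DHCP authorization.
function findBadHostRecords(hostsText, serverIp, expectedName) {
    var bad = [];
    hostsText.split(/\r?\n/).forEach(function (line) {
        var entry = line.split('#')[0].trim();   // strip comments and blanks
        if (!entry) return;
        var fields = entry.split(/\s+/);
        var ip = fields[0], names = fields.slice(1);
        if (ip === serverIp) {
            names.forEach(function (n) {
                if (n.toLowerCase() !== expectedName.toLowerCase())
                    bad.push(n);                 // a record that would mis-resolve
            });
        }
    });
    return bad;
}
```

Any name this returns is a candidate record to delete from the hosts file, per the resolution above.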
http://blogs.technet.com/b/sbs/archive/2009/08/20/unable-to-manage-dhcp-when-an-invalid-name-record-exists-in-your-hosts-file.aspx
I realize this is not completely an RE activity and want to get your feedback on how/where to communicate about the other parts. This is based on a long-time request from the l10n team: they need to translate info.xml for uc modules (not sure if they need to do it for all modules), but info.xml is not in l10n kits, so they have to dig the info.xml of each module they need to translate out of the large nbms file, and then send the translated file to RE by mail.

REQUEST:
1. Have the info.xml in nb cvs so it can be added to the l10n.list of each applicable module (or l10n.list.uc); then it will be in the regular or uc l10n kit.
2. They will putback the info.xml to translatedfiles/src, so that RE can use that info.xml when building the ml nbms.

QUESTION: for #1, is it technically possible for the info.xml to go to nb cvs? If so, then I think that dev needs to add it to l10n.list or l10n.list.uc. BTW, where do the info.xml files live? Are they in a cvs, or are they dynamically created from a template when a given nbm is built?

ALTERNATE IDEA? There is one large nbm file in the builds. If one had a list of the names of each nbm that needed its info.xml localized, a script could extract the info.xml of each one to a directory. But there would be a namespace collision, since every info.xml lives at Info/info.xml. Perhaps the directory name could be the basename of each nbm, so that each info.xml would go to a unique location. Then these files could be provided as a special kit, putback to translatedfiles in the same structure, and used for building the ml nbms?

Please allow me to add a comment. I'm always delivering translated info.xml for UC releases, but I'm not doing the translation, because all the strings in that xml are contained in the message resource file Bundle.properties. What I always do is:
1. Pick up the appropriate strings from Bundle_ja.properties (or Bundle_$locale.properties for other locales)
2. Replace the English strings with the above strings
3. (license information is English in recent releases)

I don't think we have to use CVS and the l10n kit for info.xml localization. The English info.xml is always generated by the MakeNBM ant task in the build script, so I would like to request that localized info.xml files (e.g. info_ja.xml, info_zh_CN.xml) be generated by the MakeNBM ant task the same way as the English xml. Please see another issue for the autoupdate: since autoupdate has been fixed to support localized info.xml, I guess the following tasks can be automated:
- Generate localized info.xml
- Re-package NBMs with the localized info.xml
Would you give us your thoughts?

I filed this based on feedback over the years from various members of the translation team, as well as recent feedback from your project management. Can I ask that the translation team discuss among yourselves whether this is still something that is wanted (info.xml in kits and the related processes)? It is not about anything else. Please let me know in some separate mails, since the base team should not spend time doing this if it's not important to have; it was communicated to me in the past that it was important to have. ken.frank@sun.com

As per keiichio, all the information that is required for building an info.xml is already part of the workspace and also part of the l10n kit. Therefore, the repository/kit issue, which used to exist, seems to have resolved itself over time. There are still two issues that need to be addressed:
- Even though all the individual pieces of data are available, localized info.xml files themselves are still not prepared by the build script. This forces the l10n teams to prepare them manually and to store the generated info.xml files on their servers for later reference. As keiichio has mentioned, this problem can be solved if MakeNBM is modified to produce localized info.xml automatically.
- Once the localized info.xml files are present, the question is how the nbms should be produced:
1. A single nbm can be produced with all the info.xml files. Since the au client now supports multiple info.xml files in a single nbm, this should work.
2. Prepare a separate nbm for each locale with code+lang_bundles+lang_info_xml.
3. Prepare a separate nbm for each locale but only with lang_bundles+lang_info_xml. Then we can use the process outlined in.
I think we should file a separate RFE to discuss both of the above issues. Pl. inform if you agree and I will file a new RFE.

I don't recall seeing info.xml in l10n kits and I don't recall seeing them in l10n.lists, and thus localized nbms won't be built using the translated info.xml files put back into translatedfiles - and all of that is the context of this issue. I realize that getting the info.xml into the cvs and into the lists is an activity of developers, but this issue is about initiating that activity - not by filing separate issues on each module, but via a unified approach with dev team leadership. Can we keep discussion of the above items to this issue as filed? For the other things mentioned, I do suggest filing separate issue(s). ken.frank@sun.com

There is an alternate idea. First of all I have to say that the info.xml file is a *generated* file and thus is not supposed to be part of the L10N kit. Don't get sad, there's a solution. It's generated from some properties file. It's very likely that such a "Localizing-Bundle" is a regular part of the L10N kit and thus translated and available somewhere in translatedfiles/src. I've also heard there's a possibility to have Info/info_${locale}.xml inside of the NBM file. So the alternate idea is to utilize the information described above and generate ML NBMs with multiple translated info.xml files by default. Moving to the RE queue for further re-assignments.

Changing from DEFECT to ENHANCEMENT. Targeting Milestone 11 for now.

As an alternative to producing ML nbms with multiple jars and info.xml files, have you considered building individual ML nbms, each targeting a language, as described in ? Reassigning to me.
Checking in nbbuild/antsrc/org/netbeans/nbbuild/MakeNBM.java;
/cvs/nbbuild/antsrc/org/netbeans/nbbuild/MakeNBM.java,v <-- MakeNBM.java
new revision: 1.78; previous revision: 1.77
done

<makenbm> is used in the external build harness. Will this patch have any effect on existing build scripts? It is too large for me to follow what it is doing.

Jesse, it should be safe for external build harnesses. I'm sorry if the following description is cryptic; feel free to ask for further explanation. The change adds support for a <makenbm locales="${locales}"/> attribute, and the task now reads the OpenIDE-Module-Localizing-Bundle manifest value and tries to locate localized versions of that bundle in the localized module jarfile(s). For example, you can have a module jarfile "modules/org-foo-bar.jar" with localizing bundle "org/foo/bar/Bundle.properties" and some localized jarfiles in "modules/locale/org-foo-bar_${locale}.jar", which contain "org/foo/bar/Bundle_${locale}.properties" files (each localized jarfile can have one localized localizing-bundle file). These localized jarfiles are checked for the existence of such a localized localizing bundle and, if found, their values are used for the creation of a localized info.xml file at Info/locale/info_${locale}.xml. These localized info.xml files are then added to the NBM.
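For readers trying to follow the path conventions in that description, here is an illustrative sketch (a hypothetical helper, not the actual MakeNBM ant task code) of how, given a module jar, its localizing bundle, and a locale, the localized artifact names line up:

```javascript
// Hypothetical sketch of the MakeNBM lookup described above: for a module
// jar and its OpenIDE-Module-Localizing-Bundle, compute (relative to the
// "modules" directory) where the localized jar, the localized bundle inside
// it, and the generated localized info.xml entry would live.
function localizedPaths(moduleJar, localizingBundle, locale) {
    var base = moduleJar.replace(/\.jar$/, "");
    return {
        localizedJar:    "locale/" + base + "_" + locale + ".jar",
        localizedBundle: localizingBundle.replace(/\.properties$/,
                             "_" + locale + ".properties"),
        infoXmlEntry:    "Info/locale/info_" + locale + ".xml"
    };
}
```

For the worked example in the comment ("modules/org-foo-bar.jar", bundle "org/foo/bar/Bundle.properties", locale "ja") this yields "modules/locale/org-foo-bar_ja.jar", "org/foo/bar/Bundle_ja.properties", and the NBM entry "Info/locale/info_ja.xml".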
https://netbeans.org/bugzilla/show_bug.cgi?id=98893
AtlantaMDF Reviews
Books, tools, and other materials relevant to SQL Server.

…some SSAS - Expert Cube Development with Microsoft SQL Server 2008 Analysis Services (DBAMark)

Recently finished a cover-to-cover reading of Expert Cube Development with Microsoft SQL Server 2008 Analysis Services. Since then I've gone through a number of chapters several times as topics aligned themselves with work efforts. Looking at this book today, I noticed the covers are all curled, the pages dog-eared, and yellow highlighter is visible across multiple chapters. In my view of all things bibliothèque – that indicates value for my dollar.

Overall I would highly recommend this book to anyone with some SSAS project experience who wants to take it up a notch. Note - this is not a book for a beginner; there is not much in the way of introductory material, just a bunch of usable info around designing, tuning, securing, productionizing, and monitoring SSAS cubes in SQL 2008.

There are definitely some author opinions sprinkled across the book, and some of the designs give one pause. For instance, the chapter on calculations included a design for calculation dimensions that had me scratching my head a bit – particularly around how supportable (or explainable) it would be in my particular shop.

On the other hand, I think the value of chapter 8 alone - Query Performance Tuning, with its discussions of performance, partitioning, aggregation design, and MDX calculation performance - provided value to me beyond the purchase price of the book.

Is the book perfect – not! Is it the best SSAS cube development treatment I have read to date – absolutely!

SharePoint 2007 Developer's Guide to Business Data Catalog (wildcat)

SharePoint 2007 Developer's Guide to Business Data Catalog is an essential guide for anyone who wishes to become proficient in building complex SharePoint applications. This book starts off assuming that the reader is a complete beginner to BDC, and explains in detail what it is. Soon afterwards, the reader is taught what can be done with it and how.

The book is divided into 11 chapters. Chapter 1 explains what BDC is: a layer for defining heterogeneous data sources so that SharePoint is aware of them. The data sources can be any of: Microsoft SQL Server, Oracle, ODBC (Open Database Connectivity), and Web Services. Chapter 2 discusses the Application Definition File (you can think of it as a configuration file). The ADF is used to tell SharePoint to get the data from those different data sources. Chapters 3 through 7 examine the security that allows users to use different authentication methods within the BDC, and the out-of-the-box functionality of BDC that can be configured through the application definition file. I think the configuration is the responsibility of the SharePoint system administrator, but the lucid writing style of the book makes it easy to understand for most developers. Chapters 8 through 11 delve deep into customized solutions, which are really the job of developers. This book provides the namespaces, DLLs, and sample code to allow developers to follow along easily.

Most SharePoint books spend one or two chapters describing the BDC; this is the first book I have seen that provides a detailed analysis of this specific part of SharePoint. Users should be aware that the BDC is a component that can be used only on Microsoft Office SharePoint Server 2007 Enterprise Edition.

Reviewed by Henry

The Art of Unit Testing (Stu)

The Art of Unit Testing (Manning Press, 2009). Although many of the examples in the latter half of the book were a little over my head, I do appreciate the fact that Osherove builds to that point in a simple, easy-to-follow manner.

The first chapter was particularly useful to me, because the author lays out some very basic criteria for defining a unit test, which I'll paraphrase below:

- Is the test repeatable after a period of time (years, months, etc.)?
- Is the test portable? Can other team members run the same unit test?
- Is the test simple to run? Can it be run with the push of a button, and in just a few minutes?
- Is the test simple to build?

[…]

SQL Server 2008 Administration in Action (Stu)

Rod Colledge's book SQL Server 2008 Administration in Action is a great asset for most SQL Server DBAs; it covers a variety of issues in a simple-to-understand format. Let me preface this post by saying that while I am a DBA, I've been doing mostly development work for the last few years. However, armed with this book, I felt like I could easily dive back into administrative duties.

The book is broken up into three sections, each with several supporting chapters: Planning and Installation, Configuration, and Operations. Each chapter does a relatively deep dive into issues by starting with basic definitions, moving quickly through options and concepts, and then finally wrapping up with a bulleted list of best-practice considerations. If I had but one suggestion to make for the publishers, it would be that they consider consolidating all of the checklists into one document. What is probably the most amazing fact to me is that the book is very short (compared to some other administrative guides); it's only 440 pages, including the index. It's just well-written, straight to the point, and focuses on just the important stuff.

I also like the appendices at the end, especially the Top 25 WORST practices, and the basic schedule in Appendix B. Although it's probable that most DBAs will have much more complicated schedules than described herein, it's a useful template (especially for those "accidental DBAs"). Overall, I really liked this book; I think it will be very helpful for me in the future (especially as I begin to study for the 2008 certification exams).

Here's an interview with the author. Also, you can download a few sample chapters:

Sample chapter 4: Installing and upgrading SQL Server 2008
Sample chapter 10: Backup and recovery

Stu

UltraEdit Text Editor 15 (Stu)

Most SQL Server DBAs are used to dealing with data that comes in via some other method than a database connection; flat files, CSVs, and XML files are all too common. However, as databases get larger and XML becomes more prevalent, I don't think I'm alone in suggesting that Notepad doesn't really cut it as a text editor anymore.

Quick case in point: last week I was charged with importing a 32 MB XML file into a database so that we could use some of the values to update one of our older security scanning tools. Unfortunately, while the XML file was valid (e.g., all tags were closed), SQL Server was screaming about certain characters not being valid XML characters. Rendering the file in IE 7 killed IE. Wordpad would open the file, but 32 MB of text yields something along the lines of 300,000 characters; not exactly easy to scan and edit.

Enter UltraEdit. I downloaded the 45-day free trial, and went to work. SQL Server identified the line and character position of the invalid characters in the xml column in my scratch table; I opened the file in UltraEdit, used the Goto Line command (including the column number) and discovered the first invalid character: the trademark symbol, or ™. I edited it out, used the find-and-replace feature to find the rest of them and do the same, and uploaded the file to SQL Server again. Using XQuery on the newly imported contents gave me a new character position, so I repeated the process. After about 10 repeats (less than an hour of time), I had a clean XML file, and was off to the races.

UltraEdit was also very useful when querying the file; I could use the XML manager to explore and identify the nodes far better than the XML parser included in SQL Server Management Studio (which choked when trying to open the whole XML value). This was a great help when attempting to write valid XQuery statements to select particular nodes, attributes, and values from the database.

I realize that UltraEdit is more than an XML editor, and that there are probably better XML editors out there, but I was very pleased with how easy it was to use it for this particular project. I'm hoping that I'll have time to more fully explore its capabilities for authoring scripts, etc., in the future. The cost for a license is only $49.95, and it was well worth it to solve this particular problem.

How to become an Exceptional DBA (Second Edition) (Stu)

I downloaded this free ebook from RedGate and quickly perused it in a couple of hours; Brad McGehee lays out an interesting set of observations about what is necessary to transform from an average DBA to an exceptional one. To be honest, there weren't any hidden secrets here; most of the book deals with personal work habits (which you probably already have, or you wouldn't be interested in being an exceptional DBA).

There were a few gems here and there, like Brad's suggestions on how to become an MVP, as well as how to manage your career; however, even these were not necessarily new ideas, just general concepts that were slimmed down and located in a single easy-to-use guide. I don't regret reading this book, but at the same time, I didn't really walk away with any new insights either.

If you're a new DBA or are planning to transition to a DBA role, you may find his overview of the career path useful. Also, if you're just now beginning to expand your skillset, there is some basic information on how to do that. However, if you're already working long hours solving problems and you enjoy your job, this ebook may not be the best use of your time. You've probably already encountered these same ideas along the way; spend your time building a network of database associates instead.

Pros:
- FREE! Can't beat the price!
- Concise explanation of what makes a DBA "exceptional"
- Overview of community-building

Cons:
- Nothing really new under the sun

SQL in a Nutshell, 3rd Edition (Stu)

Kevin Kline's SQL in a Nutshell (like many of the technical books from O'Reilly) is one of those essential desktop reference books that every programmer should have access to; it covers several different flavors of SQL, including MySQL, Oracle, PostgreSQL, and SQL Server. The nice thing about this book is that it uses the 2003 ANSI SQL as the foundation, and then attempts to tie the various flavors back to that source. While this gives you a great overall picture of how the various database platforms interact with one another, it does make it a bit difficult to translate from one dialect to another. In other words, if you know how CHARINDEX works in SQL Server, it's difficult using the book to figure out a comparable function in MySQL.

Despite this limitation, this book does provide a very useful codex for the major dialects of SQL. It's worth having a copy if you are skilled on one platform and need to interact with another flavor of SQL.

3 of 5 stars.

Dynex Optical Mouse and Multimedia Keyboard Combo (Stu)

My wireless mouse and keyboard died last night. I replaced the batteries, and got nowhere with it; since I needed the keyboard and mouse for work today, and this was the THIRD wireless mouse to die in the last two years for me, I made a quick trip to my local Best Buy to buy a wired keyboard and mouse combo. I sit near my computer, so it made sense (why keep wasting batteries?).

Wired keyboard-and-mouse combos are hard to find; there was really only one option at the store, and it was the Dynex Optical Mouse and Multimedia Keyboard. I've never been a big fan of Dynex products (you'll find out why in a minute), but since I really had no choice, I decided to pick one up. It was cheap (you get what you pay for), and the display model looked OK. I bought it, and left it on my desk for installation this morning.

Install went fine (mostly); I plugged the keyboard and the mouse into the USB connections on my KVM switch, and I immediately had a basic keyboard and mouse functioning. However, this is a MULTIMEDIA keyboard and mouse combo; there are extra buttons on both, and I wanted to make them work. Since there was no install disk, I headed over to the Dynex web site to find drivers. I've done this before for other Dynex products (hence my distaste for the brand), so I thought it was going to be relatively easy.

Well, the combo wasn't listed as an option for download, so I looked for drivers for the individual components; the mouse was easy: DX-WMSE, and there's the driver right on the page. Download, unzip, install. Done. The keyboard? DX-WKBD. No driver on the page. Hmmm. Click the Drivers link on the bottom left of the page, and then the Input Devices drop-down. No listing for my keyboard; also, no listing for the mouse driver either. At this point, I begin swearing softly.

They have a support number, which I call, and I speak to a very nice representative named Mike, who proceeds to tell me that the keyboard doesn't require a driver. I tell him that the basic functions are working, but none of the multimedia keys work; he asks me to uninstall the keyboard and reinstall, and then finally utters that panacea of support technicians everywhere – REBOOT. I tried all of those, and got nowhere. I thanked him for his time, and hung up. Since I don't need the multimedia functions, I can make do until I need a replacement. When I do need a replacement, it probably won't be a Dynex.

PROS: Cheap keyboard and mouse combo. Basic functions work well.

CONS: No drivers available for the multimedia functionality [struck out in the edit below]. Other reviews seem to indicate that it does work out of the box, but I couldn't get it to work for me.

EDIT: I've revised my rating, because I determined that the USB switch on my KVM switch was interfering with the device discovery; when I plugged the keyboard directly into a PC, the functionality worked. I still think that there should be some sort of drivers associated with the keyboard, but it's unfair to give this a low rating because of that.

2 of 5 stars (revised to 4 of 5 stars)

SQL and Relational Theory: How to Write Accurate SQL Code (Stu)

[review paragraphs missing from the feed]

4 out of 5 stars.

Joe Celko's SQL Programming Style (DonW)

Just finished this book and it's really worth the read. You sometimes know that you are supposed to do things a certain way, or to avoid other ways. This gives you insight into why, and brings up things you didn't think of. Even if you don't agree with everything, it's still worthwhile to get a different perspective.
http://feeds.feedburner.com/AtlantaMDFReviews
Paiges

Overview

Paiges is an implementation of Wadler's "A Prettier Printer". The library is useful any time you find yourself generating text or source code where you'd like to control the length of lines (e.g. paragraph wrapping). The name Paiges is a reference to the Paige compositor and the fact that it helps you lay out pages.

Quick Start

Paiges supports Scala 2.11, 2.12, and 2.13. It supports both the JVM and JS platforms. To use Paiges in your own project, you can include this snippet in your build.sbt file:

    // use this snippet for the JVM
    libraryDependencies += "org.typelevel" %% "paiges-core" % "0.3.0"

    // use this snippet for JS, or cross-building
    libraryDependencies += "org.typelevel" %%% "paiges-core" % "0.3.0"

Paiges also provides types to work with Cats via the paiges-cats module:

    // use this snippet for the JVM
    libraryDependencies += "org.typelevel" %% "paiges-cats" % "0.3.0"

    // use this snippet for JS, or cross-building
    libraryDependencies += "org.typelevel" %%% "paiges-cats" % "0.3.0"

Description

This code is a direct port of the code in section 7 of this paper, with an attempt to be idiomatic in Scala while preserving the original code's properties, including laziness. This algorithm is optimal and bounded. From the paper:

    Say that a pretty printing algorithm is optimal if it chooses line breaks so as to avoid overflow whenever possible; say that it is bounded if it can make this choice after looking at no more than the next w characters, where w is the line width. Hughes notes that there is no algorithm to choose line breaks for his combinators that is optimal and bounded, while the layout algorithm presented here has both properties.

Some selling points of this code:

- Lazy, O(1) concatenation
- Competitive performance (e.g. 3-5x slower than mkString)
- Elegantly handles indentation
- Flexible line-wrapping strategies
- Functional cred ;)

Examples

Here's an example of using Paiges to generate the source code for a case class:

    import org.typelevel.paiges._

    /**
     * Produces a case class given a name and zero-or-more
     * field/type pairs.
     */
    def mkCaseClass(name: String, fields: (String, String)*): Doc = {
      val prefix = Doc.text("case class ") + Doc.text(name) + Doc.char('(')
      val suffix = Doc.char(')')
      val types = fields.map {
        case (k, v) => Doc.text(k) + Doc.char(':') + Doc.space + Doc.text(v)
      }
      val body = Doc.intercalate(Doc.char(',') + Doc.line, types)
      body.tightBracketBy(prefix, suffix)
    }

    val c = mkCaseClass(
      "Dog",
      "name" -> "String",
      "breed" -> "String",
      "height" -> "Int",
      "weight" -> "Int")

    c.render(80)
    // case class Dog(name: String, breed: String, height: Int, weight: Int)

    c.render(60)
    // case class Dog(
    //   name: String,
    //   breed: String,
    //   height: Int,
    //   weight: Int
    // )

For more examples, see the tutorial.

Benchmarks

The Paiges benchmarks are written against JMH. To run them, you'll want to use a command like this from SBT:

    benchmark/jmh:run -wi 5 -i 5 -f1 -t1 bench.PaigesBenchmark

By default the values reported are ops/ms (operations per millisecond), so higher numbers are better. The parameters used here are:

- -wi: the number of times to run during warmup
- -i: the number of times to benchmark
- -f: the number of processes to use during benchmarking
- -t: the number of threads to use during benchmarking

In other words, the example command line runs one thread in one process, with a relatively small number of warmups + runs (so that it will finish relatively quickly).

Organization

The current Paiges maintainers are:

People are expected to follow the Typelevel Code of Conduct when discussing Paiges on the GitHub page or other official venues. Concerns or issues can be sent to any of Paiges' maintainers, or to the Typelevel organization.

License

Paig.
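To make the "optimal and bounded" claim from the Description concrete, here is a from-scratch JavaScript sketch of the core Wadler idea (illustrative only; it ignores indentation and laziness, both of which Paiges does handle): a group renders flat if it fits in the remaining width, and the fit check never looks past the line width.

```javascript
// Minimal Wadler-style document algebra: text, a breakable line,
// concatenation, and grouping.
function text(s) { return { kind: "text", s: s }; }
var line = { kind: "line" };
function concat(a, b) { return { kind: "concat", a: a, b: b }; }
function group(d) { return { kind: "group", d: d }; }

// Flattening replaces every breakable line with a space.
function flatten(d) {
    switch (d.kind) {
        case "text":   return d;
        case "line":   return text(" ");
        case "concat": return concat(flatten(d.a), flatten(d.b));
        case "group":  return flatten(d.d);
    }
}

// Bounded lookahead: does this prefix (up to the next hard newline)
// fit in the remaining width w?
function fits(w, work) {
    work = work.slice();
    while (work.length && w >= 0) {
        var d = work.shift();
        if (d.kind === "text") w -= d.s.length;
        else if (d.kind === "line") return true;   // a break ends the line
        else if (d.kind === "concat") work.unshift(d.a, d.b);
        else work.unshift(flatten(d.d));
    }
    return w >= 0;
}

function render(doc, width) {
    var out = [], col = 0, work = [doc];
    while (work.length) {
        var d = work.shift();
        if (d.kind === "text") { out.push(d.s); col += d.s.length; }
        else if (d.kind === "line") { out.push("\n"); col = 0; }
        else if (d.kind === "concat") work.unshift(d.a, d.b);
        else { // group: use the flat form only if it fits on this line
            var flat = flatten(d.d);
            work.unshift(fits(width - col, [flat].concat(work)) ? flat : d.d);
        }
    }
    return out.join("");
}
```

For example, `group(concat(text("hello,"), concat(line, text("world"))))` renders as `hello, world` at width 80 and breaks into two lines at width 5.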
https://index.scala-lang.org/typelevel/paiges/paiges-cats/0.3.0?target=_2.13
Chris Chiappa <chris@chiappa.net> writes:

Hi Chris,

> Something in my configuration recently changed and tramp broke. The
> symptom was that the output getting retrieved from the remote system
> always seemed to have garbage control characters preceding it, ie:
>
> [?2004l"Linux 4.1.12-124.19.2.el7uek.x86_64"
>
> I eventually tracked it down to this in my .inputrc:
>
> set enable-bracketed-paste on
>
> It doesn't seem like there's an easy way from bash itself to turn off
> bracketed paste for "dumb" terminals, but I was able to do it from
> inputrc with:
>
> $if term=xterm
> set enable-bracketed-paste on
> $endif
>
> The emacswiki has some mention of zsh users having issues (ironically,
> one suggested workaround is to force tramp to use bash), but a mention
> of this might be useful. Or maybe there is a way to get emacs to
> interpret the escape sequences so as not to confuse tramp?

Tramp tries to do its best. However, since it doesn't know which shell
runs remotely, there's not much it can do in the general case. If Tramp
knows which shell will run on the remote side, it can be more specific.
For bash, it will add "-noediting -norc -noprofile" to the shell
invocation, which should fix your problem. See the variable
`tramp-sh-extra-args'.

In order to let Tramp know which shell to use, set the connection
property "remote-shell", as described in the Tramp manual. For example,
you could say

--8<---------------cut here---------------start------------->8---
(add-to-list 'tramp-connection-properties
             (list "/ssh:slc16ilg:" "remote-shell" "/usr/bin/bash"))
--8<---------------cut here---------------end--------------->8---

Best regards, Michael.
https://lists.gnu.org/r/tramp-devel/2021-03/msg00008.html
CC-MAIN-2021-43
refinedweb
270
61.77
CFD Online Discussion Forums - OpenFOAM Paraview & paraFoam - Postprocessing a specific boundary

christian - March 16, 2007 10:23
To postprocess my data I would need help to do the following:
1) I want to write the following to separate text files:
   i) The pressure data of each wall cell face of a specific boundary.
   ii) The coordinates of the wall faces in that boundary.
   iii) The area of each wall face.
2) I want to sum the wall pressure times face area of the specific boundary. How can I do this without using the text files created in (1)?
I'm a rookie using OpenFOAM. This means you can't be too clear when answering my questions. All answers, even if you answer only one of my questions, are very welcome.
Best regards, Christian

hjasak - March 16, 2007 10:42
Should be easy.
- make an object of type OFstream, giving it the file name
- pick out the patch you want to write (I will call its index patchID)
- write:

Before main(), add:

    #include "OFstream.H"

and at the point where you want to write:

    OFstream of("myFile.txt");
    of << p.boundaryField()[patchID] << endl;
    of << mesh.C().boundaryField()[patchID] << endl;
    of << mesh.Sf().boundaryField()[patchID] << endl;

If you want each into a separate file, make yourself multiple OFstream objects. For the sum, do:

    vector sumForce = sum(p.boundaryField()[patchID]*mesh.Sf().boundaryField()[patchID]);

Could it be easier?
Hrv

christian - March 16, 2007 11:19
Thank you very much. First of all I wonder where am I supposed to learn about how to do the things you just taught me? Now to some questions regarding your reply:
1) In which file am I supposed to insert the lines you suggested?
2) Let's talk about writing the wall face pressure data, wall face coordinates and size. How is each value connected to a specific wall face? Is there a column next to the data value column saying face no 2 of cell no 21345? I mean, I need to be able to keep track of which face area belongs to which face pressure. The values will of course be listed in the same order, but assume I want to pick a specific cell face by its number.
3) Why is the data type vector in "vector sumForce = sum(..."? The answer will be a real value. I haven't been using C++ before.
4) Assume instead I want to find data values of the cells of the boundary (not the faces). What command will I use then?
5) I've created a cut plane through my boundary where I do a contour plot. Now I simply want to step through my time steps and create an animation. How do you suggest me to do this? Should I save images and use some other software to create an animation (which?) or should I create an .mp2 animation right away? I want to run the animation on a Windows system.
Best regards, Christian

hjasak - March 16, 2007 13:55
...where am I supposed to learn...
The obvious answer is to look for examples in OpenFOAM, because I or other people must have done something similar already. If you need to be better at this, you can attend some training, come to the Workshop or (especially if you are working in a company using OF for commercial work) get some OpenFOAM support to help you produce quality results on time. Feel free to send me an E-mail if you want to talk about this. Anyway, the level of things you are looking for is still easy: so, play, try to guess, learn by example etc. It's not that hard, really.
How is each value connected...
In the example, I am writing the values for all boundary faces. Face centres, face areas and everything else will be ordered in the same manner. Thus the first pressure belongs to the first face and the first face centre - all the lists are ordered the same.
data type vector in "vector sumForce = sum(..."...
Physics, physics! Pressure is a scalar and it acts normal to the surface, right? Therefore, force is a vector because it depends on the orientation of the surface - you really *should* know this.
...find data values of the cells...
The cell values are in p.internalField(). If you ask the patch for faceCells(), it will give you a cell index next to each boundary face. You know the rest...
...and create an animation...
Create a series of images, and convert them into an animation using some package. I am using convert on Linux, which is a part of ImageMagick, and make an mpeg video.
Enough for now I think...
Hrv

christian - March 22, 2007 03:26
I see. My first question concerned writing pressure data, coordinates and area of specific boundary faces to a file. I was given the lines necessary to do this. Now, I'm wondering the following:
1) Assume I'm running simpleFoam. I want to find a steady-state solution. Will I then, using the lines I was given, write a file at every iteration or only after the final iteration? How can I control this to achieve both scenarios?
2) Assume I'm running turbFoam. I'm solving a transient case. Will I then, using the lines I was given, write a file at every time step or only after the final time step? How can I control this to achieve both scenarios?
Best regards, Christian

mattijs - March 22, 2007 03:39
You'll have to code it yourself.
1) $FOAM_SOLVERS/incompressible/simpleFoam/simpleFoam.C
2) $FOAM_SOLVERS/incompressible/turbFoam/turbFoam.C

Erik - March 30, 2007 06:12
Sorry the link didn't lead where I thought it would. Here is a thread I saved from the forum:

Continuity eq: div(U) = 0, and with source term: div(U) = S
Discretised momentum eq: A*U = H - grad(p) -> U = H/A - (1/A)*grad(p) = Ustar - (1/A)*grad(p)
H/A is the momentum-predictor value of U, i.e. Ustar. If we insert this in the continuity eq. it yields
div(U) = div(Ustar - (1/A)*grad(p)) = S -> div(Ustar) - laplacian(1/A, p) = S
or laplacian(1/A, p) = -S + div(phi)
where phi is Ustar evaluated on faces. Have a look at icoFoam and you will easily see where to insert S.
/Erik

christian - July 12, 2007 07:34
I'm running parallel and use p.boundaryField()[patchID] to access the pressure of each wall cell face of a certain boundary. I write this data to a file. However, it ends up in all processor directories, being zero in the partitions not containing my boundary. How do you suggest me to make the code write this data only if the boundary of interest is in the current partition (processor directory)?
Best regards, Christian Svensson

gschaider - July 12, 2007 09:22
Just a guess (haven't looked at the source). As far as I remember the probes-functionObject collects all the data on the first processor and writes it there. I guess that would be even more convenient for you. Have a look at that source, it might give you some hints.

suredross - June 10, 2008 06:26
hi all, i want to check that mass is conserved in my simulation, so i want to get the data for my inlet and outlet. can someone please help here.
cheers, davey

ngj - June 10, 2008 07:05
Hi Davey
Without trying it, I believe something like this should work:

    scalar checkMass(0.0);
    forAll(phi.boundaryField(), patchID)
    {
        checkMass += sum(phi.boundaryField()[patchID]);
    }
    Info << "checkMass: " << checkMass << endl;

Best regards, Niels

suredross - June 10, 2008 07:18
hi all, managed to fix it. sorry i was a bit impatient in scanning the forum! thanks anyway
davey
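A side note on the one-liner sum(p.boundaryField()[patchID]*mesh.Sf().boundaryField()[patchID]) discussed in the thread: the arithmetic is just a pressure-weighted sum of face area vectors, which is exactly why the result is a vector rather than a scalar. A toy illustration of that arithmetic in plain JavaScript, with made-up face values (this is not OpenFOAM code):

```javascript
// Pressure force on a patch: F = sum over faces of p_f * Sf_f,
// where Sf_f is the face area vector (its magnitude is the face area,
// its direction the outward face normal). p_f is a scalar per face,
// so the accumulated result is a vector.
function patchForce(p, Sf) {
  const F = [0, 0, 0];
  for (let f = 0; f < p.length; f++) {
    F[0] += p[f] * Sf[f][0];
    F[1] += p[f] * Sf[f][1];
    F[2] += p[f] * Sf[f][2];
  }
  return F;
}

// Made-up example: two faces facing +x, one face facing +y.
const p  = [2.0, 3.0, 1.0];                       // face pressures
const Sf = [[0.5, 0, 0], [0.5, 0, 0], [0, 1, 0]]; // face area vectors
console.log(patchForce(p, Sf)); // [ 2.5, 1, 0 ]
```

The same ordering argument from the thread applies here: pressures and area vectors are matched by position in their lists.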
https://www.cfd-online.com/Forums/openfoam-paraview/61140-postprocessing-specific-boundary-print.html
CC-MAIN-2017-13
refinedweb
1,393
75.1
Silverlight TV 46: What's Wrong with my WCF Service?
Posted: Sep 23, 2010 at 9:01 AM - 41,408 Views - 19 Comments

WCF is an integral part of the communication stack for Silverlight applications, but sometimes things go wrong, very wrong. How do you fix those issues? In this episode of Silverlight TV, Yavor Georgiev from the WCF and Silverlight team shows you how to identify the problems with your WCF services and how to fix them. He covers several topics including these:

I'm trying to connect to a service that gives me the error:

Warning 2 Custom tool warning: Contract 'Eval' is not compatible with Silverlight 4 because it contains one or more operations with SOAP Encoding (use='encoded'). Only the non-encoded operations will be generated.

Using Fiddler, it appears the service is set up on an Apache-Coyote/1.1 CJServer/1.1 server. Since the service is not created in my SL Web project, how and what file do I need to change on the server to make it compatible with my Silverlight project? Thanks.

Great video! Definitely helps locating the small issues that can often lead to hours of searching.

Hey Now John, Great Vid! Yavor Georgiev was good. Thx 4 the info, Catto

@shaggygi - encoded is not supported in Silverlight at this point. If you don't control the service, you could consider building an intermediary WCF service that serves as a relay between Silverlight and the third-party service. If you are interested in updates on WCF support in Silverlight, follow me on Twitter at @digthepony. Cheers, -Yavor

@yavorg: Thanks for the update. Hopefully there will be support for this in near future versions of SL.

Can we get the Silverlight debugging info link that was promised in the video?

You can also use a dot after localhost (e.g.) in order to be caught by Fiddler.
@jthompkins: yes, I am uploading those links now. My mistake!

Great video... just wanted to know where we can get the links that were mentioned in the video for additional resources, like seeing binary data in Fiddler etc... Thanks

@Jatin Mehta: I just updated the post above with those links. Sorry for the delay. Yavor and I are planning a follow-up episode to get into authentication. Any ideas of things you really want to see Yavor do?

Hi, Great video, and now I see the links here that I didn't on the original page. Thanks! Any chance of a show driving the creation of a WCF service with TDD? The complications of the Async behavior for using SL unit tests is something that I haven't seen a demo on, to actually do an integration test that is a vertical slice. Keep up the great work. These demos are really helpful to the dev community. regards, Bill

This was really interesting. It would be great if you could do a video at some point on how to handle and log errors on the service side, e.g. with custom behaviours. I've sort of got this working but had to use an attribute on the web service class, as I couldn't work out how to put it in the service configuration.

@jopapa I look forward to the authentication episode: I usually find this the hardest part of each new Silverlight project. It would be great to see some info on how to configure the service side for authentication and also some best practices for tying in authorisation. And what bits should happen service side versus what should happen client side. Keep up the good work, these videos are really useful.

John, you are doing great work on Silverlight TV. I have been working with RIA Services since the RC release, and especially I love the interaction of MVVM with Silverlight - it is awesome. John, please do some in-depth videos on MVVM with RIA Services if possible. Thanks. And one last thing I want to say: why don't you invite any VB person onto your show...
I'm sorry, I didn't quite catch how to do the 'one line exception handler'. When I copy the line into my project, I get the error "WebRequestCreator does not exist in the current context". There were some key strokes that I did not follow; can you please clarify that step! Thanks.

Hi John, I would like to add my thanks to many others for your work on Silverlight, which I have found so useful. WRT this video, I am using RIA Services and would like to use a custom Fault to inform a Silverlight client about a data concurrency error. However, the RIA Services domain service code seems to have a generic DomainServiceFault baked in to the generated code. Also, whatever kind of exception is raised in the WCF service, a DomainException is raised on the Silverlight client. Do you have any RIA Services specific advice for when we want custom faults? Thanks... Anthony

Hi John and Yavor, this really was an instructive and informative video. Finally I got Fiddler to cooperate with my localhost and/or the development server on any desired port. As a result of this I think I was able to find out why my service was not responding to requests. Fiddler says:

No Proxy-Authenticate Header is present
No WWW-Authenticate Header is present.

So far so good, but WTH do I do to get rid of that? Peter

@ken: Add the namespaces

using System.Net;
using System.Net.Browser;

A very late reply, but often there's someone looking for answers to these things afterwards.

Hi, the video is not working!
http://channel9.msdn.com/Shows/SilverlightTV/Silverlight-TV-46-Whats-Wrong-with-my-WCF-Service?format=smooth
CC-MAIN-2015-18
refinedweb
973
71.55
THE MYSTERIOUS MR. HOME

Strange Career of a Queer Young Man of Connecticut Who Puzzled Scientists and Amazed the People of Two Continents

By H. ADDINGTON BRUCE

"SO you've brought the devil to my house, have you?"

"No, no, no! It's not my fault."

With an angry gesture the woman, tall, large boned, harsh visaged, pushed back her chair and advanced threateningly toward the pale, anemic looking youth of seventeen who sat cowering at the far end of the breakfast table.

"You know this is your doing. Stop it at once!"

The other gazed helplessly about him, while from every side of the room came a volley of raps and knocks.

"It is not my doing," he muttered. "I can't help it."

"Begone then! Out of my sight!"

Left to herself and to silence (for with her nephew's departure the noise instantly ceased) she fell into gloomy meditation. She was an exceedingly ignorant, but a profoundly religious, woman. She had heard much of the celebrated Fox sisters, with tales of whose strange actions in the neighboring State of New York the countryside was then ringing, and recognized, or imagined she recognized, a striking similarity between their performances and the tumult of the last few minutes. It was her firm belief that the Fox girls were victims of demoniac influence, and no less surely did she deem it impossible to attribute the recent disturbance to human agency. Her nephew was not given to practical jokes; there had been nothing unusual in his manner; he had greeted her cheerily as usual, and quietly taken his seat. But with his coming, and she shuddered at the remembrance, the knockings had begun, penetrating, frightful. There could be only one explanation: the boy, however unwittingly, had placed himself in the power of the devil. What to do, however, she knew not, and fumed and fretted the entire morning, until at his reappearance at noon the knockings broke out again. Then her mind was quickly made up.

Called on the Ministers

"Look you!" she said to him. "We must rid you of the evil spirit in you. I shall have the ministers reason with you and pray for you, and that at once."

True to her word, she despatched a messenger to the three clergymen of the little Connecticut village in which she made her home, and all three promptly responded to her request. But their visits and their prayers proved of no avail. Indeed, the more they prayed the louder the knocks became; and presently, to their astonishment and dismay, the very furniture appeared bewitched, dancing and leaping as though alive. "Verily," said one to the irate aunt, "the boy is possessed of the devil."

To make matters worse, the neighbors, hearing of the weird occurrences, besieged the house day and night, their curiosity whetted by a report that, exactly as in the case of the Fox sisters, communications from the dead were being received through the knockings. Incredible as it seemed, this report had speedy confirmation. Before the week was out the lad told his aunt:

"Last night there came raps to me spelling words, and they brought me a message from the spirit of my mother."

"And what was the message?"

"My mother's spirit said to me, 'Daniel, fear not, child. God is with you, and who shall be against you? Seek to do good. Be truthful and truth loving, and you will prosper, my child. Yours is a glorious mission: you will convince the infidel, cure the sick, and console the weeping.'"

"A glorious mission!" mocked the aunt, her patience utterly exhausted. "A glorious mission, to lie and deceive, to plague and torment! Away, and darken my doors no more!"

"You mean this, aunty?"

"I mean it. Never shall it be said of me I gave shelter to a cohort of Satan or child of ..."

Home's Involuntary Start

In this way was Daniel Dunglas Home launched on the odd career that was to prove one of the most marvelous in the annals of spiritualism. But at the time there was no reason to suspect the remarkable achievements that were in store for him, and he seemed ill fitted for his task. Ever since his aunt had adopted him from Scotland, where he was born of obscure parents in 1833, he had led a life of complete seclusion, if not altogether cheerless, yet deadening, and he longed terribly for the chance to make his way in the world. His health was poor; his pockets were empty; he was thrown upon his own resources, and it seemed only too probable that failure would be his portion. But two things were in his favor. The first was his native determination and optimism; the second, the fame created by published reports of the events which had led to his expulsion from his aunt's house.

Already, though only a few days had passed since the knockings were first heard, the story had gained great publicity, greedily devoured by an ever widening circle willing to regard such happenings as evidence of the intervention of the dead in the affairs of the living. It was, it must be remembered, an era of widespread enthusiasm, the dawning period of spiritualism.

As soon, therefore, as it became known that young Home was at liberty to go where he would, invitations showered on him, one coming from a nearby town, and thither Home journeyed. It was determined that an attempt should be made to demonstrate his mediumship by the table tilting proofs then coming into vogue, and the result exceeded all expectations. According to an eyewitness, the table moved not only without physical contact, but on request turned itself up on one side and overcame a spectator's effort to prevent it. True, when this spectator held it with all his strength it did not move so freely as before.

Home's fame mounted apace. He traveled widely, holding seances at which, if contemporary accounts are to be believed, he displayed supernatural power far surpassing that of the other mediums at that time springing up throughout the country. On one occasion, we are told, the spirits indicated through him the whereabouts of a deed to a tract of land then in litigation; on another, they enabled him to cure an invalid for whom no hope was held; and time after time they conveyed to those in the seance room messages of import, besides vouchsafing physical phenomena of the greatest variety.

Remarkable was the fact that the young medium steadfastly refused to accept payment for his services.

[Drawing by Joseph Clement Coll]

"My gift," he would solemnly say, "is free to all, without money and without price. I have a mission to fulfil, and to its fulfilment I will cheerfully give my life."

Naturally this attitude of itself made for converts to the spiritualistic beliefs of which he was such an apt exponent, and its influence was powerfully reinforced by the result of an investigation conducted in the spring of 1852 by a committee headed by the poet William Cullen Bryant and the Harvard professor David G. Wells. Briefly, these men declared in their report that they had attended a seance with Home absolutely certain that they had not been "imposed upon or deceived."

A Tremendous Sensation

THE report, to be sure, did not specify what, if any, means had been taken to guard against fraud, its only reference in this connection being a statement that "Mr. D. D. Home frequently urged us to hold his hands and feet." But it none the less created a tremendous sensation, public attention being focused on the fact that an awkward, callow, country lad had successfully sustained the scrutiny of men of learning, intelligence, and high repute. No longer, it would seem, could there be doubt of the validity of his claims, and greater demands than ever were made on him. As before, he willingly responded, adding to his repertoire, if the term is permissible, new feats of the most startling character.

Thus, at a seance in New York a table on which a pencil, two candles, a tumbler, and some papers had been placed, tipped over at an angle of thirty degrees without disturbing in the slightest the position of the movable objects on its surface. Then at the medium's bidding the pencil was dislodged, rolling to the floor while the rest remained motionless; and afterward the tumbler.

A little later occurred the first of Home's levitations, when, at the house of a Mr. Cheney in South Manchester, Connecticut, he is said to have been lifted without visible means of support to the ceiling of the seance room. To quote from an eyewitness' narrative:

"Suddenly, and without any expectation on the part of the company, Mr. Home was taken up in the air. I had hold of his feet at the time, and I and others felt his feet; they were lifted a foot from the floor. ... Again and again he was taken from the floor, and the third time he was carried to the lofty ceiling of the apartment, with which his hand and head came in gentle contact."

A far cry, this, from the simple raps and knocks that had ushered in his mediumship. Now, however, an event occurred which threatened to cut short alike his "mission" and his life. Never in robust health, he fell seriously ill of an affection that developed into tuberculosis. The medical men whom he consulted unanimously declared that his only hope lay in a change of climate, and, taking alarm, his spiritualistic friends generously subscribed a large sum to enable him to visit Europe. Incidentally, no doubt, they expected him to serve as a missionary of the new faith. And it may be said at once that in this expectation they were not deceived. No one labored more earnestly and faithfully in behalf of spiritualism than did Daniel Dunglas Home. From the moment he set foot on the shores of England in April, 1855, no one in all the history of spiritualism achieved such individual renown, not in England alone but in almost every country of the Continent. It is from this point that the mystery of his career ...
http://chroniclingamerica.loc.gov/lccn/sn83030214/1908-07-05/ed-1/seq-31/ocr/
CC-MAIN-2014-49
refinedweb
1,758
82.65
Improving performance

Redirecting to cached data

import { toIdValue } from 'apollo-utilities';
import { InMemoryCache } from 'apollo-cache-inmemory';

const cache = new InMemoryCache({
  cacheRedirects: {
    Query: {
      book: (_, args) =>
        toIdValue(
          cache.config.dataIdFromObject({ __typename: 'Book', id: args.id })
        ),
    },
  },
});

Note: This'll also work with custom dataIdFromObject methods as long as you use the same one.

The same approach works for a query that takes a list of ids:

books: (_, args) =>
  args.ids.map(id =>
    toIdValue(cache.config.dataIdFromObject({ __typename: 'Book', id: id }))
  ),

Prefetching data

Prefetching is one of the easiest ways to make your application's UI feel a lot faster with Apollo Client. Prefetching simply means loading data into the cache before it needs to be rendered on the screen. Essentially, we want to load all data required for a view as soon as we can guess that a user will navigate to it. We can accomplish this in only a few lines of code by calling client.query whenever the user hovers over a link. Let's see this in action in the Feed component in our example app Pupstagram.

function Feed() {
  const { loading, error, data, client } = useQuery(GET_DOGS);

  let content;
  if (loading) {
    content = <Fetching />;
  } else if (error) {
    content = <Error />;
  } else {
    content = (
      <DogList
        data={data.dogs}
        renderRow={(type, data) => (
          <Link
            to={{
              pathname: `/${data.breed}/${data.id}`,
              state: { id: data.id }
            }}
            onMouseOver={() =>
              client.query({
                query: GET_DOG,
                variables: { breed: data.breed }
              })
            }
            style={{ textDecoration: "none" }}
          >
            <Dog {...data} url={data.displayImage} />
          </Link>
        )}
      />
    );
  }

  return (
    <View style={styles.container}>
      <Header />
      {content}
    </View>
  );
}

All we have to do is access the client in the render prop function and call client.query when the user hovers over the link. Once the user clicks on the link, the data will already be available in the Apollo cache, so the user won't see a loading state.

Query splitting

Prefetching is an easy way to make your application's UI feel faster.
You can use mouse events to predict the data that could be needed. This is powerful and works perfectly in the browser, but cannot be applied to a mobile device. One solution for improving the UI experience would be the usage of fragments to preload more data in a query, but loading huge amounts of data (that you probably never show to the user) is expensive. Another solution would be to split huge queries into two smaller queries:

- The first one could load data which is already in the store. This means that it can be displayed instantly.
- The second query could load data which is not in the store yet and must be fetched from the server first.

This solution gives you the benefit of not fetching too much data, as well as the possibility to show some part of the view's data before the server responds.

Let's say you have the following schema:

type Series {
  id: Int!
  title: String!
  description: String!
  episodes: [Episode]!
  cover: String!
}

type Episode {
  id: Int!
  title: String!
  cover: String!
}

type Query {
  series: [Series!]!
  oneSeries(id: Int): Series
}

And you have two views:

- Series Overview: List of all Series with their description and cover
- Series DetailView: Detail View of a Series with its description, cover and a list of episodes

The query for the Series Overview would look like the following:

query SeriesOverviewData {
  series {
    id
    title
    description
    cover
  }
}

The queries for the Series DetailView would look like this:

query SeriesDetailData($seriesId: Int!) {
  oneSeries(id: $seriesId) {
    id
    title
    description
    cover
  }
}

query SeriesEpisodes($seriesId: Int!) {
  oneSeries(id: $seriesId) {
    id
    episodes {
      id
      title
      cover
    }
  }
}

By adding a custom resolver for the oneSeries field (and having a dataIdFromObject function which normalizes the cache), the data can be resolved instantly from the store without a server round trip.
import { ApolloClient } from 'apollo-client';
import { toIdValue } from 'apollo-utilities';
import { InMemoryCache } from 'apollo-cache-inmemory';

const cache = new InMemoryCache({
  cacheResolvers: {
    Query: {
      oneSeries: (_, { id }) =>
        toIdValue(cache.config.dataIdFromObject({ __typename: 'Series', id })),
    },
  },
  dataIdFromObject,
});

const client = new ApolloClient({
  link, // your link
  cache,
});

A component for the second view that implements the two queries could look like this:

const QUERY_SERIES_DETAIL_VIEW = gql`
  query SeriesDetailData($seriesId: Int!) {
    oneSeries(id: $seriesId) {
      id
      title
      description
      cover
    }
  }
`;

const QUERY_SERIES_EPISODES = gql`
  query SeriesEpisodes($seriesId: Int!) {
    oneSeries(id: $seriesId) {
      id
      episodes {
        id
        title
        cover
      }
    }
  }
`;

function SeriesDetailView({ seriesId }) {
  const {
    loading: seriesLoading,
    data: { oneSeries }
  } = useQuery(QUERY_SERIES_DETAIL_VIEW, { variables: { seriesId } });
  const {
    loading: episodesLoading,
    data: { oneSeries: { episodes } = {} }
  } = useQuery(QUERY_SERIES_EPISODES, { variables: { seriesId } });

  return (
    <div>
      <h1>{seriesLoading ? `Loading...` : oneSeries.title}</h1>
      <img src={seriesLoading ? `/dummy.jpg` : oneSeries.cover} />
      <h2>Episodes</h2>
      <ul>
        {episodesLoading ? (
          <li>Loading...</li>
        ) : (
          episodes.map(episode => (
            <li key={episode.id}>
              <img src={episode.cover} />
              <a href={`/episode/${episode.id}`}>{episode.title}</a>
            </li>
          ))
        )}
      </ul>
    </div>
  );
}

Unfortunately, if the user now visits the second view without ever visiting the first view, this results in two network requests (since the data for the first query is not in the store yet). By using a BatchedHttpLink, those two queries can be sent to the server in one network request.
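The reason the custom resolver above avoids a round trip is that dataIdFromObject produces deterministic cache keys, so a query's arguments can be mapped straight to an existing cache entry. A stripped-down sketch of that idea in plain JavaScript (this is not Apollo's implementation; store, write, and readQuery are made-up names):

```javascript
// A tiny normalized cache: objects are stored under a key derived from
// __typename and id, and a "redirect" resolves a query's arguments to
// that key without touching the network.
const dataIdFromObject = obj => `${obj.__typename}:${obj.id}`;

const store = {};
function write(obj) { store[dataIdFromObject(obj)] = obj; }

// Redirect for Query.oneSeries: map the arguments straight to a cache key.
const redirects = {
  oneSeries: args => dataIdFromObject({ __typename: 'Series', id: args.id }),
};

function readQuery(field, args) {
  const key = redirects[field](args);
  return store[key]; // undefined means a server fetch is still needed
}

// Data written while resolving the SeriesOverviewData query...
write({ __typename: 'Series', id: 7, title: 'Some Show' });
// ...is instantly readable by the detail query, with no request:
console.log(readQuery('oneSeries', { id: 7 }).title); // "Some Show"
```

The key design point is that both the writer and the redirect must use the same dataIdFromObject, which is exactly the caveat the docs give above.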
https://www.apollographql.com/docs/react/performance/performance/
CC-MAIN-2020-24
refinedweb
798
55.03
Daydream interactive screen savers are a new feature in Android 4.2 (API Level 17). With Daydream, you can create a screen saver with animation, interaction, and just about anything else you would include in an Android Activity. Daydream screen savers are displayed while the user's device is charging. Users with devices running Jelly Bean 4.2 can select and configure a Daydream screen saver in their display settings.

In this tutorial we'll go through the required steps to create a Daydream screen saver for Android. The sample Daydream we'll create will display a simple animation featuring the Android robot image used as the default launcher icon. The user will be able to stop and start the animation by tapping the robots. The Daydream will display a grid with multiple instances of the rotating robot and a button the user can use to dismiss the Daydream, which will be placed randomly within the grid.

In your own projects, you can choose interactive screen saver elements that reuse the views you have in existing apps, or web views that provide links to your projects. The code in this tutorial is simply intended to familiarize you with the process of creating a functioning interactive Daydream, but these potential applications can enhance user engagement with any existing projects you may have.

Tip: Daydream is only available on devices running version 4.2 of Android, which is API Level 17. If you don't have a device running API 17, you'll only be able to test your Daydream apps on the emulator. As long as your Android development environment has version 17 installed, you should be able to create an AVD (Android Virtual Device) on which the Daydream will run. During development, you can create a test Activity in which you place the same content as your Daydream apps. The Eclipse and Android development resources are particularly useful for Activities.

1. Create a Project

Step 1

Create a new Android Project in Eclipse.
The app will have one class in it, which will be a Service. When you go through the process of creating your new Android project in Eclipse, select API Level 17 as both target and minimum SDK for your project. There's no need to let Eclipse create an Activity or layout file; you won't need one. You can, however, let it create a launcher icon. In the New Android Application "Configure Project" screen, uncheck "Create Activity" but leave "Create Custom Launcher Icon" checked. Choose the default Android robot icon; we are going to use it within the Daydream.

Step 2

Open your project Manifest file to configure the app as a Daydream. Include the following code within the Manifest application element.

<service android:name=".RobotDaydream"
    android:exported="true"
    android:label="@string/daydream_name">
    <intent-filter>
        <action android:name="android.service.dreams.DreamService" />
        <category android:name="android.intent.category.DEFAULT" />
    </intent-filter>
</service>

This allows your app to be listed in the Daydream options within the user's display settings. You can change the name of the service class "RobotDaydream" if you like, as long as you give your service class the same name when we create it. Add the specified label string to your application's "res/values/strings" file(s).

<string name="daydream_name">Spinning Androids</string>

This name will appear in the device display settings when selecting a Daydream.

Tip: The Manifest code above is complete for a functioning Daydream. However, you can opt to add a metadata element, linking to a resource where you can specify a settings Activity class for your app.

2. Define an Animation

Step 1

Before we get started on the Daydream Service class, we'll define an animation set resource. The little robot icons we display will rotate back and forth when the Daydream runs, so let's define this animation first. In your project "res" folder, create a new folder named "animator" and create a new file in it named "android_spin.xml". Inside the new XML file, define the animation.

<set xmlns:android="http://schemas.android.com/apk/res/android">
    <objectAnimator
        android:propertyName="rotation"
        android:valueFrom="0"
        android:valueTo="360"
        android:duration="5000"
        android:repeatCount="infinite"
        android:repeatMode="reverse" />
</set>

You can alter the animation properties if you wish.
This simply defines an animation that rotates the target image 360 degrees in one direction over five seconds and then back in the opposite direction, repeating continuously.

3. Create a Daydream Service

Step 1

In your project, create a new class in the main package. If Eclipse did not create a default package when you created the project, add one now by selecting the "src" folder, choosing File > New > Package, and entering your package name. Add your new class to the package, naming it "RobotDaydream" (or whatever you included in the Manifest service name attribute). Extend the opening line of the class declaration:

    public class RobotDaydream extends DreamService implements OnClickListener

You will need the following imports in the class for the remainder of the code in this tutorial:

    import java.util.Random;
    import android.animation.AnimatorInflater;
    import android.animation.AnimatorSet;
    import android.graphics.Color;
    import android.graphics.Point;
    import android.service.dreams.DreamService;
    import android.view.View;
    import android.view.View.OnClickListener;
    import android.widget.Button;
    import android.widget.GridLayout;
    import android.widget.ImageView;

Step 2

When you extend the DreamService class, there are a number of methods you can override to control the appearance and behavior of your screen saver. Inside your Service class, add the method outline for when the Daydream starts:

    @Override
    public void onDreamingStarted() {
        //daydream started
    }

Now add the method outline for when Daydreaming stops:

    @Override
    public void onDreamingStopped() {
        //daydream stopped
    }

Add the method outline for when your Daydream is initially attached. Here you'll include setup tasks for the screen saver:

    @Override
    public void onAttachedToWindow() {
        //setup daydream
    }

Any tidying up from setup belongs in the method called when the Daydream is detached, so add that next:

    @Override
    public void onDetachedFromWindow() {
        //tidy up
    }

Lastly, add the method outline for handling clicks.
Depending on their functionality, your Daydream apps may not always require all of these methods.

    public void onClick(View v) {
        //handle clicks
    }

We will be working with each of these methods.

4. Prepare for Daydreaming

Step 1

At the top of your Daydream Service class, add some instance variables that we'll use to implement the animation and interaction. First, add a button so that the user can stop the Daydream:

    private Button dismissBtn;

If you set your Daydream up to be interactive, which we'll do here, you'll need to implement a way to stop the Daydream yourself. Otherwise, the default behavior for a Daydream is to stop when the user touches the screen. We won't be using the default behavior because we want to allow the user to interact with the views in our Daydream, so we'll provide a button that dismisses the Daydream instead.

Next, add two arrays:

    private ImageView[] robotImgs;
    private AnimatorSet[] robotSets;

These will store the robot image views and the animator sets for them. We'll use constants to set the number of robots to display, based on the number of rows and columns we want in the grid:

    private final int ROWS_COLS = 5;
    private final int NUM_ROBOTS = ROWS_COLS * ROWS_COLS;

Finally, we'll use a random number to determine where the stop button will be placed in the grid:

    private int randPosn;

Step 2

The onAttachedToWindow method gives us the opportunity to carry out setup tasks for the Daydream. Call the superclass method inside the method:

    super.onAttachedToWindow();

Set the Daydream to be interactive and to occupy the full screen, hiding the status bar:

    setInteractive(true);
    setFullscreen(true);

Step 3

Get a random number to determine where the stop button will be:

    Random rand = new Random();
    randPosn = rand.nextInt(NUM_ROBOTS);

Create a grid layout for the Daydream, setting the number of rows and columns using the constant.
    GridLayout ddLayout = new GridLayout(this);
    ddLayout.setColumnCount(ROWS_COLS);
    ddLayout.setRowCount(ROWS_COLS);

Initialize the arrays.

    robotSets = new AnimatorSet[NUM_ROBOTS];
    robotImgs = new ImageView[NUM_ROBOTS];

Determine the width and height for each robot image in the grid.

    Point screenSize = new Point();
    getWindowManager().getDefaultDisplay().getSize(screenSize);
    int robotWidth = screenSize.x / ROWS_COLS;
    int robotHeight = screenSize.y / ROWS_COLS;

Step 4

Now we can loop through the arrays, adding the stop button and robots to the grid.

    for (int r = 0; r < NUM_ROBOTS; r++) {
        //add to grid
    }

Inside the loop, create some layout parameters using the width and height we calculated.

    GridLayout.LayoutParams ddP = new GridLayout.LayoutParams();
    ddP.width = robotWidth;
    ddP.height = robotHeight;

Check to make sure we are at the index allocated for the stop button.

    if (r == randPosn) {
        //stop button
    }
    else {
        //robot image view
    }

Inside the if block, create and add the stop button to the layout, setting display properties and listening for clicks.

    dismissBtn = new Button(this);
    dismissBtn.setText("stop");
    dismissBtn.setBackgroundColor(Color.WHITE);
    dismissBtn.setTextColor(Color.RED);
    dismissBtn.setOnClickListener(this);
    dismissBtn.setLayoutParams(ddP);
    ddLayout.addView(dismissBtn);

Inside the else block, create an ImageView and set the launcher icon as its drawable, adding it to the layout.

    robotImgs[r] = new ImageView(this);
    robotImgs[r].setImageResource(R.drawable.ic_launcher);
    ddLayout.addView(robotImgs[r], ddP);

Still inside the else block, create and add an animator set to the array, referencing the animation resource we created. You can alter the drawable image to suit one of your own.

    robotSets[r] = (AnimatorSet) AnimatorInflater.loadAnimator(this, R.animator.android_spin);

Set the current robot image as target for the animation and listen for clicks on it.
    robotSets[r].setTarget(robotImgs[r]);
    robotImgs[r].setOnClickListener(this);

After the for loop (but still inside the onAttachedToWindow method), set the content view to the layout we created and populated.

    setContentView(ddLayout);

5. Handle Clicks

Step 1

Inside your onClick method, find out whether a robot image view or the stop button was clicked.

    if (v instanceof Button && (Button) v == dismissBtn) {
        //stop button
    }
    else {
        //robot image
    }

Step 2

In the if block, finish the Daydream.

    this.finish();

You must implement this for any interactive Daydream you create (in which you call setInteractive(true)). The user will not be able to dismiss the screen saver the default way.

Step 3

Inside the else block, add a loop to iterate through the robot image views.

    for (int r = 0; r < NUM_ROBOTS; r++) {
        //check array
    }

Inside the loop, make sure that the index is not in the position designated for the stop button. If it is, the image view array entry will be empty.

    if (r != randPosn) {
        //check image view
    }

Inside the if block, find out if the current view is the one just clicked.

    if ((ImageView) v == robotImgs[r]) {
        //is the current view
    }

Inside this if, stop or start the animation depending on whether it's currently running.

    if (robotSets[r].isStarted())
        robotSets[r].cancel();
    else
        robotSets[r].start();

This lets the user turn the animation on and off for each robot image view. Now that we've responded to the click, break out of the loop.

    break;

6. Starting and Stopping Daydreaming

Step 1

In your onDreamingStarted method, call the superclass method.

    super.onDreamingStarted();

Next, loop through the animations and start them.

    for (int r = 0; r < NUM_ROBOTS; r++) {
        if (r != randPosn)
            robotSets[r].start();
    }

Step 2

In your onDreamingStopped method, stop the animations.

    for (int r = 0; r < NUM_ROBOTS; r++) {
        if (r != randPosn)
            robotSets[r].cancel();
    }

Now call the superclass method.
    super.onDreamingStopped();

Step 3

In the onDetachedFromWindow method, we can release anything we set up in onAttachedToWindow. In this case we'll just stop listening for clicks.

    for (int r = 0; r < NUM_ROBOTS; r++) {
        if (r != randPosn)
            robotImgs[r].setOnClickListener(null);
    }

Finally, call the superclass method.

    super.onDetachedFromWindow();

Conclusion

This completes the basic Daydream app! You can test yours on the emulator or on a real device running Jelly Bean 4.2. Browsing to the Settings > Display section of the device menu will let you select the Daydream from the available list. Once it's selected, you can choose "Start Now" to see it in action.

In your own projects, you can implement much the same interactive and informational functions you would in an Activity, so you can use Daydreams to provide engaging access points for your existing apps. Be aware, however, that if your screen saver uses too much of the available processing resources, the Android system will stop it from running so that the device can charge properly. Other options to explore in your Daydream apps include setting the screen brightness and providing a settings Activity for user configuration.
https://code.tutsplus.com/tutorials/android-sdk-create-an-interactive-screen-saver-with-daydream--mobile-16604
Hi, I'm fairly new to Blender and to creating a game. I am familiar with Python basics, but I need more information, which I can't find... Please help me and answer my questions with simple examples, thank you! I want to have everything driven by ONE sensor, "Always" -> Python...

- How do I add an Object at the "owner"?

    import bge

    cont = bge.logic.getCurrentController()
    scene = bge.logic.getCurrentScene()
    sensor = cont.sensors["sensor"].positive
    own = cont.owner

    if sensor:
        scene.addObject("Sphere", own)

It works, but says that the script is not subscriptable. I want to find out what the problem is... I know there would be no problem with the script:

    scene.addObject("Sphere", "Cube")

but I want the Cube to spawn it without an actuator. Where is the problem here? Thank you.

- How do I receive a Message with Python? I don't want to use the "Message" sensor and I don't want to use a Bool or anything else... I just want something like this in my scripts:

    if get.Message("a"):
        own["prop"] += 1

What would a working script look like? I can't find this anywhere... I need serious help... Thank you.

- How do I make a delay with Python? I simply want to have everything in Python instead of a sensor, something like:

    if own.delay(5):
        own["prop"] += 1

What would a working script look like? I can't find this anywhere... I need serious help... Thank you.

- How do I handle collisions with Python? Something like:

    if collision("OBCube"):
        own["prop"] += 1

- How do I recreate the Mouse and MouseOver sensors in Python? Something like:

    if MouseOver("OBCube") and MouseLeftButton:
        own["prop"] += 1

- How do I get random numbers with Python? I had this three years ago but lost it and can't find a simple working example... please remind me how to do this, without a sensor or actuator such as the "Int -> Uniform" option in Blender's logic bricks...

    import Random
    Rand(1, 5)  # random number from 1 to 5
    if Rand == 3:
        own["prop"] = 1
    elif Rand == 5:
        own["prop"] = 2

- How do I set visibility without an actuator, just in Python?

    if sensor:
        own.Visibility = TRUE

- How do I make a motion actuator work with Python only, without an enabled actuator?
From beginning to finish? That should be a longer script, as I remember... sadly I lost all my information because I had it all on the Megaupload site xD

- Is it possible to bind Python to a parented Object? I spawn cubes parented to spheres, and I want the Python on a Cube to recognize property changes on the spheres... it would help me a LOT in my game... something like:

    if parented.OBSphere["prop"] == 1:
        own["prop"] = 2

Thank you all for the answers... Please help me relearn Blender Python programming, because I want to create my game and I need a simpler, easier to understand and faster way to add new enemies and other objects. With Python, without actuators or sensors, it is always easy to find all the variables I need to change for a new enemy...

My game is like this: I spawn an enemy and hit him with bullets; the enemy has variable health, defence, evasion, damage, etc. (these vary with how long the game has already run, owned items, variable stats, etc.), so I now have about 30 sensors and actuators. Making a new enemy takes about an hour of adding that many sensors and actuators and checking which goes where, always making lots of mistakes, and progress is hard... I need to improve my Python scripting.
https://blenderartists.org/t/9-simple-questions-about-python/589231
Imports are case-sensitive, so you probably have to do ImportModule("Hello").

-Brian

On 2/8/08 10:16 AM, "Charles Chiu" <charles5557 at yahoo.com> wrote:

> Hi Everyone,
> I just found out about Python for .NET yesterday and would be interested
> in knowing more about it. I have tried to test it myself; however, I
> couldn't figure out what is wrong with the code I wrote. I am trying to
> invoke a simple method hello() from Hello.py. Here is the code I wrote:
>
> namespace PyWrapper
> {
>     class Program
>     {
>         static void Main(string[] args)
>         {
>             PythonEngine.Initialize();
>             PyObject sys = PythonEngine.ImportModule("sys");
>             string actual_path = sys.GetAttr("path").ToString();
>             Console.WriteLine(actual_path);
>             string path = "my path to the py file";
>
>             PyObject hello = PythonEngine.ImportModule("hello");
>             // PyObject hello remains null after the line above, which I
>             // suspect is the reason the next line crashes.
>             string fromPython = hello.InvokeMethod("hello", new PyTuple()).ToString();
>             PythonEngine.Shutdown();
>         }
>     }
> }
>
> Can someone give me some suggestions on what kind of problem this could be? Thanks
>
> _________________________________________________
> Python.NET mailing list - PythonDotNet at python.org

--------------------------
Brian Lloyd
brian.lloyd at revolutionhealth.com
https://mail.python.org/pipermail/pythondotnet/2008-February/000776.html
Doxylink is a Sphinx extension to link to external Doxygen API documentation. It allows you to specify C++ symbols and it will convert them into links to the HTML pages of their Doxygen documentation.

You use Doxylink like:

    :polyvox:`PolyVox::Volume`
    You use :qtogre:`QtOgre::Log` to log events for the user.
    :polyvox:`PolyVox::Array::operator[]`

where the polyvox and qtogre roles are defined by the doxylink configuration value.

Like any interpreted text role in Sphinx, if you want to display different text to what you searched for, you can include some angle brackets <...>. In this case, the text inside the angle brackets will be used to match up with Doxygen and the part in front will be displayed to the user:

    :polyvox:`Array <PolyVox::Array>`
    :polyvox:`tidyUpMemory <tidyUpMemory(int)>` will reduce memory usage.

Note: In C++ it is common for classes and functions to be templated and so to have angle brackets themselves. For example, the C++ class PolyVox::Array<0,ElementType> would be naively linked to with Doxylink as:

    :polyvox:`PolyVox::Array<0,ElementType>`

but that would result in Sphinx parsing it as a search for 0,ElementType with PolyVox::Array as the text displayed to the user. To avoid this misparsing you must escape the opening < by prepending it with a \:

    :polyvox:`PolyVox::Array\<0,ElementType>`

If you want to use templated symbols inside the angle brackets, like:

    :polyvox:`Array <PolyVox::Array<0,ElementType>>`

then that will work without having to escape anything.

For non-functions (i.e. namespaces, classes, enums, variables) you simply pass in the name of the symbol. If you pass in a partial symbol, e.g. `Volume` when you have a symbol in C++ called PolyVox::Utils::Volume, then it will still be matched, as long as there is no ambiguity (e.g. with another symbol called PolyVox::Old::Volume).
If there is ambiguity then simply enter the fully qualified name, like:

    :polyvox:`PolyVox::Utils::Volume` or :polyvox:`PolyVox::Utils::Volume <Volume>`

For functions there is more to be considered, due to C++'s ability to overload a function with multiple signatures. If you want to link to a function and either that function is not overloaded or you don't care which version of it you link to, you can simply give the name of the function with no parentheses:

    :polyvox:`PolyVox::Volume::getVoxelAt`

Depending on whether you have set the add_function_parentheses configuration value, Doxylink will automatically add parentheses so that it will be printed as PolyVox::Volume::getVoxelAt().

If you want to link to a specific version of the function, you must provide the correct signature. For a requested signature to match an entry in the tag file, it must exactly match a number of features. The argument list is not whitespace sensitive (any more than C++ is anyway), and the names of the arguments and their default values are ignored, so the following are all considered equivalent:

    :myapi:`foo( const QString & text, bool recalc, bool redraw = true )`
    :myapi:`foo(const QString &foo, bool recalc, bool redraw = true )`
    :myapi:`foo( const QString& text, bool recalc, bool redraw )`
    :myapi:`foo(const QString&,bool,bool)`

When making a match, Doxylink splits the requested string into the function symbol and the argument list. If it finds a match for the function symbol part but not for the argument list, it will return a link to any one of the versions of that function.

When generating your Doxygen documentation, you need to instruct Doxygen to create a 'tag' file. This is an XML file which contains the mapping between symbols and HTML files. To make Doxygen create this file, ensure that your Doxyfile contains a line like:

    GENERATE_TAGFILE = PolyVox.tag
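Doxylink itself is a Python extension, but the equivalence rules above amount to normalizing the argument list before comparison. The following toy sketch is purely illustrative (it is not Doxylink's actual algorithm, and its tokenizer is deliberately naive about template commas and multi-word types such as "unsigned int"): it strips default values and whitespace and drops trailing parameter names, so all four `foo` spellings above reduce to the same string.

```cpp
#include <cassert>
#include <cctype>
#include <sstream>
#include <string>
#include <vector>

// Toy normalization of one argument: drop default values, ignore
// whitespace, and drop a trailing parameter name. Illustrative only.
inline std::string normalizeArg(std::string arg) {
    // 1. Drop a default value: everything from the first '=' onwards.
    const std::size_t eq = arg.find('=');
    if (eq != std::string::npos)
        arg.erase(eq);

    // 2. Tokenize into identifiers and single punctuation characters.
    std::vector<std::string> tokens;
    for (std::size_t i = 0; i < arg.size();) {
        const unsigned char c = static_cast<unsigned char>(arg[i]);
        if (std::isspace(c)) { ++i; continue; }
        if (std::isalnum(c) || arg[i] == '_') {
            std::size_t j = i;
            while (j < arg.size() &&
                   (std::isalnum(static_cast<unsigned char>(arg[j])) || arg[j] == '_'))
                ++j;
            tokens.push_back(arg.substr(i, j - i));
            i = j;
        } else {
            tokens.push_back(std::string(1, arg[i]));
            ++i;
        }
    }

    // 3. A trailing identifier preceded by another identifier, '&' or '*'
    //    can only be the parameter *name*, so drop it.
    if (tokens.size() >= 2) {
        const std::string &last = tokens.back();
        const std::string &prev = tokens[tokens.size() - 2];
        const bool lastIsIdent =
            std::isalpha(static_cast<unsigned char>(last[0])) || last[0] == '_';
        const bool prevAllowsName =
            prev == "&" || prev == "*" ||
            std::isalnum(static_cast<unsigned char>(prev[0])) || prev[0] == '_';
        if (lastIsIdent && prevAllowsName && last != "const")
            tokens.pop_back();
    }

    // 4. Re-join without any whitespace.
    std::string out;
    for (const std::string &t : tokens)
        out += t;
    return out;
}

// Normalize a whole comma-separated argument list (naive top-level split).
inline std::string normalizeArgList(const std::string &list) {
    std::string out;
    std::stringstream ss(list);
    std::string arg;
    while (std::getline(ss, arg, ','))
        out += normalizeArg(arg) + ",";
    if (!out.empty())
        out.pop_back();
    return out;
}
```

With this sketch, the argument lists of the four `foo` spellings shown earlier all normalize to the same string, which is the essence of why they are considered equivalent.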
The extension is set up with a dictionary mapping each interpreted text role to a tuple of tag file and HTML prefix:

    doxylink = {
        'polyvox' : ('/home/matt/PolyVox.tag', '/home/matt/PolyVox/html/'),
        'qtogre' : ('/home/matt/QtOgre.tag', '/home/matt/QtOgre/html/'),
    }

add_function_parentheses is a boolean that decides whether parentheses are appended to function and method role text. The default is True.

If you find any errors, bugs, crashes etc. then please let me know. You can contact me at matt@milliams.com. If there is a crash, please include the backtrace and log returned by Sphinx. If you have a bug, particularly with Doxylink not being able to parse a function, please send the tag file so that I can reproduce and fix it.
https://pythonhosted.org/sphinxcontrib-doxylink/index.html
Migrate Website Joomla 2.5 to Joomla 3 and change video component

Budget: $250-750 USD

Hello, I need a freelancer for the following job:

- Migrate website from Joomla 2.5 to Joomla 3
- Template adaptation
- Change video component from "Videoflow" to "Yendif Video Share"
- 1,200 videos (categories and descriptions too) to adapt to "Yendif Video Share"

11 freelancers are bidding an average of $663 for this job

"Hello, I am ready to do that. [Samples of what I did before close to what you need] [My Data] Thank you"

"Hi, I can do your job but I need to talk before moving forward on this job. Let me know if it is possible for you to talk for a few minutes. Regards, Rina"

"Hello Sir, please open a PM to discuss it in detail. We need a few details before going further. Looking forward to your reply. Thanks & Regards."
https://www.dk.freelancer.com/projects/php-mysql/migrate-website-joomla-joomla-change/
Item Creator

The extension is under development and is not ready to use yet!

Description

Item Creator is a Unity extension that enables the user to easily create item data and save it to a JSON file (there are plans for an extension that will enable browsing already created items and deleting them, i.e. the basic CRUD operations).

Model code

This is how you express your model in code (the BaseModel class is delivered with the plugin, and your model class is required to inherit from it).

Inspector look

After you create a model, just drag and drop the script onto the proper object field, and the remaining fields will be generated automatically based on the model class's properties. After you hit the "Create item" button, Item Creator will generate a JSON representation of that model. For now it is not saved anywhere yet.

Development

I am using Unity 2017.1 along with Visual Studio 2017 Community Edition with experimental C# 6 support (I am using string interpolation inside).

Usage

First you need to create a script that includes a public class marked [System.Serializable] (this is necessary in order to be able to read the class properties). Then you just need to drag the script into the specified field, and voilà. In order to create an item and append it to your destination file, click the "Create item" button.
https://unitylist.com/p/th/Unity-item-creator
#include <hcs12dp256.h>

#define RDRF 0x20   // Receive Data Register Full bit
#define TDRE 0x80   // Transmit Data Register Empty bit

void init_SCI1(void)
{
    SCI1BDH = 0;
    SCI1BDL = 156;     // baud rate to 9600
    SCI1CR1 = 0x20;    // 8-N-1
    SCI1CR2 = 0x0C;
}

void init_PB(void)     // Setup LEDs for data verification
{
    DDRB = 0xFF;       // Output - Port B (specifies direction)
    PORTB = 0x01;      // Enable Port B0
    DDRH |= 0x00;      // Input - Port H - direction of switches
    DDRJ |= 0x02;      // Output - PJ1 - sets an output on pin 2
    PTJ &= 0xFD;       // LED enable - forces 0 to pin 2
}

main(void)
{
    init_PB();
    PTJ &= 0xFE;
    init_SCI1();
    while (1)
    {
        while ((RDRF & SCI1SR1) == 0) {}   // RDRF never goes high ???
        PORTB = SCI1SR1;                   // Trying to display the output on LEDs
    }
}

I'm using an MC9S12DP256 on a Dragon12 board. This is my code for the receiver; it's quick and dirty, just to receive some data. This is my first attempt at SCI communication. I have verified that the transmitter is sending 10 bits of data (start, data, stop). It is running through the RS485 in single-line mode. The RDRF bit never shows that the SCI1DRL register is full. Any insight would be super helpful.

Thanks,
Aaron W.
https://community.nxp.com/thread/29024
Weak QHash per value? What do you think about QHash<MyElement, QWeakPointer<MyElement>>?

Hi, I'm porting a Java program to C++/Qt and I have to migrate a cache designed as a WeakHashMap<MyElement, WeakReference<MyElement>>.

I think what I'd really like to achieve is a QHash<MyElement, QWeakPointer<MyElement>> that removes entries from the QHash when their value (the weak pointer) becomes null. This structure looks kind of strange to me and I'm not really sure it would work as expected...

Basically the app has several main objects (let's call them MainObject) that can manipulate some MyElements: create some, do some operations with them, eventually store one particular MyElement in a stack (the MainObjects each have their own stack) and delete the other ones. It is those saved values that I want to share between the MainObjects. When no MainObject uses a shared MyElement any more, we would like to remove it from the cache.

As this revolves around sharing an object, I'm thinking maybe there is a more Qt way of doing it using QSharedDataPointer, but I don't really see how... MyElement itself contains a QMap and has a provided hash function that I'm supposed to reuse, but collisions may happen, which is why they're using a copy of the object itself as the key. Would it be more efficient to generate a unique QString from a particular MyElement and use that as the key instead?

The number of elements in the cache may grow huge. In a mini test of one minute with only 2 MainObjects, it went up to 35k; some bigger tests can run for hours with hundreds of objects, so I think we might have quite a lot of entries. I'll try to get a figure; it must be several hundred thousand, might be more... Any recommendations for such a design? Cheers

- jsulm (Moderators):

@mbruel "that removes the entries from the QHash when their value (the weakptr) gets null" - QHash does not remove anything; it's you who needs to remove the elements which are not needed anymore.
"the MainObjects have each their own different stack" - what do you mean by that? Objects do not have own stacks. "create some, do some operations with them to eventually store in a stack" - be careful with pointers to objects allocated on the stack! @mbruel "that removes the entries from the QHash when their value (the weakptr) gets null" - QHash does not remove anything, its you who needs to remove elements which are not needed anymore. yes I'm planing to implement it myself. Either directly from the destructor of the MyElement or with a cleaning thread.... but I'd like some feedback on such structure, I'm not so fond of it and not sure it would be the way to go... "the MainObjects have each their own different stack" - what do you mean by that? Objects do not have own stacks. well by stack I was referring to std::stack or any kind of container. basically the MainObject will have 2 "stacks" of input/ouput MyElements. As I need to save the timestamp of those, it will probably be 2 MultiMaps. So I'm thinking to have something like this: class MainObjet { QMultiMap<double, QSharedPtr<MyElement >> inputs; QMultiMap<double, QSharedPtr<MyElement >> outputs; }; The QWeakPointer in the cache I wish to implement would refer to one (or more) QSharedPtr. Sometime I remove a QSharedPtr<MyElement > from the inputs (QMultiMap) and delete it. I wish that in that case, if it wasn't shared, so if the WeakPtr is null, I could remove it from the cache. What do you think? - kshegunov Qt Champions 2017 Does MyElementderive from QObjector is it some other kind of class? Can it be copied? Is it a polymorphic type? @kshegunov MyElement doesn't have to inherit from QObject. I guess I could make it if there is a good reason. Are you thinking about connecting to the destroyed signal on deletion? it's structure is simple, it would be mainly this: class MyElement{ public: enum Properties : ushort {p1,... 
p500}; private: double _mass; QMap<Properties, double> _composition; }; its hashing function would be a numerical computation using both the _mass and every entry of _composition (key and value). The goal is to limit the memory usage, the current simulation in Java can eat up to more than 8GB of ram. I suppose they used this approach to decrease this usage but I don't know how I could check at one point some MyElement are shared. The problem I see with using a QHash<MyElement, QWeakPointer<MyElement>> is that I will store 2 instances of the object, one as a key and the other one in the heap shared by the other objects.... @mbruel said in Weak QHash per value? what do you think about QHash<MyElement, QWeakPointer<MyElement>> ?: The goal is to limit the memory usage, the current simulation in Java can eat up to more than 8GB of ram. Java is a memory hog to begin with. MyElement doesn't have to inherit from QObject. I guess I could make it if there is a good reason. Are you thinking about connecting to the destroyed signal on deletion? Yes, that's what I was thinking about, but not needed in this case. Use implict sharing for the MyElementclass and pass it by value everywhere. ... and please, for the love of god, store QMap<Properties, double> _composition;as a vector or at the very least as a hash. Do you really want a tree rotation with each insert? PS. With that a simple memory requirement you don't even need any sharing, the containers are implicitly shared by default so you're okay just passing the object around by value. This post is deleted! @kshegunov said in Weak QHash per value? what do you think about QHash<MyElement, QWeakPointer<MyElement>> ?: Java is a memory hog to begin with. well I hope to have better performance in C++, I don't know yet how much better I will get... Use implict sharing for the MyElement class and pass it by value everywhere. so you mean make MyElement derive from QSharedData. What do I store in the Cache? 
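For reference, the flat-container alternative suggested here can be sketched with standard containers (Qt-free, and with the Properties enum reduced to a plain int; the names are illustrative, not from the thread). A sorted vector keeps ordered iteration while avoiding the per-node heap allocations and rebalancing of a tree-based map:

```cpp
#include <algorithm>
#include <cassert>
#include <utility>
#include <vector>

// (property, value) pairs kept in a sorted flat vector instead of a map.
// With only 3-10 entries the vector is contiguous and cache-friendly.
class Composition {
public:
    void set(int property, double value) {
        auto it = std::lower_bound(entries_.begin(), entries_.end(), property,
                                   [](const std::pair<int, double> &e, int p) {
                                       return e.first < p;
                                   });
        if (it != entries_.end() && it->first == property)
            it->second = value;                        // overwrite existing entry
        else
            entries_.insert(it, {property, value});    // insert keeps vector sorted
    }

    double get(int property, double fallback = 0.0) const {
        auto it = std::lower_bound(entries_.begin(), entries_.end(), property,
                                   [](const std::pair<int, double> &e, int p) {
                                       return e.first < p;
                                   });
        return (it != entries_.end() && it->first == property) ? it->second
                                                               : fallback;
    }

    // Sorted iteration comes for free.
    const std::vector<std::pair<int, double>> &entries() const { return entries_; }

private:
    std::vector<std::pair<int, double>> entries_;
};
```

At the 3-10 entry sizes mentioned in the thread, even a plain linear scan would perform as well as the binary search; the point is the contiguous storage.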
still a QHash<MyElement, QWeakPointer<MyElement>> ? this wouldn't be by value... The thing is that the MainObjects would not necessarily pass the MyElement to each other, they may create two similar instance and the goal of the Cache is to give them the same one. When they store in their "stack" (inputs or outputs multimap) a MyElement, they ask to the cache if it has one already, to get the shared handle. I don't know if I'm really clear, do you see what I mean and the potential issue with implicit sharing. it seems to me what I really want is to share a Pointer on a particular instance. Plus with implicit sharing, how could I take out an entry from the cache if no MainObjects are using it anymore? PS: and please, for the love of god, store QMap<Properties, double> _composition; as a vector or at the very least as a hash. Do you really want a tree rotation with each insert? well I don't have many properties in general in the map, maybe between 3 and 10 max. but I don't know which ones, there are around 500 possibilities. I won't do any insertion once it is set up. I guess I could use a Hash instead, is it really worth it? I may need to iterate it in a sorted manner. a QVector of QPair would be a bit an hassle... after some discussion with colleagues I think to reproduce the java WeakHashMap, I need to do something use a wrapper on a QWeakPointer<MyElement> as the key of my QHash that will have a hash function on the value of the object and the equal operator too on the value. 
Something like this: class MyElementWeakPtrWrapper{ QWeakPointer<MyElement> _weakPtr; public: bool operator==(const MyElementWeakPtrWrapper& other) const { // check if weakPtr are null (same state) bool isNull = _weakPtr.isNull(), otherIsNull = other._weakPtr.isNull(); if ( (isNull && !otherIsNull) || (!isNull && otherIsNull)) return false; if (isNull && otherIsNull) return true; // both are not null, lets get a sharedPtr QSharedPointer ptr = _weakPtr.toStrongRef(), otherPtr = other._weakPtr.toStrongRef(); isNull = ptr.isNull(), otherIsNull = otherPtr.isNull(); if ( (isNull && !otherIsNull) || (!isNull && otherIsNull)) return false; if (isNull && otherIsNull) return true; // Both sharedPtr are not null return *ptr == *otherPtr; } }; So my QHash would be: QHash<MyElementWeakPtrWrapper, QWeakPointer<MyElement> > WeakHashTable; I can then store the key of the Hash in the MyElement (value) inside the table so when it will be destroyed, it emit a signal with that key (MyElementWeakPtrWrapper) that my WeakHashTable will catch to remove the corresponding values that are null. What do you think of this approach? Under what circumstances would a MainObject share a MyElement instance? The way I understand, the intention is that if a MainObject needs a certain MyElement, it first checks in the cache whether it exist already, otherwise it creates it and adds it to the cache. If I got this right, my caching approach (off the top of my head) would be something like this: class Cache { public: // Return item from cache if it exists, otherwise create, add it to cache and return // MyElementKey is some way to uniquely identify the MyElement the caller wants std::shared_ptr<MyElement> obtainElement(const MyElementKey& key); // Call periodically to eliminate elements which have turned nullptr void collectGarbage(); private: QHash<MyElementKey, std::weak_ptr<MyElement>>; }; @Asperamanca You got the approach right. 
My MainObject creates and uses some instances of MyElement and when they need to store one that is interesting, we pass through the CACHE in order to centralize only one Instance between all the MainObjects that would need to store the same value. The problem is that the Key is the value of the MyElement itself. That is why I thinking to use a wrapper on QWeakPointer that would have its qHash based on the value of the weakpointer and idem for the equal operation. You see what I mean? Yeah I thought to have a periodical cleaning function, but it would force me to iterate through all the items (key, values) of the QHash. It sounds more efficient to connect to the destructor signal and get back the specific keys that would have some null values. What do you think of this approach? You're overengineering a solution to a problem that you don't have. Instead of doing a copy of a double, a pointer and an integer (what your class has), you are going to invent a million pointers to share data that's already either too small to be shared effectively or already shared. That's what I think. The point of rewriting code is not do duplicate it in another language, but rather redesign the parts that were badly designed due to the language specifics, legacy or other reasons. @kshegunov Well I guess this design in Java has been made to drop the memory use in a drastic manner. In a small test, the WeakHashMap grows its size to 35146 Items with 33117 different keys (so not so many collisions). Still in the same test, there are 158719 calls to the getter of the cache. Which means I'm saving 158719 copies. My object is small yeah: 16 bytes (8 for the double, 8 for the pointer of the QMap). So I'm saving 2480 kB, i.e: 2.42Mo we can agree that this is not so much... but I imagine that in the long run, the app can run several large simulations and keep all those results so if we could arrive to 1000 times more this number of objects which make us reach 2.4GB... 
Well, I guess I really need the proper worst-case conditions, but the data could grow over time, so maybe even a small memory saving is worth doing... Another thing: this is a scientific application written in Java, and Java only uses handles, i.e. copies of references, never the object itself, so it seems to me that this is roughly what Qt implicit sharing offers. They decided to introduce this WeakHashMap in order to reduce memory usage that had become too large. I think when you port an app and need better performance (in terms of execution speed and memory usage), it is perhaps better to take over those kinds of improvements in a first step and then check whether they were really useful. If not, I'll easily be able to remove it: I'd gain in speed, since there would no longer be any hashing to compute (which is quite complex in my case), but I'd see how much worse the memory usage becomes... I'm going to try implementing the solution I proposed above and will let you know the results. Some kind of self-centralizing cache that weakly shares pointers between objects doesn't sound like such a bad thing to have, especially when it is quite easy and fast to implement and test.

A lot of guesses and assumptions. Let me crunch some numbers for you then. A QWeakPointer is at least one pointer in size (in actuality it is two pointers), so you have 16 bytes there. A QSharedPointer is one pointer + one heap allocation for a small structure holding the external reference count, so you have yet another pointer and an atomic integer (at the very least), i.e. 12 more bytes. Say all shared pointers are copied as such and none are created from a raw pointer - that's 8 bytes per QSharedPointer object + 12 bytes for the shared structure. And this is overhead only! And that's not accounting for the cost of the heap allocation itself if you create it out of a raw pointer ...
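The byte counts quoted above are for the Qt classes; the std equivalents carry the same shape of overhead and can be inspected directly. The 16-byte `Payload` below mirrors the double-plus-map-pointer object under discussion; the exact handle sizes are ABI details, not guarantees.

```cpp
#include <cstdio>
#include <memory>

// Stand-in for the small object under discussion: a double plus the pointer
// a QMap internally holds, ~16 bytes.
struct Payload {
    double mass = 0.0;
    void  *mapData = nullptr;
};

// Each shared_ptr/weak_ptr handle is typically two raw pointers wide, and a
// make_shared allocation adds a control block (refcounts) on the heap.
void printOverhead() {
    std::printf("payload    : %zu bytes\n", sizeof(Payload));
    std::printf("shared_ptr : %zu bytes\n", sizeof(std::shared_ptr<Payload>));
    std::printf("weak_ptr   : %zu bytes\n", sizeof(std::weak_ptr<Payload>));
}
```

On a typical 64-bit toolchain the two handles are 16 bytes each - as large as the payload itself - which is kshegunov's point: the pointer bookkeeping can cost as much as simply copying the object by value.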
The cost of a lookup in the map is O(log N), where N is the number of elements - about 15 comparisons for 35k elements - while the amortized cost for a hash or a simple vector is just O(1). Bearing in mind that your object is 20 bytes in size and contiguous in memory, and that by passing it by value you can skip any costly calls to the heap manager, what exactly do you think you're saving? It's not memory for sure, and it isn't CPU time either.

@kshegunov Well, I'm going to think a bit more about how I could use implicit sharing. You're right, I didn't count the overhead of my pointers... :$

For "the fun of it", I've implemented it with a simple example using only a QString as data. Here is the code. You can see the behaviour I want from the main:

class MainObject
{
public:
    MainObject(SharedObject *sharedObj)
        : _sharedPtr(cache.getCentralizedValue(QSharedPointer<SharedObject>(sharedObj)))
    {}

private:
    QSharedPointer<SharedObject> _sharedPtr;
};

#include <QCoreApplication>
#include <QDebug>

int main(int argc, char *argv[])
{
    QCoreApplication a(argc, argv);
    MainObject *obj1    = new MainObject(new SharedObject("Object 1")),
               *obj1bis = new MainObject(new SharedObject("Object 1"));
    qDebug() << "[main] cache size after inserting two same instance: " << cache.size();
    delete obj1;
    qDebug() << "[main] cache size after deleting 1 instance: " << cache.size();
    delete obj1bis;
    qDebug() << "[main] cache size after deleting both instance: " << cache.size();
    return a.exec();
}

This outputs:

[CentralizingWeakCache::getCentralizedValue] adding new value in cache : "Object 1"
[CentralizingWeakCache::getCentralizedValue] getting centralized value for : "Object 1"
[SharedObject::~SharedObject] destroying "Object 1"
[main] cache size after inserting two same instance: 1
[main] cache size after deleting 1 instance: 1
[SharedObject::~SharedObject] destroying "Object 1"
[CentralizingWeakCache::handleSharedObjectDestruction] removing centralized value: 1
[main] cache size after deleting both instance: 0

Anyway, I'll give it more thought tomorrow.
I'm still confused about how to merge different instances of my objects and thus use implicit sharing. I guess I still need some kind of cache in the middle, so that each MainObject can get a copy of the instance the cache stores and delete the one it had created. Do you see what I mean? PS: indeed, if I could avoid hashing my SharedObject (previously MyElement), this would be a great improvement in CPU usage... but I really don't see yet how to achieve it... I guess I'll make you a drawing tomorrow to illustrate what is blocking me. Thanks for your replies anyway ;)

I think it may be worthwhile to step back a bit and take a look at the boundary conditions:
- How costly is it to create a MyElement?
- What data is used to create a MyElement, and where is it stored?
- How often are MyElement instances shared between MainObjects, and what are typical numbers for sharing (i.e. is the same object shared twice or 100 times)?
- Is there a meaningful way to combine multiple MyElement instances into a bigger group of objects, where caching would make more sense (i.e. will it often happen that certain groups of MyElement are used together)?
- Will properties of MyElement change after construction?
- Do MyElements with the exact same mass have the same properties?

@Asperamanca said in Weak QHash per value? what do you think about QHash<MyElement, QWeakPointer<MyElement>> ?:

I think it may be worthwhile to step back a bit and take a look at the boundary conditions:

Indeed, I think that's wise :)

How costly is it to create a MyElement?

Not much really: MyElement only stores a QMap of (ushort, double). As the key comes from an enum, I know there is a maximum number of entries, which is fewer than 500. In practice, I think most of them would have between 2 and 10 entries at most. (I'll have to verify that with the business...) The mass is in fact not really needed; it is just a shortcut representing the sum of the values of the map.
What data is used to create a MyElement, and where is it stored?

Typically, MyElements are created by cloning an existing one and feeding it to an external app (in Fortran) that does some heavy computations on it and in the end produces a new MyElement, different from the input one. All MyElements are stored on the heap.

How often are MyElement instances shared between MainObjects, and what are typical numbers for sharing (i.e. is the same object shared twice or 100 times)?

I don't have access to that information, and I'm not sure how I could hack the Java WeakHashMap to get it. The only thing I did was periodically dump the size of the cache and its number of distinct keys, and increment a counter each time an entry is found in the cache and thus shared. On a simple example, the figures are:

Max Nb MyElements in cache : 35146 (reused: 158719, nb keys: 33117)

So I just know that 158719 objects are shared among 35146, but I have no clue about the distribution.

Is there a meaningful way to combine multiple MyElement instances into a bigger group of objects, where caching would make more sense (i.e. will it often happen that certain groups of MyElement are used together)?

This is also implemented in the Java code, but it wasn't used in production in the end. What they did was apply exactly the same principle of cache sharing to another object that encapsulates several MyElements. From what I saw, I think it is really worth keeping the caching on MyElement, and then maybe adding the other caching on the bigger structure on top. The two don't seem incompatible to me.

Will properties of MyElement change after construction?

No, they are final when stored in the cache to be shared. If a MyElement is in the cache, it means that at least one MainObject is referencing it.
If at some point the last MainObject stops using it, then the MyElement is automatically removed from the cache (cleaned up). The Java WeakHashMap does this automatically, and I've implemented it by making MyElement a QObject and connecting its destroyed signal to the map, passing along the key.

Do MyElements with the exact same mass have the same properties?

As I said, the mass is not really relevant; it is the sum of the properties. It is used for quick access, and also as a first, fast way to compare two MyElements.

Well, I'm not using a hash as a key. If you look at the code (it's here), the key is a QSharedPointer<WeakCacheKey>, WeakCacheKey being a wrapper around a QWeakPointer<MyElement>. The hashing function of a WeakCacheKey stores the hashCode just so an entry can be removed from the CentralizingWeakCache when a MyElement is destroyed - at that point we can no longer dereference the MyElement to compute its hash. For a normal insertion into the CentralizingWeakCache, I return:

qHash(*(sharedPtr.data()));

which ends up in the hashing function of MyElement (called SharedObject in my implementation):

inline uint qHash(const SharedObject &sharedObject)
{
    return qHash(sharedObject._value);
}

If I have some collisions on the hash key, then operator== is used on my keys, which I've reimplemented to dereference the weak pointers if possible:

inline bool operator==(const QSharedPointer<WeakCacheKey> &left, const QSharedPointer<WeakCacheKey> &right)
{
    // Check whether the weak pointers are null (same state)
    bool isNull = left->_weakPtr.isNull(), otherIsNull = right->_weakPtr.isNull();
    if ((isNull && !otherIsNull) || (!isNull && otherIsNull))
        return false;
    if (isNull && otherIsNull)
        return true;
    // Both weak pointers are non-null, promote to shared pointers
    QSharedPointer<SharedObject> ptr = left->_weakPtr.toStrongRef(),
                                 otherPtr = right->_weakPtr.toStrongRef();
    isNull = ptr.isNull(), otherIsNull = otherPtr.isNull();
    if ((isNull && !otherIsNull) || (!isNull && otherIsNull))
        return false;
    if (isNull && otherIsNull)
        return true;
    // Both shared pointers are non-null, compare the values of the objects
    return *ptr == *otherPtr;
}

So I may have made a mistake somewhere, but from what I've debugged it looks like it does what I want: the key of my cache is a QWeakPointer<MyElement>, but the hashing and comparison functions work on the value itself (if it still exists).

I'm not so familiar with heaps. I don't really need my cache to be sorted, I just need quick access to the elements. I can't rely on the mass, but rather on the map of properties. So I imagine I would also need some way to hash the map... and if I have to do that operation anyway, it sounds more natural to use a QHash, no? Thanks for your help :)

I got a "real" use case. Just by running it to 16% of the simulation, here are the figures I got:

[MB_TRACE] Max Nb elem in cache : 593051 (reused: 5752723, nb keys: 586598)

So we're probably talking about millions of entries in the QHash. The hashing function is quite good: the collision rate is only 1.09%. The sharing is quite significant: 5.75 million copies are avoided. The thing I'm missing is information on the number of MyElements that are automatically purged from the hash because they're no longer used by any MainObject. PS: I'll probably get figures for the whole simulation tomorrow or the day after...

Thanks for the clarifications. In that case, your approach looks feasible. A few minor points:
- Why use QSharedPointer<WeakCacheKey> instead of WeakCacheKey directly?
- If you have class members that are only changed at construction time, make them const. If you build your algorithm on the decision that these cannot change, then you should make that intention explicit (possible compiler optimizations come as a bonus).

@Asperamanca said in Weak QHash per value? what do you think about QHash<MyElement, QWeakPointer<MyElement>> ?:

Why use QSharedPointer<WeakCacheKey> instead of WeakCacheKey directly?
Well, this is just to avoid a circular header dependency: I don't include the WeakCacheKey header in SharedObject.h, just a forward declaration. So SharedObject holds a pointer to its key in the map. I made it a shared pointer so I don't have to deal with the deletion. (I was also using it to test whether the key was nullptr, and thus know whether the SharedObject is stored in the cache.) Can I get a pointer to a key of a QHash? I don't think so, right? That is why I made the key of the QHash a pointer directly.

You're totally right. I took inspiration from some code I've read before... (the article is in French but the code isn't; you can find here an implementation of a WeakHashTable). I've updated the code on github: there is indeed no need to have either SharedObject or CentralizingWeakCache inherit from QObject and rely on signal/slot communication. When the cache puts a SharedObject into its QHash, it can just give the key and a handle on itself to the SharedObject. The destructor of SharedObject then looks like this:

SharedObject::~SharedObject()
{
    qDebug() << "[SharedObject::~SharedObject] destroying " << _value;
    if (_cache)
        _cache->remove(_key);
}

If you have class members that are only changed at construction time, make them const. If you build your algorithm on the decision that these cannot change, then you should make that intention explicit (possible compiler optimizations come as a bonus)

Well, that is true, but while I said the property map is constant, that is only the case for the interesting instances we share. There are some temporary MyElements used in the app that can evolve before we decide to make them effectively constant. You see what I mean? So I'm not sure it would be worth creating two distinct classes: a const MyElement and a TmpMyElement without the constness. I would need a copy to go from the TmpMyElement to the const MyElement when I want to save it, no?

@mbruel said in Weak QHash per value?
what do you think about QHash<MyElement, QWeakPointer<MyElement>> ?:

Well this is just to avoid circular header dependency and not include WeakCacheKey header in SharedObject.h but just a forward declaration.

If you give WeakCacheKey a cpp file, including a non-default destructor, you should be able to forward declare SharedObject, because after all the WeakCacheKey only holds a pointer to SharedObject. Then you are free to include WeakCacheKey.h in SharedObject.h.

There are some temporary MyElement that are used in the app that can evolve before deciding to make it kind of constant. You see what I mean?

If the intention is that you normally do not want to change a SharedObject, but in some cases you do, I would suggest this approach:

class SharedObject
{
    //...declarations...
public:
    WeakCacheKey getCacheKey() const;
    QString getValue() const;

private:
    void setValue(const QString &arg);

    friend class SharedObjectWriter;
};

class SharedObjectWriter
{
public:
    SharedObjectWriter() = delete;
    SharedObjectWriter(QSharedPointer<SharedObject> sharedObject);

    void setValue(const QString &arg);

private:
    QSharedPointer<SharedObject> _sharedObject;
};

That way, you make it pretty clear that writing to a SharedObject is supposed to be the exception, not the rule. You could also add checks that only SharedObjects without a valid cache key can be written to, etc.

@Asperamanca About using a QSharedPointer<WeakCacheKey> as the key of my QHash: in fact there is a better reason than just the circular dependency in the headers. If I used a plain WeakCacheKey, then the qHash function that would be called is:

uint qHash(const WeakCacheKey &cacheKey)

so I'm not able to modify cacheKey... yet for efficiency I want to store the result of the qHash inside.
If I use a pointer or shared pointer, I end up in this one, which I've just improved :)

inline uint qHash(WeakCacheKey *cacheKey)
{
    if (!cacheKey->_hashComputed && !cacheKey->_weakPtr.isNull())
    {
        QSharedPointer<SharedObject> sharedPtr = cacheKey->_weakPtr.toStrongRef();
        if (!sharedPtr.isNull())
            cacheKey->setComputedHashCode(qHash(*(sharedPtr.data()))); // save the hashCode
    }
    return cacheKey->_hashCode;
}

I didn't add a cpp file to WeakCacheKey; instead I took the implementation of

bool operator==(const QSharedPointer<WeakCacheKey> &left, const QSharedPointer<WeakCacheKey> &right)

out and moved it into the CentralizingWeakCache. This way I can get away with only a forward declaration of SharedObject in WeakCacheKey. (I'm not able to push the changes to github from work, but I'll do it tonight from home. The main change is the addition of the WeakCacheKey::_hashComputed boolean, initialized to false, which WeakCacheKey::setComputedHashCode sets to true at the same time it saves the hashCode.)

I see your point with the SharedObjectWriter: you don't make the values const, you just don't expose them directly and return a copy... I thought you were suggesting declaring a const QString in SharedObject (or a const QMap<> in MyElement). I'll think about it, nice suggestion.

@kshegunov Well, I've given more thought to using only implicit sharing. I would make MyElement derive from QSharedData and have the MainObjects use QSharedDataPointer<MyElement>. The problem is that I will need to "merge" some instances, so for this I need a QHash in the middle, like in the solution I've implemented using weak pointers. I guess both the key and the value would be a QSharedDataPointer to the value I want to make unique. So yes, I could fill the cache as I'm doing with weak pointers, and also return the already-shared instance if there is one. The issue is that my cache would hold a QSharedDataPointer and not a weak pointer.
So I would never know when no MainObject is using a MyElement anymore, and thus when I could delete entries from the cache. Do you see a way to do it? Something else about the overhead of shared pointers: I suppose there is exactly the same thing internally within QSharedData, no? There must be a uint for the reference count, and some synchronization to make it thread safe...

I stand by my claim that you're doing nothing for a high price. Especially since I took a peek at the code you put up and saw you're using a mutex to sync access to the global object. I have only questions for you now.
- Do you know how heavy a new is compared to a stack allocation?
- Do you realize how many bytes of overhead you're creating just to keep pointers around?
- Do you get that instead of doing a fully reentrant, lock-free copy of a small object, you're locking a global structure that keeps your threads blocked most of the time under high contention?

Just to wrap it up in a nice small bite: you could do a million copies in the time each of the threads gets its share of that global mutex, and that doesn't even account for the pointer overhead, the refcounting and all that ... I am going to repeat myself now: You're overengineering a solution to a problem that you don't have. In the end it is your code and it's your decision how you implement it; however, what you asked me I have already answered - you don't need any of that stuff above, C++ ain't Java!

@kshegunov How do you create elements dynamically on the stack?... If I allocate on the heap, it's because I don't have a choice: I don't know in advance whether I will create objects, nor how many... Something I don't understand: if I go with implicit sharing, I'll need to use QSharedDataPointer to share my QSharedData, no? I think I simply can't factorize items that are created by different MainObjects. It's not only the 20 bytes of the objects; it is also the data they point to: the whole content of their QMap.
I'm not in the easy situation where MainObjects are passing a shared object to each other; they create it on their own (in their corner, somewhere in the heap), and if they create the same MyElement, it is a shame not to factorize it. You see what I mean? By default I may never be able to use implicit sharing if I don't have some kind of manager or cache in the middle to merge two distinct MyElements into one that the MainObjects will be able to share... I think you may not get the use case... My MainObjects dynamically create some smaller objects, MyElements; they don't pass them to each other, they do some computation on them and store them in a list. The thing is that those MainObjects will create many times the same instance of MyElement that another MainObject has already created. But they have no way of knowing that, so they can't just share it directly. I don't see how you could achieve this without an intermediary in the middle. Are you basically telling me that your way would be to never share any small objects, even if you have a billion of them?

I'm not attempting to be insulting (just in case): I don't think you understood what @kshegunov has been saying - regarding performance or anything, actually. Instead of asking him/her/us this, why not just see how capable your hardware / design is, and where it falls down first? Can you not do something like break your thoughts out? This feels so crazily complicated just for a couple of object types to be stored in memory?

std::vector<MainObject> mainObjects;
std::vector<MyElement> elements;
std::map<MainObject*, MyElement*> objectElementMap;

These are the std:: lib containers - the problem of storing objects in memory is totally solved, you just have to decide what is appropriate. I'll reiterate @kshegunov's statement: You're overengineering a solution to a problem that you don't have. You are pre-optimizing something you don't know. I'd argue you need to simplify your design first, but who knows - maybe I just don't get it.
I know I don't understand what you want / need / would be best. Stop trying to think how to squeeze out every last drop - you are in C++ - you will get your time to optimize, but just choose a damn std::vector and be done with it =) When it breaks, get another, and maybe even a thread. You're missing the point of just how fast C++ is - and also how little memory it uses (as opposed to frameworks and their ungodly GC... et al.) I'm not sure there's a clear answer, but I'd recommend: stop trying to prematurely optimize (it likely won't be a problem, from what I can best tell), take a look at your design, simplify.

@6thC Well, I've spent some time (one month) understanding and analysing a Java application that I'm supposed to port to C++. It is a heavily memory/CPU-consuming app that does many physics calculations. The Java app has been used for, let's say, the past 15 years and has been patched from time to time to improve performance, either to limit memory usage or to parallelize calculations to gain speed. I've been told that the WeakHashMap drastically decreased the memory usage, and from what I saw - avoiding millions of copies - I think it makes sense, whatever the language used. I have a limited amount of time for a quite complex project (if I fail or am late there are daily fees...). I believe it is at the design phase, at the beginning, that you should consider most of the "actors / features" you will or may use, to plan a way to combine them properly. Centralizing most of the elementary data structures seems to me something not obvious and better planned in advance. To limit the risk - I must use no more memory than the current app and have at least equivalent speed - I prefer to start straight away with the optimizations they used in Java; I'll be free later, if I have some time left before the deadline, to check whether they are really needed and whether saving a little memory impacts performance a lot.
And come on, Java is not as efficient as C++, but nowadays it has been quite optimized; I don't believe that just because I'm using C++ I can throw out optimization techniques that have been thought through, tested and validated in Java.

I have a problem with this statement: "told that the WeakHashMap has drastically decreased the memory usage" - mainly because this may be a perfectly valid statement - for Java. You aren't in Java anymore. You are closer to the hardware; things work fast. This is not me being a dick about language hating - it's just a fact. Yes, start with good design - that's why I suggested something completely different, because I'm not confident I can help you if I cannot understand the basic components - all I see is two related objects and no basic types etc.; I cannot see the purpose or anything beyond the one scoped view / context you've presented. I have no idea of the big picture, what or why you'd store things like that - I am entirely prepared for a valid reason, though I cannot see it myself. I'm not being condescending - I have tried reading it so many times, but I have my own work to do too. So, being limited to the smaller context and using your own object context/concepts, I tried to show you a way of sharing objects/data where you could:
- use vectors, which use very minimal amounts of memory and store objects in contiguous memory - why is this important? To utilize the CPU caches and not access main memory, which is much slower
- maintain object relationships
- be fast
- not have to think too hard about algorithms or access - the plan is to just fly through it (very, very quickly)

The map I suggested was specifically: std::map<MainObject*, MyElement*> objectElementMap, keyed on MainObject - so we can have multiple MyElements, but we can call objectElementMap.find(MainObject* - i.e. this); to get the "shared" object* - then do whatever resource guarding is needed to not totally seg fault or corrupt your object state.
It's not a map of object instances, just a ptr-to-ptr map, and seriously, if you are processing everything anyhow, you could probably just fly through a vector<pair<ptr,ptr>> anyway. You have my sympathies for being pushed into a shit spot by the sound of things, but - rush jobs get rushed. It doesn't need to be shoddy, and I was just trying to break out your thinking. We are 100% not here telling you to throw out design or optimization thoughts - everything we have been telling you is with lightweight efficiency and fast access in mind... Prove you have a performance problem first. If you are stuck with your design, what are we even talking about - get it down and running already. Start collecting execution times and prove that performance problems are even a concern. I do wish to help; I'm not sure I am at this point. I'm not here to condescend, but I do want to help. I abuse the shit out of my CPU and, well, who knows - you can always make a CPU-killer problem - yours doesn't sound like that. I wouldn't worry whether C++ can keep up with Java, though. It will beat the pants off it, honestly. Java might be good for RAD (I think I just caught something saying that) - but everything else I hate: the memory footprint, the GC... Sorry, GC was just a horrible and shit idea. I much prefer to pay for what you make and clean up what you make. That's why we use Qt. C++ && Qt == fast RAD GUI. Anyhow, not sure I'm helping anymore, good luck. Again, I wouldn't worry about performance at all prematurely. Prove it. Once you have a problem you will know, and if you can't see and feel it, you can at least measure it. How can you measure a potential performance issue without a running process? It's all theory at that stage, and it sounds like you've been given a directive: "get it done".

@6thC The way I understood it, the main concern is memory size, not CPU usage. I don't see how your approach covers that.
How can you safely re-use objects in a vector, and know when these objects can be safely destroyed, unless you add some kind of key-access approach and reference counting?

@6thC said in Weak QHash per value? what do you think about QHash<MyElement, QWeakPointer<MyElement>> ?:

use vectors - which: use very minimal memory amounts

That dies as soon as you use a Qt container within MyElement, such as QMap/QHash. So you will have the actual content of the objects all over the heap anyway. If the primary goal were performance, I'd agree with you - use contiguous memory and utilize the memory caches. But that doesn't seem to be the case here.

@6thC What does RAD mean? What I'm trying to achieve here is to reduce the memory the app will use, so let me give you the big picture again. I have a vector of 150 MainObjects. During a simulation, which I have to manage sequentially, those MainObjects will call an external Fortran application that does some heavy computations (between 1 and 6 seconds each) and then sends back a MyElement that they (the MainObjects) will store. So the MyElements are different instances. They can't be implicitly shared, as they're not passed from one MainObject to another. Now here are some figures from a real use case I got yesterday:

[MB_TRACE] Nb elem in cache : 5059084 (reused: 39457190, nb keys: 5047742)

So: there are 5059084 + 39457190 MyElements created. 45 million! If I do nothing, I have 45 million distinct MyElements created in distinct parts of the heap that do not share any of their content. MyElement is a double plus a map, so 20 bytes. The map entries are (short, double); say I have 10 entries, that means 100 bytes. So in total I get 120 bytes. 120 bytes * 45 million ≈ 5 GB. That is just too much! What the log tells me is that there are in fact only 5 million distinct MyElements (by value) among those 45 million. So I'm wasting 40 million times the 100 bytes of the content of the MyElement maps. Do you see the point now?
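The arithmetic behind these figures can be checked mechanically. The per-element sizes are the thread's own estimates (20 bytes of object plus roughly 100 bytes of map payload), so the results are order-of-magnitude only:

```cpp
#include <cstdint>

// Figures quoted from the real use case above, with the thread's own
// per-element size estimates.
struct Figures {
    std::uint64_t distinct = 5059084;   // elements actually kept in the cache
    std::uint64_t reused   = 39457190;  // copies avoided thanks to sharing
    std::uint64_t objBytes = 20;        // double + QMap pointer, estimated
    std::uint64_t mapBytes = 100;       // ~10 entries of (short, double)
};

// Total footprint if every created element were kept as its own copy.
std::uint64_t totalBytesWithoutSharing(const Figures &f) {
    return (f.distinct + f.reused) * (f.objBytes + f.mapBytes);
}

// Savings from sharing: every reuse skips one map payload.
std::uint64_t bytesSavedBySharing(const Figures &f) {
    return f.reused * f.mapBytes;
}
```

With these inputs the unshared total comes out at about 5.3 GB, and the deduplicated map payloads at about 3.9 GB - consistent with the roughly 5 GB and "4 GB lost" figures in the posts.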
Whether I'm in Java or in C++, it doesn't change anything about that fact: I don't want 40 million times 100 bytes in my heap that could be merged (that's 4 GB lost). So I need an intermediate object playing the role of a cache. When a MainObject gets back a brand-new MyElement (from the external app), it must ask the cache whether someone has already created this MyElement, and if so the cache sends back a pointer to the SHARED instance of the MyElement. The MainObject then destroys the brand-new MyElement it got from the external app and uses the one from the cache. I can't describe it any better. For me the need is obvious... because potentially I want to run several simulations in my app and keep their results in memory. I just can't do that 5 times over if each run eats something like 6 GB. We have access to big workstations, but what's the point of wasting memory... you would rather use a server that could store up to 20 simulation results...

Anyway, I'm going to implement a centralized cache to store the MyElements. The goal is just to share the MyElements, so the basic structures that come to mind would be a set or a vector. That is just not efficient, as I'll have 5 million entries and will need to find particular instances all the time (comparing by value). A map also wouldn't be efficient. For me there is no doubt that the best way to go is a hash, especially when I know that I have a good hashing function that gives me less than 0.5% collisions (100 − 5047742 × 100 / 5059084 = 0.22%). Now the question is (and that is why I started this post) which technology to use within this hash: implicitly shared data (QSharedDataPointer) or standard smart pointers (QWeakPointer and QSharedPointer). Well, the advantage I see with the standard smart pointers is the QWeakPointer: if my cache stores only QWeakPointers, it doesn't "own" the data it is storing, which means that when nobody is using the data the weak pointer becomes null and it is thus possible to remove that entry.
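A minimal sketch of the value-keyed interning described here, using the std containers so it stays self-contained (the thread's real code uses QHash/QWeakPointer). One deliberate deviation, to keep the example simple: the hash key is a copy of the element's value rather than a weak pointer to it, which sidesteps the "can't hash a dead key" problem at the cost of one extra copy of the map per distinct element. All names are illustrative.

```cpp
#include <cstdint>
#include <functional>
#include <map>
#include <memory>
#include <unordered_map>

// Stand-in for MyElement: the (enum -> double) property map discussed above.
struct Element {
    std::map<std::uint16_t, double> props;
    bool operator==(const Element &other) const { return props == other.props; }
};

// Order-dependent hash combine; fine here because std::map (like QMap)
// iterates in key order, so equal maps always hash equal.
struct ElementHash {
    std::size_t operator()(const Element &e) const {
        std::size_t seed = e.props.size();
        for (const auto &[k, v] : e.props) {
            seed ^= std::hash<std::uint16_t>{}(k) + 0x9e3779b9 + (seed << 6) + (seed >> 2);
            seed ^= std::hash<double>{}(v) + 0x9e3779b9 + (seed << 6) + (seed >> 2);
        }
        return seed;
    }
};

class InterningCache {
public:
    // Hand in a freshly built element, get back the canonical shared instance;
    // duplicates are dropped and the existing instance reused.
    std::shared_ptr<Element> intern(Element &&fresh) {
        auto it = _pool.find(fresh);
        if (it != _pool.end())
            if (auto alive = it->second.lock())
                return alive;
        auto canonical = std::make_shared<Element>(std::move(fresh));
        _pool[*canonical] = canonical;  // key is a value copy, stays valid
        return canonical;
    }

private:
    std::unordered_map<Element, std::weak_ptr<Element>, ElementHash> _pool;
};
```

A MainObject would build its Element from the Fortran output, call `intern(std::move(element))`, and keep only the returned shared pointer - which is exactly the "ask the cache first, then drop the duplicate" flow described above.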
That is what I am achieving with the code I've developed. You might find it over-engineered, but that is what it does and what I'm after. You didn't explain to me (and this is what I would like you to do, if possible) how I could do the same using implicit sharing within the cache. I'm not familiar with implicit sharing; I'm a passive user of it, just using the Qt objects that already use implicit sharing behind the scenes (QString, QMap...). If I have understood correctly, if I wanted to use implicit sharing in my case, I would make MyElement derive from QSharedData and so my cache would become a QHash<QSharedDataPointer<MyElement>, QSharedDataPointer<MyElement>>. If I'm right, in that case how would I know that an entry in my QHash is not used anymore - in other words, that the reference count of the MyElement has dropped to 2? (I say 2 because of 1 for the key plus 1 for the value QSharedDataPointer.) I don't see the solution; offhand I would say it isn't possible... Let me know if you see a way...

@Asperamanca Thanks for the understanding ;) haha. Indeed, the need here is only to decrease memory usage, preferably in the most optimized way in terms of access time. As for CPU usage, well, it doesn't depend on me; the consuming part is the external Fortran app. The only thing I can do to boost performance is to parallelize the maximum number of instances of those calculations. Basically I'm achieving this by using lazy evaluation for the calculations of the MyElements. I opened a post a few weeks ago on this topic (this one). Some might also find it over-engineered, but well, it is what it is and does the job perfectly. I mean, I hope it will; my app is not implemented yet... (I have at least a few months of work before being able to test.)

Short answer on implicit sharing: it's not for your use case, because the use count would never automatically fall back to zero, because of the cache.
Basically, implicit sharing uses the same principles as shared pointers under the hood. The advantage is that you can completely hide it from users of your class; the users can simply value-copy your classes for cheap. The class will automatically create deep copies whenever a non-const member function accessing the data is called. It's wonderful when you follow the standard implementation pattern: a private data member derived from QSharedData, a QSharedDataPointer as the only data member of the main class, and consistent use of 'const' for methods. Lacking a weak-pointer equivalent of QSharedDataPointer, and the ability to manipulate the reference count, I would keep my hands off it in your case. Use QSharedPointer or std::shared_ptr instead. If you get tired of writing std::shared_ptr<MyElement>, you can make a typedef.

@Asperamanca OK, thanks for the answer; that is what I thought about implicit sharing, but I felt maybe there could be a way... I'm using the C++11 "using" keyword nowadays instead of typedef; I find it more convenient for functors and more readable in general. I'm also using QSharedPointer and QWeakPointer instead of the std ones directly. I guess/hope they use them under the hood. I just prefer to stay fully Qt in my code if possible. I think I'm only using std for the math library and std::sort.

@mbruel said in Weak QHash per value? what do you think about QHash<MyElement, QWeakPointer<MyElement>> ?:

The only thing I can do to boost performance is to parallelize as many instances of those calculations as possible.

About this: your current approach of using a mutex to protect your cache hash could become a bottleneck. I would consider splitting operations on the cache cleanly into read and write operations, and use QReadLocker and QWriteLocker, since based on your numbers, 9 out of 10 times an object that you need should already exist in the cache (which would make the access a read-only thing).
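The read/write-locked lookup suggested here can be sketched in standard C++, with std::shared_mutex (C++17) playing the role of QReadWriteLock. The cache in this hypothetical sketch just maps strings to shared strings, to keep the locking pattern in focus:

```cpp
#include <cassert>
#include <memory>
#include <mutex>
#include <shared_mutex>
#include <string>
#include <unordered_map>

// Thread-safe getOrCreate: a shared (read) lock for the common
// "already cached" case, an exclusive (write) lock only on insert.
class SharedCache {
public:
    std::shared_ptr<std::string> getOrCreate(const std::string& key) {
        {
            std::shared_lock<std::shared_mutex> readLock(mutex_);
            auto it = cache_.find(key);
            if (it != cache_.end())
                if (auto existing = it->second.lock())
                    return existing;  // fast path: read lock only
        }
        std::unique_lock<std::shared_mutex> writeLock(mutex_);
        // Re-check under the write lock: another thread may have inserted
        // the same key between releasing the read lock and acquiring this one.
        auto it = cache_.find(key);
        if (it != cache_.end())
            if (auto existing = it->second.lock())
                return existing;
        auto fresh = std::make_shared<std::string>(key);
        cache_[key] = fresh;  // weak reference only, as before
        return fresh;
    }

private:
    mutable std::shared_mutex mutex_;
    std::unordered_map<std::string, std::weak_ptr<std::string>> cache_;
};
```

Note the re-check after taking the write lock: the read lock cannot be atomically upgraded (the same is true of QReadWriteLock), so another thread may have raced in between the two locks.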
In addition, you could then further optimize by balancing the number of write operations vs. the length of a single write operation. A way to do this would be to delete cache entries not directly, but via a delete queue: you put entries to be deleted into a queue, and process the queue at defined intervals. You can then fine-tune how often the queue is processed.

@Asperamanca Wow, cool, I wasn't aware that QReadWriteLock existed! It will definitely improve performance, as there should be more reading operations than writing ones (a factor of 10, maybe), and yeah, reading shouldn't be blocking as long as nobody is trying to write. Thanks for that. The idea of queueing the deletions is also great! I can just do it every 1000 entries or more (as you said, I can tune that later). Cheers!

- kshegunov Qt Champions 2017

@mbruel said in Weak QHash per value? what do you think about QHash<MyElement, QWeakPointer<MyElement>> ?:

Do you see the point now?

I don't! QMap and all the Qt containers are implicitly shared already. Non-writing calls to the map will not cause a memory copy, and all the objects are going to point to the same structure on the heap. Your calculation of gigabytes of data is just bogus.

Whether I'm in Java or in C++, it doesn't change the fact that I don't want to have 40 million times 100 bytes in my heap that could be merged (that's 4 GB lost).

Sure, you don't want to; however, it would be useful to get familiar with the specifics of the language you're now using and what is happening behind the scenes before you decide to micro-optimize something that's not even a problem. On that note, did you create a test case that demonstrates how fast, and how much less memory, your weak-referenced mutex-protected cache is compared to what I suggested (directly passing those objects by value)? And I mean a test case, not some calculations that we run on our fingers...

So I need an intermediate object that plays the role of a cache.
No, you don't need that, and I fail to see why we are continuing this argument. Run a test, and then show me your aces! Show me how fast that cache is and how much memory it spares! I mean, I've been coding C++ for more than 10 years; convince me that I should throw that experience away and trust you instead.

Now the question is (and that is why I started this post): which technology to use within this hash... implicitly shared data (QSharedDataPointer) or standard smart pointers (QWeakPointer and QSharedPointer)?

Moot due to the above.

If I have understood correctly, if I wanted to use implicit sharing in my case, I would make MyElement derive from QSharedData and so my cache would become a

You have understood correctly. You still don't need it, but you can do it like this.

@kshegunov said in Weak QHash per value? what do you think about QHash<MyElement, QWeakPointer<MyElement>> ?:

I don't! QMap and all the Qt containers are implicitly shared already. Non-writing calls to the map will not cause a memory copy, and all the objects are going to point to the same structure on the heap. Your calculation of gigabytes of data is just bogus.

Of course it would be better if the program didn't produce multiple copies of identical objects in the first place, but these seem to be the boundary conditions.

@kshegunov Well, I'm getting tired of us not understanding each other. As @Asperamanca said... So how would you share them implicitly? I guess it is just impossible... Or please tell me your solution. I don't see what is wrong with my GB calculation... If you understand what I'm saying above about my use case, I don't think there is any bug... This is what I get by creating the objects if I'm not able to merge them... and merging objects is not something that implicit sharing seems to offer... I don't have time to create a test for you; I will test my app, but there are still at least 2 months of work before I'll be in a state to do it... For me it is obvious in terms of memory.
If I don't use a cache, it's equivalent to having a loop that creates millions of NEW objects (in the heap). What is the size in memory? Millions multiplied by the size of the object... I can't have those objects on the stack; they are created dynamically... I can't pass them by value... What I can pass by value is just a QSharedDataPointer or a QSharedPointer (or the raw pointer, but that is too risky in my use case...). What is the point of passing MyElement by value if MyElement has nothing else than a QSharedDataPointer pointing to a QSharedData?...

@mbruel said in Weak QHash per value? what do you think about QHash<MyElement, QWeakPointer<MyElement>> ?:

well I'm getting tired of not understanding each other.

To be honest, me too, a bit.

As @Asperamanca said...

Fine, I misunderstood, but do you think that a map of weak pointers to heap-allocated objects that are created as shared pointers is better than just QSet<Element>?

So how would you share them implicitly?

You wrote it in your last post; I confirmed this is the way to do it. Derive Element from QSharedData and pass QSharedDataPointer<Element> around; that will take care of the sharing.

I don't have time to create a test for you, I will test my app but there are still at least 2 months of work before I'll be in a state to do it...

Well, I did a small test case for you, just to illustrate how contention over a global object eats up the CPU time.
Here it goes:

main.cpp

int main(int argc, char *argv[])
{
    QApplication a(argc, argv);
    QTextStream out(stdout);

    QElapsedTimer cacheTimer;
    static const int count = 4;
    CacheThread cacheThreads[count];

    // Run the threads with caching and benchmark the time
    cacheTimer.start();
    for (int i = 0; i < count; ++i)
        cacheThreads[i].start();
    // Wait to finish
    for (int i = 0; i < count; ++i)
        cacheThreads[i].wait();
    out << "Threads with caching (" << CacheThread::cached / double(count * iterations)
        << " of " << CacheThread::cache.size() << "): " << cacheTimer.elapsed() << endl;

    // Run the threads with copy and benchmark the time
    CopyThread copyThreads[count];
    QElapsedTimer copyTimer;
    copyTimer.start();
    for (int i = 0; i < count; ++i)
        copyThreads[i].start();
    // Wait to finish
    for (int i = 0; i < count; ++i)
        copyThreads[i].wait();
    out << "Threads with copy: " << copyTimer.elapsed() << endl;

    return 0;
}

cachethread.h

#ifndef CACHETHREAD_H
#define CACHETHREAD_H

#include <QHash>
#include <QMap>
#include <QThread>
#include <QReadWriteLock>
#include <QRandomGenerator>

class Element
{
public:
    Element();
    Element(const Element &) = default;
    Element(Element &&) = default;

public:
    double mass;
    QMap<ushort, double> properties;
};

inline Element::Element()
    : mass(0)
{
}

extern const uint iterations;

class CacheThread : public QThread
{
public:
    static QHash<uint, Element *> cache;
    static QReadWriteLock lock;
    QRandomGenerator idgen;

    CacheThread();
    uint processedItems;
    static QAtomicInteger<uint> cached;

    void run() override;
};

inline CacheThread::CacheThread()
    : QThread(), processedItems(0)
{
}

class CopyThread : public QThread
{
public:
    CopyThread();
    QRandomGenerator idgen;
    uint processedItems;

    void run() override;
};

inline CopyThread::CopyThread()
    : QThread(), processedItems(0)
{
}

#endif // CACHETHREAD_H

cachethread.cpp

#include "cachethread.h"
#include <QDebug>

QHash<uint, Element *> CacheThread::cache;
QReadWriteLock CacheThread::lock;
const uint iterations = 5000000;
QAtomicInteger<uint> CacheThread::cached = 0;

void CacheThread::run()
{
    qDebug() << QThread::currentThreadId();
    while (processedItems < iterations) {
        // Generate a key to check if it exists
        uint id = idgen.bounded(0u, std::numeric_limits<uint>::max());
        // Our local element that we are going to use for comparison
        Element newElement;

        // Read and try to find it in the hash
        lock.lockForRead();
        QHash<uint, Element *>::Iterator iterator = cache.find(id);
        if (iterator != cache.end()) {
            lock.unlock();
            // We have found it.
            Element * ourElement = iterator.value();
            processedItems++;
            cached++;
            continue;
        }

        // Not found, lock for writing to insert into the hash
        lock.unlock();
        processedItems++;
        lock.lockForWrite();
        cache.insert(id, new Element(newElement));
        lock.unlock();
    }
}

void CopyThread::run()
{
    qDebug() << QThread::currentThreadId();
    while (processedItems < iterations) {
        // Generate a key (for symmetry with CacheThread)
        uint id = idgen.bounded(0u, std::numeric_limits<uint>::max());
        // Create one element object
        Element myElement;
        // Copy the element object instead of using caches, hashes and so on
        // (i.e. pass a copy to some other function). 'volatile' prevents the
        // compiler from optimizing out the copy (just for test purposes).
        volatile Element myCopiedElement(myElement);
        processedItems++;
    }
}

Here's the output (in release mode):

Debugging starts
0x7fffd77fe700
0x7fffd6ffd700
0x7fffd7fff700
0x7fffd67fc700
Threads with caching (0.750027 of 4997101): 19736
0x7fffd77fe700
0x7fffd7fff700
0x7fffd67fc700
0x7fffd6ffd700
Threads with copy: 148
Debugging has finished

For me it is obvious in terms of memory. If I don't use a cache, it's equivalent to having a loop that creates millions of NEW objects (in the heap). What is the size in memory? Millions multiplied by the size of the object...

Then it bugs me: why did you choose a binary tree (i.e. QMap) to keep the properties instead of a vector? You just have more overhead. And, since your object is already tiny, what's wrong with caching only the properties and constructing it out of the property map? Then you don't have a care in the world, just making bitwise copies (as the map or vector or whatever would already be constant)...

I can't have those objects on the stack, they are created dynamically...

Says who? What prevents you from creating them on the stack (look at the test case)?

What is the point of passing MyElement by value if MyElement has nothing else than a QSharedDataPointer pointing to a QSharedData?...

This I don't follow.

PS. As for heap vs. stack:

void HeapThread::run()
{
    while (processedItems < iterations) {
        Element * volatile x = new Element;
        delete x;
        processedItems++;
    }
}

void StackThread::run()
{
    while (processedItems < iterations) {
        volatile Element myElement;
        processedItems++;
    }
}

Timings (in ms, iterations = 10000000):
Heap: 245
Stack: 32
https://forum.qt.io/topic/93421/weak-qhash-per-value-what-do-you-think-about-qhash-myelement-qweakpointer-myelement
Recently I have received a few emails asking questions about how to use the version control client APIs to create workspaces and check in files, with the most recent one also asking if they could be used with TFS Impersonation to record the check-ins as if they had been done by a different user. Since I am in the process of transferring over to the Version Control Server team, I thought this would be a great place for me to dig in and use the APIs to put together a code example to answer these questions.

First of all, if you are new to the TFS 2010 OM you may want to start with this post, which introduces the TfsConnection, TfsConfigurationServer and TfsTeamProjectCollection objects. Secondly, if you have not used TFS Impersonation before, you will surely find it useful to read up on its introductory post here.

In order to use the code from the examples in this post you will need the following using directives:

using System;
using System.IO;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.Framework.Client;
using Microsoft.TeamFoundation.Framework.Common;
using Microsoft.TeamFoundation.VersionControl.Client;

Let's start by defining a scenario that we want to solve with this example. Since we want to use TFS Impersonation along with the version control client OM, a scenario that seems to work well would be one in which a service or a job checks in edits to a file on behalf of a user that triggers some event. In order to have some data for our example to process, let's define some variables up front. This, of course, would need to come from some type of configuration file or user input if this example were to be turned into production code.

// Holds the uri that points to the team project collection we will communicate with.
Uri uriToTeamProjectCollection;

// The account name of the user we will be performing this check in on behalf of.
String userAccountName;

// The server path of the file in TFS that we will be modifying.
String serverPathForFile;

// The place on the local file system where we want to download the file from TFS.
String localWorkspacePathForFile;

// Let's assume for this example that the file actually already exists on disk in the
// form that we want to send it to the server as. That is, we want to copy the contents
// of this file to send to the server. This is just our way of making changes to the file.
String localCopyOfFile;

Since I wrote the majority of the TFS Impersonation code and haven't touched the version control client OM before, let's start with the easy stuff first :). As you know from reading the TFS Impersonation post above, we will actually have to make two connections to our collection in order to perform this scenario. We will use the first connection to determine the IdentityDescriptor for the user we will be impersonating, since that is the piece of information needed to perform impersonation:

TfsTeamProjectCollection baseUserTpcConnection = new TfsTeamProjectCollection(uriToTeamProjectCollection);
IIdentityManagementService ims = baseUserTpcConnection.GetService<IIdentityManagementService>();

// Read out the identity of the user we want to impersonate
TeamFoundationIdentity identity = ims.ReadIdentity(IdentitySearchFactor.AccountName,
                                                   userAccountName,
                                                   MembershipQuery.None,
                                                   ReadIdentityOptions.None);

Now that we have the identity of the user that we want to impersonate, the next step is to actually create the impersonated connection to the collection. But wait! This is a great time to remember that in order for impersonation to work, the user that the process is running as must have the "Make requests on behalf of others" permission for this collection. If the user does not have this permission then you are going to see an Access Denied error message. So, now that that is taken care of, let's actually create the impersonated connection to the collection.

TfsTeamProjectCollection impersonatedTpcConnection = new TfsTeamProjectCollection(uriToTeamProjectCollection,
                                                                                  identity.Descriptor);

Great! So now we have our impersonated connection. The next step is to check in the changes to a file using this impersonated connection. In order to do this, we must start by creating a workspace and setting up working folder mappings so that we can download the existing file from the server. Note, for any of you out there wondering, that the following code would work in a non-impersonation scenario as well.

VersionControlServer sourceControl = impersonatedTpcConnection.GetService<VersionControlServer>();

// Create the workspace
Workspace workspace = sourceControl.CreateWorkspace(String.Format("MyTempWorkspace-{0}", DateTime.Now));

// Map the file that we want to change
workspace.CreateMapping(new WorkingFolder(serverPathForFile, localWorkspacePathForFile));

In the above code, we get the VersionControlServer object because it is the source for performing all version control operations. Next, I create a workspace with a name that has the current time included in it. The reason that I use a time stamp within the name is that the name of a workspace for a given computer must be unique, and although we will try to clean up this workspace later, we do not want a crash of the program to cause us to fail when creating the next workspace due to a naming conflict. Finally, we map the server path for the file to a local path. If you are going to change multiple files, you do not need to create multiple mappings; you can instead pick a root folder that both of the files live in.

The next thing we need to do is actually download the file that exists on the TFS server. In our case, this is as simple as calling the following method:

// Download the file
workspace.Get();

Now that we have the file downloaded locally, we could add an optimization that terminates execution if the file that we want to upload is the same as the existing file. For brevity, I will avoid that optimization and leave that to you if you feel that you need it. Thus, the next step is to pend the edit that we want to make and then copy the contents of the file to the file that is mapped in the workspace:

// Pend the edit on the file to remove the read-only bit
workspace.PendEdit(serverPathForFile);

// Copy our file to the mapped location
File.Copy(localCopyOfFile, localWorkspacePathForFile, true);

Note that in the PendEdit call, I could have passed either the server path or the local path. Finally, all we have left to do is check in the changes that we have pended and clean up our workspace. This can be done via the following two calls:

// CheckIn the file
workspace.CheckIn(new WorkspaceCheckInParameters(workspace.GetPendingChangesEnumerable(),
                                                 "Comment detailing what happened."));

// Clean up the workspace
workspace.Delete();

And voila! The changes to the file are now on the server, and they will be recorded as if the impersonated user had performed the operation. Also, it is good to know that if you didn't implement the optimization of comparing the files and they happen to be the same, then no check-in will actually happen.

Hopefully that gives you a better understanding of both how TFS Impersonation works and how the version control client OM works. Please send me any questions you have either via the contact link above or the comment section below. Happy Coding!
http://blogs.msdn.com/b/taylaf/archive/2010/03/29/using-tfs-impersonation-with-the-version-control-client-apis.aspx
Linear regression allows us to model the relationship between variables. This might allow us to predict a future outcome if we already know some information, or give us an insight into what is needed to reach a goal. To fit a linear regression model, we need one dependent variable, whose changes we will study as one or more independent variables are changed. As an example, we could model how many goals are scored (dependent variable) as more shots are taken (independent variable). As we have just one independent variable, this is a simple linear regression; models that take in multiple independent variables are known as multiple linear regressions.

This article is going to apply a simple linear regression model to squad value data against performance in the Premier League. This might help us to see how much a squad might need to invest to avoid relegation or make European spots, or to create a data-driven target for our team. The steps that we are going to take include a quick look at & exploration of our dataset, creating the model & then making some assessments on the back of it. Then, we'll calculate a better metric to improve our model. We will use the sklearn module to make this much less intimidating than it might seem right now!

Let's get the modules in place and read in a local dataset called positionsvsValue – which you can download here.

Initial set-up & exploration

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline

from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn import metrics

#load data
data = pd.read_csv("positionsvsValue.csv")
data.head()
data.describe()

So we have a 220-row dataset, with each row being a team in each Premier League season since 2008/09. For each of the teams, we get squad sizes, ages, squad value (in Euros) as well as performance data with goal difference, points & position.
The values are taken from Transfermarkt (once again, you can find the data here). Our aim is to get a model together that would help us to predict a team's points based on their squad value. Before we do that, we should check to see what the relationships are among some of the key variables. Let's do that visually with a pair plot.

sns.pairplot(data[['Season','GD', 'Squad Value', 'Points', 'Position']])

<seaborn.axisgrid.PairGrid at 0x1a261dbb50>

Some interesting points to keep in mind:

- Points & goal difference correlate really strongly, as you might expect.
- Squad value goes up as goal difference and points go up, but as more of a curve than a line.
- Squad value has increased over time (important! We'll come back to this)

Thinking back to our initial problem – modelling squad value on performance – we need to define what performance is. I think that we can answer this by seeing which of points and position correlates more with squad value. Let's check if position correlates more than points:

abs(data['Squad Value'].corr(data['Position'])) > data['Squad Value'].corr(data['Points'])

False

Seemingly not, so for the purpose of the article, we're going to build our model around how many points you should expect for your squad value, not the position.

Building our Model

So let's get to it. We'll take the following steps:

1) Get and reshape the two columns that we want to use in our model: Points & Squad Value
2) Split each of the two variables into a training set and a test set. The train set will build our model; the test set will allow us to see how good the model is.
3) Create an empty linear regression model, then fit it against our two training sets
4) Examine and test the model

Let's work through each step.

#1- Get our two columns into variables, then reshape them

X = data['Squad Value']
y = data['Points']

X = X.values.reshape(-1,1)
y = y.values.reshape(-1,1)
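Before handing things over to sklearn, it's worth seeing what "fitting" a simple linear regression actually computes: the slope and intercept that minimise the sum of squared errors. A tiny pure-Python sketch on made-up numbers (not the squad data):

```python
def fit_simple_ols(xs, ys):
    """Return (intercept, slope) minimising the sum of squared errors."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # slope = covariance(x, y) / variance(x), both left unnormalised here
    cov_xy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    slope = cov_xy / var_x
    intercept = mean_y - slope * mean_x
    return intercept, slope

# Toy data lying exactly on y = 2x + 1
intercept, slope = fit_simple_ols([1, 2, 3, 4], [3, 5, 7, 9])
```

sklearn's LinearRegression arrives at the same least-squares answer; it just does so with a vectorised implementation and support for multiple features.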
There are a few arguments we have to pass, in addition to the variables that will be split. There is test_size, which tells the function what % of the split should be in the test side. Random_state is not necessary, but it sets a starting point for the random number generation involved in the split – if you want your data to look like this tutorial, keep this the same. #2- Use the train_test_split function to create our training sets & test sets X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=101) Next up is creating the empty model, then fitting it with our training data. The sklearn package means that this only takes a couple of lines: lm = LinearRegression() lm.fit(X_train,y_train) LinearRegression(copy_X=True, fit_intercept=True, n_jobs=None, normalize=False) Holy shit, you’ve just made a linear regression model! Bit of an anticlimax until we do something with it… The final part is examining the model. This means seeing what conclusions it gives to answer our main question (value -> performance), and importantly, how valid they are. We can start by checking the coefficient. This is the amount that we expect our response variable (points) to change for every unit that our predictor variable changes (squad value in m Euros). Simply, for every extra million we put into our squad value, how many extra points should we get? We find out with the .coef_ method of the model. print(lm.coef_) [[0.07152655]] So on average, an extra million gets you 0.07 points. Looks like we’re going to need an absolute warchest to stay up. We now need to test the model by checking predictions from the trained model against the test data that we know is true. Let’s check out a few ways of doing this. Firstly, we’ll create some predictions using lm.predict – we’ll feed it the real squad value data, and it will predict the points based on the model. 
Then we’ll use this in 2 charts, firstly plotting the real data against the prediction line, then plotting the prediction against the true data. predictions = lm.predict(X_test) plt.scatter(X_test, y_test, color='purple') plt.plot(X_test, predictions, color='green', linewidth=3) plt.title("EPL Squad value vs points - Model One") plt.show() plt.scatter(y_test,predictions) <matplotlib.collections.PathCollection at 0x1a27b8ab90> Lots of values that match up well, and lots that don’t. Tough to see how far we are out, though. So let’s get a histogram to plot the differences between the predictions and the true data: plt.title('How many points out is each prediction?') sns.distplot((y_test-predictions),bins=50, color = 'purple') A few where we are way out, like 30-40 points out. But mostly, we are within 10 points or so either way. We are going to look to improve this, so to help with the comparison let’s use a metric called ‘mean absolute error’. This is simply the average difference between the prediction and the truth. Hopefully, we can reduce this with the next one. print('Mean Absolute Error:', metrics.mean_absolute_error(y_test, predictions)) Mean Absolute Error: 9.728206663986418 Alternatively, we could put these in a table, rather than plot them. But that is a bit less friendly to work through. df = pd.DataFrame({'Actual': y_test.flatten(), 'Predicted': predictions.flatten()}) df.head() df['Actual'].corr(df['Predicted']) 0.6540205213240837 Improving the model When we took an exploratory look at the data, we found that team values had increased over seasons. As such, comparing a 100m squad in 2008 to a 100m squad in 2018 probably isn’t fair. To counter this, we are going to create a new ‘Relative Value’ column. This will take each team in a season, and divide it by the highest value in that league. These values will be between 0 & 1 and give a better impression of comparative buying power, hence performance in the league. 
Hopefully it will provide for a better model than the example above. Let's create this column as a list, then add it to our dataframe.

#Blank list
relativeValue = []

#Loop through each row
for index, team in data.iterrows():

    #Obtain which season we are looking at
    season = team['Season']

    #Create a new dataframe with just this season
    teamseason = data[data['Season'] == season]

    #Find the max value
    maxvalue = teamseason['Squad Value'].max()

    #Divide this row's value by the max value for the season
    tempRelativeValue = team['Squad Value']/maxvalue

    #Append it to our list
    relativeValue.append(tempRelativeValue)

#Add list to new column in main dataframe
data["Relative Value"] = relativeValue

data.head()

Looking good: the 4 teams below Chelsea do indeed have lower squad values, as represented by lower relative values. Let's get a pairplot to check out the new column's relationship with the others.

sns.pairplot(data[['GD', 'Squad Value', 'Relative Value', 'Points', 'Position']])

<seaborn.axisgrid.PairGrid at 0x1a27e22950>

Looks quite similar to the squad value relationships in many parts, but it looks to have a stronger correlation with points and goal difference. Hopefully this will give us a more accurate model. Let's create a new one in the same way as above.

#Assign relevant columns to variables and reshape them
X = data['Relative Value']
y = data['Points']

X = X.values.reshape(-1,1)
y = y.values.reshape(-1,1)

#Create training and test sets for each of the two variables
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=101)

#Create an empty model, then train it against the variables
lm = LinearRegression()
lm.fit(X_train,y_train)

LinearRegression(copy_X=True, fit_intercept=True, n_jobs=None, normalize=False)

And we'll again look at the coefficient to see what our model tells us to expect.
We’ll divide it by 10, to see how many points increasing our squad value by 10% of the most expensive team should earn print(lm.coef_/10) [[5.31884201]] predictions = lm.predict(X_test) plt.scatter(X_test, y_test, color='purple') plt.plot(X_test, predictions, color='green', linewidth=3) plt.title("Relative Squad value vs points - Model Two") plt.show() The model predicts just over 5 points. This seems to make sense, as the difference between top and bottom would often range around 53 or so points. So for every 10% that you are off of the most expensive team, our model suggests that you should expect to drop 5.3 points. Let’s run the same tests as before to check out whether or not this new model performs better. Firstly, the same two charts – the scatter plot & the distribution of the errors. The scatter plot looks to to have more of a correlation and the distribution also is a bit tighter, with fewer big errors. plt.scatter(y_test,predictions) <matplotlib.collections.PathCollection at 0x1a28ae3450> plt.title('How many points out is each prediction?') sns.distplot((y_test-predictions),bins=50,color='purple'); To back up the eye test, we’ll use our mean absolute error metric – the average difference between the prediction and the truth. Our previous metric was 9.7… print('MAE:', metrics.mean_absolute_error(y_test, predictions)) MAE: 8.972066563663786 So that’s nearly an 8% improvement… not a gamechanger, but I think we can agree that this model makes more sense than the one before. Not only does it fit better (correlation between predictions/reality also increased significantly), but we know from our own knowledge of football that transfer fees and market values have hugely inflated over the length of our dataset. There are other oddities that you will have noticed, such as the extreme outliers (Leicester 15/16, Chelsea 15/16, Chelsea 18/19), the cluster of teams around the relegation places. 
All of these could do with their own further analysis, but that is beyond the scope of this tutorial. It would make for a really interesting piece itself if you fancy trying your hand at this!

Summary

That just about covers off our simple linear regression 101 – let's summarise what we learned.

1) Simple linear regression is an approach to explaining how one variable may affect another.
2) We built a model where we see how squad value affects points.
3) We observed what the model suggested and saw how many points an extra million spent might gain.
4) We checked the validity of the model and saw what the average error was.
5) We repeated the above with another (new) metric to create an improved model, reducing the error.

Great effort making it this far. For developing these concepts, you may want to gather data from other leagues to see if squad value is as closely related to winning as it is here. Otherwise, with aggregated event data, you could look to see how reliable shots or passes are as goal predictors. As for building your stats model knowledge, take a read on multiple linear regressions and we will look to have an article up on this topic soon!

Any questions, you'll find us on Twitter @fc_python.
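As a footnote to the metric used above, mean absolute error is simple enough to compute by hand, which makes it a good sanity check on library output. A pure-Python sketch on toy numbers (not the model's actual predictions):

```python
def mean_absolute_error(actual, predicted):
    """Average absolute difference between paired actual and predicted values."""
    if len(actual) != len(predicted):
        raise ValueError("inputs must be the same length")
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

# Toy points totals vs predictions: errors of 2, 1 and 3, so the MAE is 2.0
mae = mean_absolute_error([40, 55, 90], [42, 54, 87])
```

On identical inputs this matches sklearn's metrics.mean_absolute_error, which is the version used in the article.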
https://fcpython.com/machine-learning/introduction-to-simple-linear-regression-in-python
CC-MAIN-2021-43
refinedweb
2,186
64.2
dan wrote:

> This isn't a huge pain in the simple case, but it quickly becomes
> annoying when I want to do the equivalent of
>
> if (/(\d+)\s+(\d+)/) {
>     ($num1, $num2) = ($1, $2);
> } elsif (/(\w+)\s+(\w+)/) {
>     ($word1, $word2) = ($1, $2);
> } # etc.
>
> as the two-part test in Python doesn't lend itself easily to a long
> if/elif/elif chain.

I'm tempted to mention the "replace nested conditionals with guard clauses" refactoring rule, but I'll leave that for another day...

> I've "solved" the problem locally by using the following helper
> function:
>
> # research (regexp, string) is the same as regexp.search (string),
> # but saves off the match results into 'rematch', so we can test for
> # a regexp in an if statement and use the results immediately.
> rematch = None
> def research (regexp, string):
>     global rematch
>     rematch = regexp.search (string)
>     return (rematch != None)
>
> So that I can write:
>
> if research (r"(\d+)\s+(\d+)", line):
>     (num1, num2) = rematch.groups()
> elif research (r"(\w+)\s+(\w+)", line):
>     (word1, word2) = rematch.groups()
> # etc.
>
> I suppose I can even inject research into the re module, and inject a
> similar method into regular expression objects, etc, to make it nicer.
>
> Is there a cleaner, or more approved, way, to accomplish this task?
> If not, does it make any sense to have a re.last_match object that
> automatically contains the last match, allowing, for example:
>
> if re.search (r"(\d+)\s+(\d+)", line):
>     (num1, num2) = re.last_match.groups()
>
> Or is that too side-effecty and non-Pythonic?

Won't fly -- what if two threads are using the same regular expression? (or in your rematch example, what if two threads are using regular expressions...)
::: There's actually a slightly experimental feature in SRE that can be useful here: combine your expressions into one big expression, and use the new "lastgroup" attribute to figure out which one that matched:

>>> import re
>>> p = re.compile("(?P<digits>\d+)|(?P<text>\w+)")
>>> m = p.search("123 456")
>>> print m.lastgroup, m.groups()
digits ('123', None)

(however, keeping track of subgroups can be a major PITA with this approach...)

</F>
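A later note on this thread: since Python 3.8, the assignment expression (the "walrus" operator, :=) solves dan's original problem directly, keeping the match object available inside the condition without any global state. A small sketch (the classify function and its return values are my own illustration, not code from the thread):

```python
import re

def classify(line):
    # The walrus operator binds the match object in the condition itself,
    # which is exactly the if/elif chain dan wanted to write.
    if m := re.search(r"(\d+)\s+(\d+)", line):
        return ("numbers", m.group(1), m.group(2))
    elif m := re.search(r"(\w+)\s+(\w+)", line):
        return ("words", m.group(1), m.group(2))
    return (None, None, None)

print(classify("123 456"))   # ('numbers', '123', '456')
print(classify("foo bar"))   # ('words', 'foo', 'bar')
```

Because m is an ordinary local variable, this also sidesteps the threading objection raised above: there is no shared re.last_match to race on.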
https://mail.python.org/pipermail/python-list/2000-September/022482.html
import "gopkg.in/src-d/go-vitess.v1/vt/vttablet/heartbeat"

Package heartbeat contains a writer and reader of heartbeats for a master-slave cluster. This is similar to Percona's pt-heartbeat, and is meant to supplement the information returned from SHOW SLAVE STATUS. In some circumstances, lag returned from SHOW SLAVE STATUS is incorrect and is at best only at 1 second resolution. The heartbeat package directly tests replication by writing a record with a timestamp on the master, and comparing that timestamp after reading it on the slave. This happens at the interval defined by heartbeat_interval. Note: the lag reported will be affected by clock drift, so it is recommended to run ntpd or similar.

The data collected by the heartbeat package is made available in /debug/vars in counters prefixed by Heartbeat*. It's additionally used as a source for healthchecks and will impact the serving state of a tablet, if enabled. The heartbeat interval is purposefully kept distinct from the health check interval because lag measurement requires more frequent polling than the healthcheck is typically configured for.

Files: heartbeat.go, reader.go, writer.go

Reader reads the heartbeat table at a configured interval in order to calculate replication lag. It is meant to be run on a slave, and paired with a Writer on a master. It's primarily created and launched from Reporter. Lag is calculated by comparing the most recent timestamp in the heartbeat table against the current time at read time. This value is reported in metrics and also to the healthchecks.

func NewReader(checker connpool.MySQLChecker, config tabletenv.TabletConfig) *Reader

NewReader returns a new heartbeat reader.

Close cancels the watchHeartbeat periodic ticker and closes the db pool. A reader object can be re-opened after closing.

GetLatest returns the most recently recorded lag measurement or error encountered.
Init does last-minute initialization of db settings, such as dbName and keyspaceShard. InitDBConfig must be called before Init.

Open starts the heartbeat ticker and opens the db pool. It may be called multiple times, as long as it was closed since last invocation.

Writer runs on master tablets and writes heartbeats to the _vt.heartbeat table at a regular interval, defined by heartbeat_interval.

func NewWriter(checker connpool.MySQLChecker, alias topodatapb.TabletAlias, config tabletenv.TabletConfig) *Writer

NewWriter creates a new Writer.

Close closes the Writer's db connection and stops the periodic ticker. A writer object can be re-opened after closing.

Init runs at tablet startup, does last-minute initialization of db settings, and creates the necessary tables for heartbeat. InitDBConfig must be called before Init.

Open sets up the Writer's db connection and launches the ticker responsible for periodically writing to the heartbeat table. Open may be called multiple times, as long as it was closed since last invocation.

Package heartbeat imports 19 packages and is imported by 1 package. Updated 2019-06-13.
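The lag calculation the Reader performs reduces to a simple subtraction of the most recent heartbeat timestamp from the current time. A hypothetical, language-agnostic sketch of that idea (the real logic lives in reader.go; this function name and signature are invented):

```python
import time

def replication_lag(last_heartbeat_ts, now=None):
    """Lag = wall-clock time minus the most recent timestamp read from
    the heartbeat table. As the package docs note, the result is affected
    by clock drift between master and slave, so it is clamped at zero."""
    if now is None:
        now = time.time()
    return max(0.0, now - last_heartbeat_ts)

print(replication_lag(100.0, 103.5))  # 3.5 seconds of lag
```

Clamping at zero is one way to handle a slave clock that runs behind the master's; a production reader would more likely surface that as a drift warning.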
https://godoc.org/gopkg.in/src-d/go-vitess.v1/vt/vttablet/heartbeat
NAME
sys/resource.h - definitions for XSI resource operations

SYNOPSIS
#include <sys/resource.h>

DESCRIPTION
The <sys/resource.h> header shall define the following symbolic constants as possible values of the which argument of getpriority() and setpriority():

PRIO_PROCESS  Identifies the who argument as a process ID.
PRIO_PGRP     Identifies the who argument as a process group ID.
PRIO_USER     Identifies the who argument as a user ID.

The following type shall be defined through typedef:

rlim_t  Unsigned integer type used for limit values.

The following symbolic constants shall be defined:

RLIM_INFINITY   A value of rlim_t indicating no limit.
RLIM_SAVED_MAX  A value of type rlim_t indicating an unrepresentable saved hard limit.
RLIM_SAVED_CUR  A value of type rlim_t indicating an unrepresentable saved soft limit.

On implementations where all resource limits are representable in an object of type rlim_t, RLIM_SAVED_MAX and RLIM_SAVED_CUR need not be distinct from RLIM_INFINITY.

The following symbolic constants shall be defined as possible values of the who parameter of getrusage():

RUSAGE_SELF      Returns information about the current process.
RUSAGE_CHILDREN  Returns information about children of the current process.

The <sys/resource.h> header shall define the rlimit structure that includes at least the following members:

rlim_t rlim_cur  The current (soft) limit.
rlim_t rlim_max  The hard limit.

The <sys/resource.h> header shall define the rusage structure that includes at least the following members:

struct timeval ru_utime  User time used.
struct timeval ru_stime  System time used.

The timeval structure shall be defined as described in <sys/time.h>.

The following symbolic constants shall be defined as possible values for the resource argument of getrlimit() and setrlimit():

RLIMIT_CORE    Limit on size of core file.
RLIMIT_CPU     Limit on CPU time per process.
RLIMIT_DATA    Limit on data segment size.
RLIMIT_FSIZE   Limit on file size.
RLIMIT_NOFILE  Limit on number of open files.
RLIMIT_STACK   Limit on stack size.
RLIMIT_AS      Limit on address space size.

The following shall be declared as functions and may also be defined as macros. Function prototypes shall be provided.

int getpriority(int, id_t);
int getrlimit(int, struct rlimit *);
int getrusage(int, struct rusage *);
int setpriority(int, id_t, int);
int setrlimit(int, const struct rlimit *);

The id_t type shall be defined through typedef as described in <sys/types.h>.

Inclusion of the <sys/resource.h> header may also make visible all symbols from <sys/time.h>.

The following sections are informative.

APPLICATION USAGE
None.

RATIONALE
None.

FUTURE DIRECTIONS
None.

SEE ALSO
<sys/time.h>, <sys/types.h>, the System Interfaces volume of IEEE Std 1003.1-2001, getpriority(), getrusage(), getrlimit().
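On POSIX systems, these same interfaces are exposed through Python's standard resource module, which makes them easy to poke at interactively. A quick sketch querying the RLIMIT_NOFILE pair and the rusage fields described above:

```python
import resource

# Soft (rlim_cur) and hard (rlim_max) limits on open file descriptors.
# A value equal to resource.RLIM_INFINITY means "no limit".
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("soft:", soft, "hard:", hard)

# getrusage(RUSAGE_SELF) returns the rusage members for this process,
# including ru_utime (user time) and ru_stime (system time) as floats.
usage = resource.getrusage(resource.RUSAGE_SELF)
print("user time:", usage.ru_utime, "system time:", usage.ru_stime)
```

Lowering the soft limit with resource.setrlimit (never above the hard limit, unless privileged) mirrors the setrlimit() prototype above.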
http://manpages.ubuntu.com/manpages/intrepid/man7/sys_resource.h.7posix.html
Introduction

I've recently switched to Haskell from Python, and I think it might be interesting to look at how far from being Pythonic Haskell actually is - how much of a change did I actually make?

Why Haskell?

The reason for changing was - having spent half a decade working on large - well, medium-sized these days - concurrent programs, which produce the nastiest bugs I've run into because they are seldom reproducible, the primary source of those bugs seemed to be errors in dealing with shared data objects. Getting the locking wrong, or simply failing to lock something that was shared, are often the cause. A number of solutions to the first exist, but the latter seemed to be a language issue. Haskell provided the best solution I found - that a data object may be mutated is part of its type, so failure to properly deal with them in a concurrent environment can be caught by the compiler. If that type system sounds interesting, you might want to read this article that covers its features.

The Zen of Python

It's generally agreed upon that Tim Peters (with help from others) captured the philosophy behind Python and its libraries in a document called The Zen of Python. You can read this by running the command python -m this. So I'm going to go through each item, and see how well Haskell adheres to that element of the Python philosophy. Scored on a scale of 1 (complete fail) to 11 (better than Python), with Python scoring a 10. That the scale goes to 11 tells you exactly how serious I am.

Beautiful is better than ugly.
They say "beauty is in the eye of the beholder, but ugly goes clean to the bone". So I want to delay this one until I've talked a bit more about Haskell to provide a basis for comparison.

Explicit is better than implicit: 9
I'm tempted to rate this as better than Python since mutability is made explicit, but the do statement - with its implicit parameters wrapping statements - detracts a bit.
They are just syntactic sugar, and can be translated back to the explicit form in your head, so it's not a major problem. But they are everywhere!

Simple is better than complex: 10
Haskell programmers prefer pure code because it is simple. Being lazy is much simpler than yield.

Complex is better than complicated: 11
Monads are very simple things that have very complex effects on the language and programs. They replace a number of things that are complicated in Python.

Flat is better than nested: 11
Haskell's module system is very similar to Python's. At the language level, Haskell provides the let and where statements, which are a recurring request for Python, because they make managing the top-level namespace easier.

Sparse is better than dense: 10
Split decisions here. While the language encourages short functions - which leads to sparseness - it also encourages long sequences of combinators or filters, which can lead to dense code in the function.

Readability counts.
Also deferred to the next section.

Special cases aren't special enough to break the rules: 11
Haskell has fewer special cases than any other language I've run into. Maybe I just need to look harder?

Although practicality beats purity: 6
See the previous note. They didn't even special-case numeric conversions! Meaning you either have to convert integers to a fractional type to involve them in a division, or write functions that are context-sensitive to what they return. Ok, the latter isn't hard, but not enough of the builtins do it.

Errors should never pass silently: 9
The language doesn't eat errors, and generally makes doing so harder than in Python. However, there are multiple ways of handling errors, leading to some confusion.

Unless explicitly silenced: 10
Yes, you can explicitly silence errors.

In the face of ambiguity, refuse the temptation to guess: 11
The type system pretty much disallows ambiguity. This carries through to much of the rest of the language.
There are even language extensions that allow the programmer to explicitly declare some cases as "not ambiguous."

There should be one -- and preferably only one -- obvious way to do it: 2
Many functions and operators have multiple names, just to start with. It's not at all uncommon for combinator libraries to have multiple similar combinators, allowing the exact same process to be specified in a combinatorial number of ways.

Although that way may not be obvious at first unless you're Dutch. NA
Not being Dutch, I can't properly evaluate this one.

Now is better than never: 7
There are a number of areas that still need enterprise-quality libraries.

Although never is often better than right now: 7
The Haskell Platform - Haskell's answer to "Batteries Included" - shows signs of some things being done right now that might better have been done never.

If the implementation is hard to explain, it's a bad idea: 9
Most implementations are easy to explain as long as you keep in mind that a monad is a monoid in the category of endofunctors. In other words, Haskell programs make heavy use of monads (the language even has special syntax for them), which aren't available in commonly used languages. It's not unusual to wind up with stacks of them, so there are libraries for working with those stacks. Many implementations are easy to explain once you get those. See the simple vs. complex koan.

If the implementation is easy to explain, it may be a good idea: 9
See above.

Namespaces are one honking great idea -- let's do more of those! 6
There are no objects. While namespaces nest in let and where functions, you don't have methods. If you want to use two data types that have what would be a method name in common, you either don't use it, or arrange to refer to it by a different (qualified, for people who know Haskell) name.

Beautiful is better than ugly.
And of course, "Readability counts." The first time I saw a Haskell program, my reaction was "That's uglier than Perl".
It turns out that Haskell lets programmers define operators. And they do. While this might be a disaster with other languages, the functional nature of Haskell means that a common programming paradigm is a function that takes two functions as arguments and returns a function that combines them in some way - a combinator. It's much easier to read - or use - such things as expressions than as actual function invocations. Even Python libraries recognize this. Things like XPath and regular expressions are generally done with strings holding expressions that are fed to functions to be parsed, rather than being broken out into functions that are then combined by other functions. Haskell allows such things to be expressed in Haskell, and hence compiled and checked - including for type - by the compiler.

Of course, this means that when you learn a new library, you may have to learn a new set of operators as well as functions. And since some people want to write code left-to-right and others right-to-left, it's not at all uncommon to have three versions of a function: one that applies the left-hand operand first, one that applies the right-hand operand first, and a function name for those who want that.

The end result is code that can be beautiful and very readable - once you know the operators involved. But it in no way resembles the "executable pseudo-code" that Python has been described as. So, I'm going to give "beautiful is better than ugly" an 8, because it encourages beautiful code nearly as well as Python does, but doesn't do much to discourage ugly code. On the other hand, "readability counts" gets a 3, because operators tend to provide much less information than function names.

Summary

Adding them up, we get that Haskell scores an 8.2. That actually feels about right to me. Of course, you're free to disagree. In particular, some of the koans are subject to interpretation, and you may or may not agree with my interpretation, much less my scoring of Haskell on them.
If you disagree strongly, please, publish your own scoring. Or even your scoring of a third language!
http://blog.mired.org/2014/05/how-pythonic-is-haskell.html?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+MiredInCode+%28Mired+in+code%29
18.2.49. shmem_fence

shmem_fence - Provides a separate ordering on the sequence of puts issued by this PE to each destination PE.

18.2.49.1. SYNOPSIS

C or C++:

#include <mpp/shmem.h>
void shmem_fence(void)

Fortran:

INCLUDE "mpp/shmem.fh"
CALL SHMEM_FENCE

18.2.49.2. DESCRIPTION

The shmem_fence() routine provides an ordering on the put operations issued by the calling PE prior to the call to shmem_fence() relative to the put operations issued by the calling PE following the call to shmem_fence(). It guarantees that all such prior put operations issued to a particular destination PE are fully written to the symmetric memory of that destination PE before any such following put operations to that same destination PE are written to the symmetric memory of that destination PE. Note that the ordering is provided separately on the sequences of puts from the calling PE to each distinct destination PE. The shmem_quiet() routine should be used instead if ordering of puts is required when multiple destination PEs are involved.

18.2.49.3. NOTES

The shmem_quiet function should be called if ordering of puts is desired when multiple remote PEs are involved.

See also intro_shmem(3)
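The per-destination guarantee in the description can be made concrete with a toy model (this is not SHMEM code; the class and its names are invented purely for illustration): puts to each destination accumulate as "pending", and a fence ensures every prior put to a destination is delivered before any later put to that same destination.

```python
from collections import defaultdict

class ToyPE:
    """Toy model of per-destination put ordering with a fence."""
    def __init__(self):
        self.pending = defaultdict(list)    # puts issued but not yet delivered
        self.delivered = defaultdict(list)  # "symmetric memory" per dest PE

    def put(self, dest, value):
        self.pending[dest].append(value)

    def fence(self):
        # All prior puts to each destination are fully written before
        # any puts issued after the fence reach that same destination.
        for dest, values in self.pending.items():
            self.delivered[dest].extend(values)
        self.pending.clear()

pe = ToyPE()
pe.put(1, "a")
pe.put(2, "x")
pe.fence()
pe.put(1, "b")
print(pe.delivered[1])  # ['a'] - 'b' waits for a later fence/quiet
```

Note what the model does not promise: nothing orders destination 1's puts relative to destination 2's, which is exactly why shmem_quiet exists for the multi-destination case.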
https://docs.open-mpi.org/en/v5.0.x/man-openshmem/man3/shmem_fence.3.html
Inheri-tance, actually.. you had a typo. ;-)

By my standards, there's exactly one reason to use inheritance: to define the shared interface and default behavior for a family of similar classes. The code you gave doesn't quite do that. The default interface for Vehicle is 'pretty much anything'. You've tried to roll your own way of making the interfaces stable, but it doesn't work as well as it could. There's no guarantee Bike and Car will share the same data members (and, by extension, the same access methods), for instance. You've forced the interfaces to be the same by defining the '%valid' hash redundantly, but you'd be better off sinking that information down to the Vehicle class. In fact, I'd suggest getting rid of AUTOLOAD entirely, defining the methods you want explicitly in Vehicle, then scrapping the '%ro' hash and overriding the subclass methods to get the behavior you want:

package Vehicle;

sub new {
    my ($type, $data) = @_;
    my $O = bless $type->_defaults(), $type;            # [1]
    for $k (keys %$O) {
        $O->{$k} = $data->{$k} if defined ($data->{$k});  # [2]
    }
    $O->_sanity_check();                                # [3]
    return $O;
}

=item new (hash-ref: data) : Vehicle-ref

[1] We start from a prototype data structure known to be good.

[2] Now we override any values defined by the arguments. We only override values that are already in the prototype, though. We don't want to add anomalous data members by just copying everything straight over.

[3] Now we run the new values through a hygiene filter to make sure everything's still good.

=cut

sub _defaults {
    return ({
        'wheels'     => 0,
        'doors'      => 0,
        'color'      => 'none',
        'passengers' => 0,
    });
}

=item _defaults (nil) : hash-ref

This method takes no input, and returns a pre-filled hash of valid attributes for a given vehicle type. Technically, this is a lobotomized version of the Factory Method design pattern.

=cut

sub _sanity_check {
    my $O = shift;
    if ($O->{'wheels'}) {
        print STDERR "I ran into a problem.. "
            . "a generic vehicle shouldn't have "
            . $O->{'wheels'} . ' ' . "wheels.\n";
    }
    if ($O->{'doors'}) {
        print STDERR "I ran into a problem.. "
            . "a generic vehicle shouldn't have "
            . $O->{'doors'} . ' ' . "doors.\n";
    }
    if ('none' ne $O->{'color'}) {
        print STDERR "I ran into a problem.. "
            . "a generic vehicle shouldn't be colored "
            . $O->{'color'} . ".\n";
    }
    if ($O->{'passengers'}) {
        print STDERR "I ran into a problem.. "
            . "a generic vehicle doesn't carry "
            . $O->{'passengers'} . ' ' . "passengers.\n";
    }
    return;
}

=item _sanity_check (nil) : nil

This method doesn't take any input or return any value as output, but it does print any errors it sees to STDERR. In a real program, we'd use some kind of trace to see when and where the error occured.

=cut

sub _access {
    my ($O, $item, $value) = @_;
    if (defined ($value)) {
        $O->{$item} = $value;
        $O->_sanity_check();
    }
    return ($O->{$item});
}

=item _access (item, value) : value

This is a generic back-end for the accessor functions. It takes the attribute name and an optional value as input, and returns the item's value as output. I've thrown in a sanity check every time an item's value is changed, just for the sake of paranoia.

=cut

sub Wheels     { return ($_[0]->_access ('wheels', $_[1])); }
sub Doors      { return ($_[0]->_access ('doors', $_[1])); }
sub Color      { return ($_[0]->_access ('color', $_[1])); }
sub Passengers { return ($_[0]->_access ('passengers', $_[1])); }

=item accessor methods

These are trivial methods that handle get-and-set operations for the attributes. The fact that _access() does an automatic sanity check after setting any new value means we don't have to put sanity checks in each of these methods.. though we probably would do individual sanity checks in a real application. This is one of those cases where 'lazy' means 'doing lots of work now so we won't have to do even more work later'.
=cut

package Bike;
@ISA = qw( Vehicle );

=item Class Bike

This class will override _defaults(), _sanity_check(), and possibly some of the access methods if we want to make 'wheels' always equal 2, for instance:

    sub Wheels {
        if ((defined $_[1]) && (2 != $_[1])) {
            print "You can't do that. I won't let you. So there.\n";
        }
    }

=cut

package Car;
@ISA = qw( Vehicle );

=item Class Car

Again, this class will override _defaults(), _sanity_check(), and any access methods we want to harden against changes.

=cut

In reply to Re: OO Inheritence by mstone in thread OO Inheritence by Limbic~Region
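For comparison, the same shape - known-good defaults in the base class, only recognized keys accepted, and a sanity check run on every construction - translates naturally to other languages. A rough Python sketch (my own translation for illustration, not mstone's code):

```python
class Vehicle:
    # Prototype data known to be good; subclasses override this dict
    _defaults = {"wheels": 0, "doors": 0, "color": "none", "passengers": 0}

    def __init__(self, **data):
        # [1] Start from the prototype for this (sub)class
        self.__dict__.update(self._defaults)
        # [2] Override only attributes that already exist in the prototype,
        # so callers can't inject anomalous data members
        for key, value in data.items():
            if key in self._defaults:
                setattr(self, key, value)
        # [3] Hygiene filter: subclasses enforce their invariants here
        self._sanity_check()

    def _sanity_check(self):
        pass

class Bike(Vehicle):
    _defaults = {**Vehicle._defaults, "wheels": 2}

    def _sanity_check(self):
        if self.wheels != 2:
            raise ValueError("a bike has exactly 2 wheels")

b = Bike(color="red")
print(b.wheels, b.color)  # 2 red
```

Raising an exception rather than printing to STDERR is the idiomatic Python equivalent of the "I ran into a problem.." diagnostics above.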
http://www.perlmonks.org/?parent=356482;node_id=3333
I have a 4-port ADSL router and don't use a firewall at the moment.. when I do a netstat -a in my prompt window, there are a number of connections that seem unusual.. one that stood out like a sore thumb is: adsl-20-81-145.sdf.bellsouth.net now .. that HAS to be some dude connected to my comp?? right ?? Another which has come up a lot is 24-240-224-15.charter.com Although I DO use a P2P (kazaa) ... I thought that would open up the different namespace connections under the same port number?? Any how, some answers would be nice and informative thanks FRobinRobin

The p2p client is most likely the problem, also check for chat clients. I've noticed that even though most p2p's stick to one port, to get around firewalls and proxies they've since adapted to the capability of using "whatever port it wants". Or at least that's how it seems to those of us admins that need to block their use he he.... try killing your p2p and running for a while, then check - that'll give you a baseline. ~THEJRC~ I'll preach my pessimism right out loud to anyone that listens! I'm not afraid to be alive.... I'm afraid to be alone.

One thing that would be very helpful (and if you do it, please obscure your address), is to see the entire table. Just from those entries there, it is absolutely impossible to tell whether those are incoming or outgoing connections and on what ports (essential to tell what service is being utilized). If you can supply that information, more people would be able to offer better advice. Two things to keep in mind:

1) Any time you run a P2P service, you will have people connecting to your computer, it is the nature of the beast (unless through a firewall or some other means you are able to filter it out)
2) The columns output by netstat: the first column is generally the ports/addresses listening on your pc, the second column are the destination/origination ips.
If you are concerned about what people are connecting to your PC for, take that port that you see them connecting to (usually in the form of IP:Port) and go somewhere like : And put that port in there and you will see what service they are utilizing (and whether or not you should be worried about it based on the)

Yea i think its the p2p it may be that someone is downloading a file from your shared folder do what THEJRC said and kill it and see what happens

Also, you can try going to the foundstone site here to download fport to see if it helps you to determine what it is. It will show the application name of the connection possibly. Also, you can look at connections in somewhat realtime using tcpview on the sysinternals site here. The approach previously mentioned in the other posts should help narrow down the possibilities definitely. Take care. Opinions are like holes - everybody's got 'em.

Smile if you're on win xp, type netstat -o and it will give you what pid it's running on, then hit ctrl alt delete and find what app is using that pid rioter

The best thing is to contact your ISP/admin and tell em the situation. It's always best to keep a firewall and a virus/trojan scanner. There must be a direct connection of the clients with your router. It could be a normal thing as well. With great power comes great responsibility.
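The "take the remote address and port and look the port up" advice is easy to script. A small sketch for splitting netstat-style address:port endpoints (the sample endpoints below are invented for illustration; 1214 was the classic Kazaa/FastTrack port):

```python
def split_endpoint(endpoint):
    """Split an 'address:port' endpoint as printed by netstat.

    rpartition splits on the LAST colon, so hostnames containing
    dots or dashes are handled correctly.
    """
    host, _, port = endpoint.rpartition(":")
    return host, int(port)

for ep in ["24-240-224-15.charter.com:1214", "192.168.1.10:445"]:
    host, port = split_endpoint(ep)
    print(host, port)
```

Feeding each extracted port into a well-known-ports lookup (as the post suggests) then tells you what service the remote side is most likely using.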
http://www.antionline.com/showthread.php?235017-netstat-connections&p=581007
Method Overriding Rules in Java

Carvia Tech | May 26, 2019

A subclass can override instance methods that it inherits from its superclass. Overriding such a method allows the subclass to provide its own implementation of the method. The below rules must be obeyed while overriding such a method in a subclass:

The overriding method in the subclass should have exactly the same method signature as that of the overridden method in the superclass. Method signature includes method name, type and number of parameters, and the order of arguments. If they don't, then it will result in overloading rather than overriding. Method signature does not comprise the final modifier of the parameter.

The return type of the overriding method must be a covariant type (same class or sub-class), i.e. you can narrow down the return type.

The overriding method cannot narrow the accessibility of the method, but it can widen it.

You are free to throw any kind of RuntimeException in the overriding method (but this is not applicable for checked exceptions).

You shall only throw the same or a narrowed checked exception (same or sub-class), or none at all.

Methods marked private or final can not be overridden in a subclass.

static methods can not be overridden since they are not instance methods of the superclass.

If a method is not inherited, then it can not be overridden. For example, private methods in the superclass are not inherited.

Practice 1. Test the code for overriding rules

Check if the below code correctly overrides the methods in superclass?

public class Foo {
    public void doStuff(int x, String y) {
        //Foo implementation
    }
}

public class Bar extends Foo {
    public void doStuff(int x, long y) {
        //Bar implementation
    }
}

Answer: No

The above method does not override since the type of the argument is not the same as that of the superclass method. This is in fact an overloading of the method. Please note that a method can be overloaded in the same type or the subtype.

Practice 2.
Test the below code for overriding rules

Check if the below code correctly overrides the methods in superclass?

public class Foo {
    private void doStuff(int x, String y) {
        //Foo implementation
    }
}

public class Bar extends Foo {
    public void doStuff(int x, String y) {
        //Bar implementation
    }
}

Answer: No

Since private methods are not inherited, they can not be overridden by a subclass method.

Practice 3. If a method throws FileNotFoundException in a super class, can we override this method in a sub-class by throwing IOException?

Answer: FileNotFoundException extends IOException, and both of these are checked exceptions. As per overriding rules, a sub-class method can only narrow down the scope of the exception, i.e. the overriding method can only throw the same exception or sub-classes of that exception. So the overriding method can not throw IOException in this case.

Practice 4. If a method throws NullPointerException in a super class, can we override it with a method which throws RuntimeException? Please note that NPE is a sub-class of RuntimeException.

Answer: Yes, the overriding method can throw a RuntimeException. There is no policy for RuntimeException (unchecked exceptions) in method overriding.
https://www.javacodemonk.com/method-overriding-rules-in-java-4d4af2c0
Type: Posts; User: ergas With F9 put breakpoint on a line in source code where you want your program to stop. With F8 execute to next line. Something similar yes, but not exactly. Double execution in my case was caused by a complex code, from somewhere in the program event was re-fired. My approach was to trace some debug info to a log... From your description I couldn't see how your question is related to VB6. If you have VB6 source, you can add some debug.print information to check an trace how your command button is executed twice. Do you have VB6 SP6 installed? Did you tried something like this? I didn't run the code, but it shoud work with explicit type conversion. Hi Your question is very general. A solution to your problem is somewhere in a way you have opened your recordset. Scenario: Two users opens a recordset from a database. 1st user make... Hi Great Hannes The purpose why I'm experimenting with ActiveX is to convert my old exe project to ActiveX. It's full of global variables and arrays. I can't change that, it would be to... If I understand your problem, you want a timer alway running, not paused by mouse actions. Add another form to your project and make it invisible. Put a timer to this invisible form, so it will... Do you have the latest VB Service Pack installed? I think the last one is SP6. Here is a code that uses FileSystemObject to count files in a folder C:, and to show a list of files. Maybe it can help. Private Sub Form_Load() Call ShowFileList("C:") End Sub ... Hallo Great Hannes Thank's for your example. I have added a module with a public index variable and a public array, so it is now simmilar to my project. With a public array, if you add data... I'm trying ActiveX in my project, but I'm still confused. My ActiveX object is CMyX, and I use that object from my Client app. In one method I have declared 2 ActiveX objects. Then I add some... It's OK with a VB ActiveX variable scope. I have made a stupid error. 
I have called a method on Form1 from inside a Client project, while I was thinking I'm in Server project. Regards ergas It seems that it is not allowed to have ActiveX control with MDI form inside a MDI project. My sample application converted to MDI, fails with a message "Only one MDI form allowed." In a sample project (attached), it works OK. Project1 is a client. It declares CMyAppA, and calls a TestMethod(). TestMethod() calls a MyTask() method of a Form1. Both client and server have... ActiveX has a lot of global variables, global arrays, global objects. That's why I'm trying to make it ActiveX, and then to use it from "identical" client application as an ActiveX object, with an... I'm surrprissed about ActiveX Form scope. I have two basically the same projects. 1. MyAppA is an ActiveX exe server. 2. MyApp is Standard exe, a client program. I have added a public class... Thanks Keang. I'm new to Java. My problem is to draw sometning in a new window, so I had an idea to use a new class for drawing. Now I have tried a new approach. I create a new JFrame and I... Perhaps there are some mistakes but it isn't important, I can fix it myself later. I only don't know how: 1. Send variables a, b, c from the first class to the second class 2. Create a command on... Ok... I need help with something When I push the button, I need to draw graph of function f(x) = a* x*x + b*x + c in the new window. a, b & c are in the text area. The main problem is how to... Have you installed application uder your sisters profile? Installation may register needed controls. You have defined Names field: Dim names(5) As String but you loop from 0 to 5, that means Names field must be defined as: Dim names(6) As String Check also other fields. Do you want to make a perspective projection of graphs, using trigonometry and VB drawing functions? It is not a simple task. You can imagine your graphs as 3D objects, and you have to make... 
You have two nested loops:

For row = 0 To 5
    For col = 0 To 2
        ... 'DISPLAY RESULTS
        lstdisplay.AddItem (names(row) & " " & SCORES(row, col) & ...
Next 'ROW

I prefer using the Scripting object for writing, and I also had the idea of writing an explicit NewLine to my string, but in the meantime the problem was solved and it was not software-related. The problem I was...
http://forums.codeguru.com/search.php?s=b2dcf643e8d2e905fe3ab10678b4138a&searchid=6447939
CC-MAIN-2015-11
refinedweb
806
85.69
Including nonfree modules in OpenCV 3.1

Hi, I'm trying to work on code that uses the extra modules of OpenCV. I've installed the extra modules following the README:

$ cd <opencv_build_directory>
$ cmake -DOPENCV_EXTRA_MODULES_PATH=<opencv_contrib>/modules <opencv_source_directory>
$ make -j5

That works, but when I try to compile the code I get:

error: 'xfeatures2d' is not a namespace-name

(in the code there is using namespace cv::xfeatures2d;) Thanks

Did you try to compile one of the official samples?

I've just tried surf_matcher.cpp:

opencv2/xfeatures2d.hpp: No such file or directory
#include "opencv2/xfeatures2d.hpp"
^
compilation terminated.

It works fine on these samples, but if I try this... it produces a lot of errors.
https://answers.opencv.org/question/84599/including-nonfree-modules-in-opencv-31/
CC-MAIN-2019-39
refinedweb
119
64.3
Visual Studio 14 CTP3 is now available, with support for C++11 thread_local, C++11 quick_exit/at_quick_exit, and C++14 sized deallocation. For reference, here's an updated table. (Previous tables: VS 2008 and VS 2010, VS 2010 and VS 2012, VS 2013 and the Nov 2013 CTP (i.e. VS14 CTP0), VS 2013 and VS14 CTP1. "CTP" stands for "Community Technology Preview" and means "alpha".) C++11 Core Language Features VS 2013 VS14 CTP3 Rvalue references Partial Yes ref-qualifiers No Non-static data member initializers Variadic templates Initializer lists static_assert auto Trailing return types Lambdas decltype Right angle brackets Default template args for function templates Expression SFINAE Alias templates Extern templates nullptr Strongly typed enums Forward declared enums Attributes constexpr Alignment Delegating constructors Inheriting constructors Explicit conversion operators char16_t and char32_t Unicode string literals Raw string literals Universal character names in literals User-defined literals Standard-layout and trivial types Defaulted and deleted functions Extended friend declarations Extended sizeof Inline namespaces Unrestricted unions Local and unnamed types as template args Range-based for-loop override and final Minimal GC support noexcept C++11 Core Language Features: Concurrency Reworded sequence points N/A Atomics Strong compare and exchange Bidirectional fences Memory model Data-dependency ordering Data-dependency ordering: attributes exception_ptr quick_exit and at_quick_exit Atomics in signal handlers Thread-local storage Magic statics C++11 Core Language Features: C99 __func__ C99 preprocessor long long Extended integer types C++14 Core Language Features Tweaked wording for contextual conversions Binary literals auto and decltype(auto) return types init-captures Generic lambdas Variable templates Extended constexpr NSDMIs for aggregates Avoiding/fusing allocations [[deprecated]] attributes Sized deallocation Digit separators Also, here's a slide from 
Herb Sutter outlining what's likely to ship in VS14 RTM (which, as a reminder, is scheduled for 2015):

Stephan T. Lavavej
Senior Developer - Visual C++ Libraries
stl@microsoft.com

The Nov 2013 CTP shows "C++TS? await" as partial, what's missing?

char16_t and char32_t would be great!

It seems such a shame that VC++ developers have to wait a full release cycle for new features to be available, compared to the quick release cycle of the open source compilers. Good work nonetheless. Hope to see more green soon. (I'd also like more detail when the tables state "partial"; maybe a footnote would be appropriate.)

Great, good job. Is there any date we can expect VS 2014 to be released?

horeaper: I've asked the dev working on await to comment (I know very little about the feature).

Rob G: We're releasing these alphas at a higher frequency than before. You can help us by trying them out and reporting any issues you find - it's way easier to fix bugs now, compared to right before RTM. See blogs.msdn.com/.../c-11-14-feature-tables-for-visual-studio-14-ctp1.aspx for detailed explanations of the Partial entries.

Kris: 14 is a version number, not a year. VS14 RTM is planned for 2015.

horeaper: re: await (partial) in Nov 2013 CTP. The await that we shipped in the Nov 2013 CTP had several limitations:
1. It only worked with the non-standard concurrency::task type
2. It used Windows fibers in its implementation, which limited its scalability and interoperability
In VS14 we are addressing these limitations and submitting an updated proposal for the C++ TS.

This is idiotic and insulting in the extreme. There is no reason to tie the IDE to the C++ compiler. Your customers KNOW you are lying when you claim a tight tie between the IDE and the C++ compiler. Even more insulting is how oblivious you are to the impact of changing everything rather than just the compiler. Then again, you all live in a very weird cocoon divorced from reality.

Amazing work guys! Keep on going, it looks amazing.
Inherited ctors are very useful.

Thanks for the update. Good to see things progressing steadily.

Thank you.

How about C features? Is there a similar list for C99 and C11 too?

@Joe: Customers can't know the VC team is lying, because there is no lie. Or present evidence. Same would go for anybody claiming "lies".

Can you please confirm if this issue is fixed: connect.microsoft.com/.../c99-visual-studio-2013-update-3-fixed-1-2-of-the-union-issue — Installed 2014 and tested, it still exists!!!!!!!!!!! PLEASE fix it.

Thanks for setting the version number straight! (No more one-version lag like VS2013's directory being VC12... now 14 is 14, please don't mess with it.)

@STL, can you please briefly explain the difference between "Standard Library features" and "Core Language features", which of them supersedes the other, and at what point we can say a certain version of the language is fully supported?

Jack of Jacks: I've asked the compiler team to comment about C99/C11 Core Language features. I'm not aware of any having been implemented since 2013 RTM (which, as I mentioned in blogs.msdn.com/.../c-11-14-stl-features-fixes-and-breaking-changes-in-vs-2013.aspx , supported C99 _Bool, compound literals, designated initializers, and variable declarations), but my knowledge is limited - since I work on the C++ Standard Library, I rarely interact with the C Core Language. I know more about the C99 Standard Library (incorporated into C++11/14 by reference). Our support is now complete, including <uchar.h>/<cuchar> (but with char16_t/char32_t being simulated in C++ mode; we're working on it), with the sole exception of <tgmath.h>. (We have <ctgmath> for C++11.)

TomP: While we can't promise when or how a fix will ship, I can tell you that the compiler dev working on your bug marked it (in our internal database) as Testing Fix on Fri 8/22, so things are looking good.

Also, I mentioned this in my post and my first comment, but I'll say it again in my Big Voice: VS14 RTM is planned for 2015.
http://blogs.msdn.com/b/vcblog/archive/2014/08/21/c-11-14-features-in-visual-studio-14-ctp3.aspx
CC-MAIN-2015-48
refinedweb
994
54.63
24 October 2014 · 41 comments · Python, Go

tl;dr: It's not a competition! I'm just comparing Go and Python. So I can learn Go.

So recently I've been trying to learn Go. It's a modern programming language that started at Google but has very little to do with Google except that some of its core contributors are staff at Google.

The true strength of Go is that it's succinct and minimalistic and fast. It's not a scripting language like Python or Ruby, but lots of people write scripts with it. It's growing in popularity with systems people, but web developers like me have started to pay attention too.

The best way to learn a language is to do something with it. Build something. I don't disagree with that, but I just felt I needed to cover the basics first, and instead of taking notes I decided to learn by comparing it to something I know well, Python. I did this a zillion years ago when I tried to learn ZPT by comparing it to DTML, which I already knew well.

My free time is very limited, so I'm taking things in small, careful baby steps. I read through An Introduction to Programming in Go by Caleb Doxsey in a couple of afternoons, and then I decided to spend a couple of minutes every day with each chapter and implement something from that book and compare it to how you'd do it in Python. I also added some slightly fuller examples, like Markdownserver, which was fun because it showed that a simple Go HTTP server that does something can be 10 times faster than the Python equivalent.

- Go is very unforgiving, but I kinda like it. It's like Python but with pyflakes switched on all the time.
- Go is much more verbose than Python. It just takes so many more lines to say the same thing.
- Goroutines are awesome. They're a million times easier to grok than Python's myriad of similar solutions.
- In Python, the ability to write to a list and have it automatically expand at will is awesome.
- Go doesn't have the concept of "truthy", which I already miss. I.e. in Python you can convert a list type to boolean and the language does this automatically by checking if the length of the list is 0.
- Go gives you very few choices (e.g. there's only one type of loop and it's the for loop), but you often have a choice to pass a copy of an object or to pass a pointer. Those are different things, but sometimes I feel like the computer could/should figure it out for me.
- I love the little defer thing, which means I can put "things to do when you're done" right underneath the thing I'm doing. In Python you get these try: ...20 lines... finally: ...now it's over... things.
- The coding style rules are very different, but in Go it's a no-brainer because you basically don't have any choices. I like that. You just have to remember to use gofmt.
- Everything about Go and Go tools follows the strict UNIX pattern of not outputting anything unless things go bad. I like that.
- godoc.org is awesome. If you ever wonder how a built-in package works, you can just type it in after godoc.org, like godoc.org/math for example.
- You don't have to compile your Go code to run it. You can simply type go run mycode.go; it automatically compiles it and then runs it. And it's super fast.
- go get can take a URL like github.com/russross/blackfriday and just install it. No PyPI equivalent. But it scares me to depend on people's master branches on GitHub. What if master is very different when I go get something locally compared to when I run go get weeks/months later on the server?

UPDATE: Here's a similar project comparing Python vs. JavaScript by Ilya V. Schurov.

Follow @peterbe on Twitter

For versioning dependencies, check out gopkg.in

Thanks! I had no idea about this. I haven't read it yet, but skimming it lightly it seems like there *is* a solution.

Please find this program... maybe this will be the solution. These are related to map reduce...
import MapReduce
import sys

"""
Word Count Example in the Simple Python MapReduce Framework
"""

mr = MapReduce.MapReduce()

# =============================
# Do not modify above this line

def mapper(record):
    # key: document identifier
    # value: document contents
    trim_nucleotid = record[1][:-10]
    mr.emit_intermediate(trim_nucleotid, 1)

def reducer(trim_nucleotid, list_of_values):
    # key: word
    # value: list of occurrence counts
    #mr.emit((person, len(list_of_values)))
    mr.emit(trim_nucleotid)

# Do not modify below this line
# =============================

if __name__ == '__main__':
    inputdata = open(sys.argv[1])
    mr.execute(inputdata, mapper, reducer)

What is this related to?

You might want to look into gopkg.in and/or vendoring when it comes to managing external dependencies.

Some package managers

Great article

That's amazing! That there are so many already. Packaging is a hard thing to get right. It's not hard like brain surgery, but it's hard like carpentry.

I haven't tried Go yet, so thanks for the summary of your experience. I'm not convinced there's a strong use case for "defer", as I'd think if the block of code is large enough that a finally clause is "too far down" then it's already overly complex, and using "defer" repeatedly throughout would make it worse. I could be convinced otherwise. That said, you could easily make a Python context manager with similar semantics. You'd do "with deferred() as x:" and then could have x.defer(func, *args) throughout, and it would work in similar fashion at the end of the block. Might be useful some day.

Actually, I'm going to change my mind already. In some cases you have to set up a whole series of things, but in the finally clause you can't blindly tear down each of them. You need to check whether the code got to the setup (or raised an exception beforehand). The defer approach does seem like it would simplify that case, as you could dispense with a bunch of conditionals in the finally clause. Hmmm...
(goes to find code where this may be cleaner)

Yeah, a context manager and its __exit__ is maybe a good alternative to the `finally:`, but it's a mess.

I loooove Python, but let's admit it: Go wins this round :)

I don't agree. Perhaps it was different in 2014-10, but contextlib.contextmanager now makes it quite simple to use a with statement. The only drawback I see is the extra indent, but in the end I find it not that significant.

And godoc -http=:8080 is even cooler … simply open it, and you get the documentation for builtin packages like math, external ones like github.com/russross/blackfriday, and $GOPATH/yourcoolstuff in one place, linked nicely :)

That's neat! That's basically what godoc.org is, plus your own $GOPATH stuff.

The same exact thing in Python can be done with `pydoc -p 8080`.

Interesting. I also really wanted to like Go, but the language feels so unexpressive and weak when compared to Python (interface{} anyone? No operator overloading? Return value as exception?) I got much happier after trying out Rust, proving that new languages don't have to feel awkward and tedious to use.

As someone who has used a multitude of languages ranging from FORTRAN to C/C++/Objective-C to C# to Java to Smalltalk and Python, I do not feel limited in Go with respect to expressiveness. Language design is always a balance between power and simplicity, and I've explored both ends of the spectrum. I'll take the simpler languages (such as Go, Python and Smalltalk) any day. Funnily, I've never used operator overloading and I don't miss it one iota. And there are pros and cons to using exceptions. I like the simplicity of Go's error handling.

Thanks for this super-concise summary! I have a similar take on it. The main thing I'm missing, since I am doing a bunch of work with AWS, is something as mature as Boto. There are a number of "goaws" libraries that are interrelated forks, and some others, but nothing with the features of Boto... yet.
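The deferred()-style helper discussed in the comments above doesn't need to be written from scratch: Python's standard library already has contextlib.ExitStack, whose callback() method gives Go-defer-like, LIFO cleanup. A minimal sketch (the function and variable names here are my own, not from the post):

```python
import contextlib

def copy_job():
    # Record events so we can see the order the cleanups run in.
    events = []
    with contextlib.ExitStack() as stack:
        # Registered first, so it runs *last* on exit (LIFO, like Go's defer).
        stack.callback(events.append, "close output")
        stack.callback(events.append, "close input")
        events.append("copy bytes")
    return events

# copy_job() -> ["copy bytes", "close input", "close output"]
```

Unlike Go's defer, the callbacks run when the with block exits rather than at function return, which is what the "while using with you can be sure actions are done after leaving the with block" comment below is getting at.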
"I can only hope that the Go community likes Belgian beer and single malt whisky as much as the Python community. :-)"

Yeah. That's crucial.

Only time will tell.

I am currently learning Go coming from a C, Scheme, C#, Julia and Python background, and I can say I am liking Go so far. I would like to correct you on your assertion that Go only has for loops. Go only has for loops token-wise, but syntactically for has multiple uses. You can write it like in C, or as a while loop, and I am not yet sure about the foreach format, but I will look it up.

Thanks. I've only ever seen `for` to start a loop. No `while` or `do while`. I haven't seen the foreach thing yet.

The for loop is used instead of any loop but foreach. Range substitutes for foreach. Have a look at the for loop and you will see.

Just a little self-promotion you may find interesting. I built goless at PyCon last year, which provides Go-like semantics for writing concurrent programs in Python. Maybe it helps scratch an itch and keeps you in Python land.

That. Is awesome. I tried to soak in the examples so that when I have a need I'll remember that it's been done there.

You can install Python packages from git directly as well:

pip install git+ssh://git@github.com/my-user/my-repo.git

I think that there might be a way to install a particular commit / branch as well, although I've never done that.

You can. I only know how to do it for an archive. E.g. pip install

> try: ...20 lines... finally: ...now it's over...

In Python you have the with statement, which is clearly better: defer only runs at function return, while with with you can be sure actions are done after leaving the with block.

Oh man. I just looked over your whole list of py vs. go example codes. I dunno how aesthetics weren't mentioned! I've been using Go just because I'm pretty sure it's going to replace most modern languages in my life, but when it comes to comparing how the two languages read, Python wins hands down. Go has some of the ugliest looking code I've ever seen.
And nothing is ever that obvious about what it's doing. Granted, I'll get used to it; it still wastes tons of precious cognitive processing on damn near everything it does. Still, it has pointers and goroutines, so I'm sold. I just wish it wasn't so ugly. I mean, I get it: "fmt" is what prints the line. I'm glad I get to say that explicitly. But the day I decide to use something else, I'm probably going to want to use it all the way throughout, and then what? I have to replace "fmt" everywhere it appears. And I'll probably never need anything else, so it's all the more reason "fmt.Println" sucks. If Go just had the ability to let "fmt" define "print" as "put the following into fmt.Println()", it would clean the code up immensely. There could even be a little golang.list_used_keywords(), so you could check what keywords were being used. Or something. The point I'm trying to get at is that Go is great in terms of functionality, but lacks form. And this is the future; we can do both now. As long as we keep doing one over the other, we'll never stop being the apes of yesterday.

Hello Peter, I need some advice. I built a program in VB.NET (yes, that is still a language). It uses a lot of timers. Example:

timerTEST.Interval = 1000 '(Milliseconds)
timerTEST.Start
timerTEST.DoSomething()

Now I'm searching for what programming language I should convert to, so that my program can be used cross-platform. I'm stuck between Go and Python. Can you give an example which basically does the same as the above mentioned code? It'd be great if you could do that in both languages.

Thanks for the brief explanation. But I'm still stuck :p Sorry for my English, I'm not a native speaker. Best regards, berm

Already there

Haha, awesome! Thanks!

Hi Peter, first off, thanks for the great writeup. I myself have been coding in Python for over 7 years now and am trying to learn and switch to Go.
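The VB.NET timer question above can be approximated on the Python side with the standard threading module. A rough sketch, not a drop-in answer; RepeatingTimer is my own name, and a real port would need to consider thread safety of whatever the callback touches:

```python
import threading

class RepeatingTimer:
    """Rough analogue of a VB.NET Timer: calls fn every interval_s seconds
    until stop() is called."""
    def __init__(self, interval_s, fn):
        self.interval_s = interval_s
        self.fn = fn
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._run, daemon=True)

    def start(self):
        self._thread.start()

    def stop(self):
        self._stop.set()
        self._thread.join()

    def _run(self):
        # wait() returns False on timeout (tick) and True once stop() is set.
        while not self._stop.wait(self.interval_s):
            self.fn()
```

Usage mirrors the VB snippet: RepeatingTimer(1.0, do_something).start() is roughly Interval = 1000 plus Start.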
I also think Python code is much more compact than Go, with list and dictionary comprehensions and lambda anonymous functions. Also, I am a little skeptical about installing dependencies, even though I read somewhere that distribution in Go is much easier than in Python, as it creates a single binary.

The package cx_Freeze makes standalone Python programs: you get a folder with the dependencies and an exe file, and nothing Python-related has to be installed on the target machine. I use this a lot to distribute my Python programs to production.

Hi, I'm a network engineer who has rarely had to script or program. I'm trying to learn a programming / scripting language and have begun some online Python courses and bought a couple of books. After reading this I'm wondering if I should focus on Go instead of Python. Any sage advice for a programming novice? Thanks!

The advantage with Go is that finished programs are faster. But they take longer to write. So pick based on that "metric".

Python it is. Thanks.

Started to learn Go, struggling. Python is easy to learn and use.

For my purposes anyway, until the Go standard library includes tools for parsing arbitrary well-formed XML it won't be able to replace Python.

Thanks for sharing your thoughts. It's a very useful summary. One thing I can't let go: saying Python is a scripting language technically isn't necessarily true; in most practical implementations (CPython, PyPy) it's at least compiled to byte code and only then interpreted. Being a dynamically typed language, Python enables more complex OO programming than known from C++ or even Java.

I was hoping to see an explanation of each language rather than a list of beneficial details. Namely, "use Python if your goal is ____", "use Go if your goal is ____", i.e. reasons to use each tool -- each language's purpose, so to speak. Presently I'm thinking Go is for multiple people writing a program (i.e. code can be written in modules, sections) to be executed by multiple computers, whereas Python is for one person's scientific data analysis and programs to be run by a single computer. Would you agree with this characterization?

The best part about Go is how it deals with concurrent programs, and that it's faster. So if you're writing concurrent programs, it's a good choice. But since it's more like a C program, there are also a lot of limitations compared with Python for doing data analysis. However, it really depends on what you want to do; decide what language to choose based on that.

I was surprised at how slow the Go implementation of your webserver was, until I noticed you weren't using KeepAlive. When I turned that on (via -k to the ab tool), the Go implementation ran about 4x faster. The tornado implementation ran about 1.5x faster, and flask ran about the same. Without KeepAlive, each request creates a new TCP connection, so a lot of extra time gets spent in the kernel. Presumably that's about the same amount of overhead for all of the implementations, so it's better to factor it out. I would have tested the nodejs version you mentioned, but you didn't include the source. Although you did include source for a falcon-based implementation, so maybe that's what you actually tested instead of nodejs? The performance I saw of falcon vs. the others would fit the same relative results you got with "Node (express)". Here are the results I got:

Thanks for the comparison!

Update! I found the nodejs source in your git repo and updated my Gist with test results from it. I'll file an issue in the repo for the lack of nodejs source on the main post.
https://www-origin.peterbe.com/plog/govspy
CC-MAIN-2019-51
refinedweb
2,805
74.9
Interface diet

As long as I'm on the coding conventions track (as many users have commented on my last post), I'll talk about a small coding convention I am trying to follow.

Coding Pet Peeves

Any experienced coder has pet peeves when it comes to reading other people's code or writing code. It might be that you don't like regions or that every method should have comments. But here are my two biggest pet peeves when it comes to C#.

CSS Sprite for ASP.NET

CSS sprites are becoming popular as a way to increase application performance by eliminating HTTP requests from the client to the web server. They also serve as a path to better cache management.

I will need to go through a bit of background before we start (sorry... we'll get to the generation in a second!).

What is a CSS Sprite?

A CSS sprite is a fancy name for an image that is composed of other images. In your CSS you have this one base image, but you pick an image inside of the image. It does what you expect: it shows just that one single image. The following is an example of a CSS sprite:

In your CSS file, you would pick this base image as your background-image, and then to pick a particular image you need to specify the height, the width, and also the position of where the image starts... background-position.

Long story short, it cuts down on the number of images that need to be passed from your server to the client that is visiting the site. This adds up over time, especially if you have A LOT of images.

A lot of people go to a sprite generator online. Those are nice, but what happens when you need to change an image? You have to go back and re-upload all your images and recalculate your background-positions. That's not a good thing if you are following DRY (don't repeat yourself). Plus it's hard work NO ONE should have to do.
How I handle output caching

What I do is this: all my CSS files, JavaScript files, etc. have a common ID, which is basically what I call the "Distro ID". It is generated at the beginning of my ASP.NET application, so every time you do a publish the number will change and your files will also change to their updated content. The heavy lifting is all done in the Global.asax file.

//C#
public class Global : System.Web.HttpApplication {
    public static string DistributionNumber { get; set; }

    protected void Application_Start(object sender, EventArgs e){
        SetDistro();
        ..
    }
    private void SetDistro() {
        // note: the date parts are not zero-padded
        DistributionNumber = DateTime.Now.Year.ToString() + DateTime.Now.Month.ToString() + DateTime.Now.Day.ToString() + DateTime.Now.Hour.ToString() + DateTime.Now.Minute.ToString() + DateTime.Now.Second.ToString();
    }
}

'VB
Inherits System.Web.HttpApplication

Private Shared m_distronum As String = String.Empty
Public Shared Property DistributionNumber() As String
    Get
        Return m_distronum
    End Get
    Set(ByVal value As String)
        m_distronum = value
    End Set
End Property

Sub Application_Start(ByVal sender As Object, ByVal e As EventArgs)
    SetDistro()
    ..
End Sub

Private Sub SetDistro()
    DistributionNumber = DateTime.Now.Year.ToString() + DateTime.Now.Month.ToString() + DateTime.Now.Day.ToString() + DateTime.Now.Hour.ToString() + DateTime.Now.Minute.ToString() + DateTime.Now.Second.ToString()
End Sub

Now since I am mostly using ASP.NET MVC, I have a controller for the Distribution. So my URLs look like this: [BASE_URL]/20080303010101/CSS. I go ahead and create separate Routes for this, but the default would also work. The ID that is passed to the CSS, JavaScript, and Sprite is the distro number. To do the output caching, you need a separate ID so that the browser can differentiate one distribution's CSS, JS, etc. from another. This way, we can client-side cache with no worries.
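To make the scheme concrete, here is a rough Python translation of the distro-ID idea (the names are mine; the article's real code is the C#/VB above). Using a zero-padded timestamp keeps the IDs fixed-width and chronologically sortable:

```python
from datetime import datetime

def distro_id(now: datetime) -> str:
    # Zero-padded yyyyMMddHHmmss, regenerated once per publish/app start.
    return now.strftime("%Y%m%d%H%M%S")

def asset_url(base_url: str, distro: str, kind: str) -> str:
    # e.g. [BASE_URL]/20080303010101/CSS -- the changing ID busts the
    # browser cache after each deploy, so aggressive client caching is safe.
    return f"{base_url}/{distro}/{kind}"

# asset_url("http://example.com", distro_id(datetime(2008, 3, 3, 1, 1, 1)), "CSS")
# -> "http://example.com/20080303010101/CSS"
```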
If you were using Web Forms, I would suggest using separate Generic Handlers (one each for CSS, JS, and the Sprite). They would accomplish the same thing. Just remember that you need to take in an ID as a query string for the client-side cache to be safe for your users. If the ID is not passed to the controller OR the generic handler (whichever you are using), don't client-side cache. But this is something you will want to offer, as it, too, saves you in the performance department.

Back to the Sprite generator... get me some images!

You have 2 options when it comes to getting images: enumerating through a folder (or folders), or putting the URLs in by hand. I suggest enumeration if you have the same file type. If you put in the file names by hand, that can get tricky... which I'm sure you can tell why. In my Sprite generator, I process the CSS and replace the images with the sprite reference... so we don't need the URLs of the original images anyway. All that we need to reference the images is the name (lowercase). As I showed you in my last post about CSS Minification (which we will refer back to), I have a file enumerator method:

//C#
public static IList<System.IO.FileInfo> GetFiles(string serverPath, string extention)
{
    if (!serverPath.StartsWith("~/"))
    {
        if (serverPath.StartsWith("/"))
            serverPath = "~" + serverPath;
        else
            serverPath = "~/" + serverPath;
    }
    string path = HttpContext.Current.Server.MapPath(serverPath);

    if (!path.EndsWith("/"))
        path = path + "/";

    if (!Directory.Exists(path))
        throw new System.IO.DirectoryNotFoundException();

    IList<FileInfo> files = new List<FileInfo>();

    string[] fileNames = Directory.GetFiles(path, "*." + extention, System.IO.SearchOption.AllDirectories);

    foreach (string name in fileNames)
        files.Add(new FileInfo(name));

    return files;
}

'VB
Public Shared Function GetFiles(ByVal serverPath As String, ByVal extention As String) As IList(Of System.IO.FileInfo)
    If Not serverPath.StartsWith("~/") Then
        If serverPath.StartsWith("/") Then
            serverPath = "~" + serverPath
        Else
            serverPath = "~/" + serverPath
        End If
    End If
    Dim path As String = HttpContext.Current.Server.MapPath(serverPath)

    If Not path.EndsWith("/") Then
        path = path + "/"
    End If

    If Not Directory.Exists(path) Then
        Throw New System.IO.DirectoryNotFoundException()
    End If

    Dim files As IList(Of FileInfo) = New List(Of FileInfo)()

    Dim fileNames As String() = Directory.GetFiles(path, "*." + extention, System.IO.SearchOption.AllDirectories)

    For Each name As String In fileNames
        files.Add(New FileInfo(name))
    Next

    Return files
End Function

If you were going the hand-coded image URL route, you would just need a method that returns a List of FileInfo, just like the method above.

Get the SPRITE already!

We need to specify a few data models to contain the individual images and the sprites. They aren't anything special... just basically a container for the bitmap object (SiteImage) and a collection of the images for the sprite generation (SiteImageCollection). The collection is actually the generator of the sprite. The GetSprite() method is the method that will do the image concatenation for you.
//C#
public class SiteImage
{
    public SiteImage(FileInfo fileinfo)
    {
        Stream stream = new FileStream(fileinfo.FullName, FileMode.Open, FileAccess.Read);
        Img = (Bitmap)Bitmap.FromStream(stream);
        File = fileinfo;
    }

    public int Width {
        get { return Img.Width; }
    }
    public int Height {
        get { return Img.Height; }
    }
    public int YValue { get; set; }
    public Bitmap Img { get; set; }
    public FileInfo File { get; set; }
}

public class SiteImageCollection : List<SiteImage>
{
    public int MaxWidth
    {
        get
        {
            int largest = 0;

            foreach (SiteImage s in this)
                if (s.Width > largest)
                    largest = s.Width;

            return largest;
        }
    }

    public int TotalHeight
    {
        get
        {
            int ttl = 0;
            foreach (SiteImage s in this)
                ttl += s.Height;
            return ttl;
        }
    }

    public Bitmap GetSprite()
    {
        Bitmap spriteImg = new Bitmap(MaxWidth, TotalHeight);

        Graphics sprite = Graphics.FromImage(spriteImg);

        int curY = 0;
        foreach (SiteImage si in this)
        {
            sprite.DrawImage(si.Img, 0, curY);
            si.YValue = curY;
            curY += si.Height;
        }

        return spriteImg;
    }
}

'VB
Public Class SiteImage
    Public Sub New(ByVal fileinfo As FileInfo)
        Dim stream As Stream = New FileStream(fileinfo.FullName, FileMode.Open, FileAccess.Read)
        Img = DirectCast(Bitmap.FromStream(stream), Bitmap)
        File = fileinfo
    End Sub

    Public ReadOnly Property Width() As Integer
        Get
            Return Img.Width
        End Get
    End Property

    Public ReadOnly Property Height() As Integer
        Get
            Return Img.Height
        End Get
    End Property

    Private m_yval As Integer
    Public Property YValue() As Integer
        Get
            Return m_yval
        End Get
        Set(ByVal value As Integer)
            m_yval = value
        End Set
    End Property

    Private m_Img As Bitmap
    Public Property Img() As Bitmap
        Get
            Return m_Img
        End Get
        Set(ByVal value As Bitmap)
            m_Img = value
        End Set
    End Property

    Private m_file As FileInfo
    Public Property File() As FileInfo
        Get
            Return m_file
        End Get
        Set(ByVal value As FileInfo)
            m_file = value
        End Set
    End Property
End Class

Public Class SiteImageCollection
    Inherits List(Of SiteImage)

    Public ReadOnly Property MaxWidth() As Integer
        Get
            Dim largest As Integer = 0

            For Each s As SiteImage In Me
                If s.Width > largest Then
                    largest = s.Width
                End If
            Next

            Return largest
        End Get
    End Property

    Public ReadOnly Property TotalHeight() As Integer
        Get
            Dim ttl As Integer = 0
            For Each s As SiteImage In Me
                ttl += s.Height
            Next
            Return ttl
        End Get
    End Property

    Public Function GetSprite() As Bitmap
        Dim spriteImg As New Bitmap(MaxWidth, TotalHeight)

        Dim sprite As Graphics = Graphics.FromImage(spriteImg)

        Dim curY As Integer = 0
        For Each si As SiteImage In Me
            sprite.DrawImage(si.Img, 0, curY)
            si.YValue = curY
            curY += si.Height
        Next

        Return spriteImg
    End Function
End Class

There you have it: a nice sprite generated from existing images. The cool thing is that if you are working with PNGs, your transparencies are preserved, which is what I really needed. The only real downside is that all your images are lined up vertically. I'm not quite sure that this matters, but most other generators have the ability to line the images up both vertically and horizontally.

But you're not exactly done... You still have the CSS to deal with... CSS sprites don't have CSS in the name for nothing. They are best used in CSS files. Lucky for me, I already have a process for my CSS files to go through (which includes minification, cache, etc.).
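Stripped of the GDI+ plumbing, the vertical-stacking logic in GetSprite() boils down to a running sum of heights: the canvas is as wide as the widest image and as tall as all heights combined, and each image's y-offset is the total height of the images above it. A language-neutral sketch in Python (illustrative only; the names are mine):

```python
def sprite_layout(sizes):
    """Given (width, height) pairs, return the sprite canvas size and
    the y-offset of each image when stacked vertically."""
    canvas_w = max(w for w, _ in sizes)
    canvas_h = sum(h for _, h in sizes)
    offsets, y = [], 0
    for _, h in sizes:
        offsets.append(y)   # this image starts where the previous ones end
        y += h
    return canvas_w, canvas_h, offsets

# sprite_layout([(16, 16), (32, 8), (10, 20)]) -> (32, 44, [0, 16, 24])
```

Those y-offsets are exactly the YValue that the CSS step below turns into a negative background-position.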
All I need to do is augment it to replace each image reference with the actual URL of the sprite and to calculate the background-position.

Logic Class

I really like having Logic classes (which you have seen in my past blog posts) because they are a pivotal point between the server-side cache mechanisms that I put into place. Here's what my Logic controller looks like:

//C#
public class CssSprite
{
    public static SiteImageCollection GetImagesForSprite()
    {
        SiteImageCollection col = Helpers.UrlMapping.UrlHandler.Images.AllImages();
        // we need to grab the sprite so that the Y-values are usable
        Bitmap sprite = col.GetSprite();
        return col;
    }

    public static Bitmap GetCachedSprite()
    {
        ICache<SiteImageCollection> cache = new Helpers.Cache.CssSpritCache();

        return cache.Get().GetSprite();
    }

    public static string CleanCSS(string currentCSS)
    {
        string newCSS = currentCSS;
        foreach (SiteImage si in CssSprite.GetImagesForSprite())
        {
            string orig = "background-image: url(" + si.File.Name.ToLower() + ");";

            newCSS = newCSS.Replace(orig + "/**/", ReplacementCss(si, false));
            newCSS = newCSS.Replace(orig, ReplacementCss(si, true));
        }
        return newCSS;
    }

    private static string ReplacementCss(SiteImage si, bool widthheight)
    {
        string rep = "background-image: url(" + Helpers.UrlMapping.UrlHandler.Images.Sprite().ToAbsoluteURL() + ");";
        rep += "background-position: 0px -" + si.YValue + "px;";
        if (widthheight)
        {
            rep += "width: " + si.Width + "px;";   // px units, or the browser ignores the rule
            rep += "height: " + si.Height + "px;";
        }
        return rep;
    }
}

'VB
Public Class CssSprite
    Public Shared Function GetImagesForSprite() As SiteImageCollection
        Dim col As SiteImageCollection = Helpers.UrlMapping.UrlHandler.Images.AllImages()
        ' we need to grab the sprite so that the Y-values are usable
        Dim sprite As Bitmap = col.GetSprite()
        Return col
    End Function

    Public Shared Function GetCachedSprite() As Bitmap
        Dim cache As ICache(Of SiteImageCollection) = New Helpers.Cache.CssSpritCache()

        Return cache.[Get]().GetSprite()
    End Function

    Public Shared Function CleanCSS(ByVal currentCSS As String) As String
        Dim newCSS As String = currentCSS
        For Each si As SiteImage In CssSprite.GetImagesForSprite()
            Dim orig As String = "background-image: url(" & si.File.Name.ToLower() & ");"

            newCSS = newCSS.Replace(orig & "/**/", ReplacementCss(si, False))
            newCSS = newCSS.Replace(orig, ReplacementCss(si, True))
        Next
        Return newCSS
    End Function

    Private Shared Function ReplacementCss(ByVal si As SiteImage, ByVal widthheight As Boolean) As String
        Dim rep As String = "background-image: url(" & Helpers.UrlMapping.UrlHandler.Images.Sprite().ToAbsoluteURL() & ");"
        rep &= "background-position: 0px -" & si.YValue & "px;"
        If widthheight Then
            rep &= "width: " & si.Width & "px;"   ' px units, or the browser ignores the rule
            rep &= "height: " & si.Height & "px;"
        End If
        Return rep
    End Function
End Class

So this replaces every rule whose background-image matches the name of an image in the sprite. It changes the background-image URL to point at the sprite and appends the background-position (which we recorded during sprite generation) plus, optionally, the height and width. If you add a /**/ right after the background-image definition, the height and width are not added in.

The only things that I didn't really show you were the AllImages() method (which returns a SiteImageCollection of the images I need sprited) and the Sprite() method in my UrlHandler. That's not that important... I'm sure you can figure out what those do :)

Cache mechanism

When we last talked about my CacheBase class, it was sort of all over the place. I have revamped it to use delegates so that we don't get any weird methods that don't make sense.
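Before moving on to the caching, it's worth seeing how small the CleanCSS rewrite really is. Here is a Python sketch of the same replacement logic (the file name, offsets, and sprite URL are made up for illustration):

```python
def clean_css(css, images, sprite_url):
    """Rewrite background-image rules to use the sprite.
    `images` maps a file name to its (y_offset, width, height)."""
    for name, (y, w, h) in images.items():
        orig = "background-image: url(%s);" % name
        repl = ("background-image: url(%s);" % sprite_url +
                "background-position: 0px -%dpx;" % y)
        # a rule tagged with /**/ keeps its own width/height
        css = css.replace(orig + "/**/", repl)
        # an untagged rule also gets the image's dimensions
        css = css.replace(orig, repl + "width: %dpx;height: %dpx;" % (w, h))
    return css

print(clean_css("background-image: url(logo.png);",
                {"logo.png": (16, 32, 8)}, "/sprite.png"))
```

Note the order of the two replacements matters: the tagged form must be handled first, otherwise the plain replacement would also consume the tagged occurrences and leave the /**/ marker dangling.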
It's still a little messy, since I needed to write two classes: one that takes a single generic parameter and another that takes two. Here it is:

//C#
public abstract class CacheBase<T> : ICache<T>
{
    public abstract Func<T> Method { get; }
    public abstract string CacheKey { get; }
    public abstract CacheItemPriority Priority { get; }
    public abstract TimeSpan CacheDuration { get; }

    public T Get()
    {
        T CurValue = ((T)HttpContext.Current.Cache[CacheKey]);

        if (CurValue == null)
            CurValue = Insert(Invoke()); // store the fresh value so later calls hit the cache

        return CurValue;
    }

    /// <summary>
    /// Removes the Cache Object from the current cache.
    /// </summary>
    public void Delete()
    {
        HttpContext.Current.Cache.Remove(CacheKey);
    }

    private T Invoke()
    {
        return Method.Invoke();
    }

    /// <summary>
    /// Adds the value into the Cache
    /// </summary>
    /// <param name="Value">Value of T</param>
    internal T Insert(T Value)
    {
        HttpContext.Current.Cache.Add(CacheKey, Value, null, DateTime.Now.Add(CacheDuration), TimeSpan.Zero, Priority, null);
        return Value;
    }
}

public abstract class CacheBase<T, P1> : ICache<T>
{
    public abstract Func<P1, T> Method { get; }
    public abstract P1 ObjectDescripter { get; }
    public abstract string CacheKey { get; }
    public abstract CacheItemPriority Priority { get; }
    public abstract TimeSpan CacheDuration { get; }

    public T Get()
    {
        T CurValue = ((T)HttpContext.Current.Cache[CacheKey]);

        if (CurValue == null)
            CurValue = Insert(Invoke()); // store the fresh value so later calls hit the cache

        return CurValue;
    }

    /// <summary>
    /// Removes the Cache Object from the current cache.
    /// </summary>
    public void Delete()
    {
        HttpContext.Current.Cache.Remove(CacheKey);
    }

    private T Invoke()
    {
        return Method.Invoke(ObjectDescripter);
    }

    /// <summary>
    /// Adds the value into the Cache
    /// </summary>
    /// <param name="Value">Value of T</param>
    internal T Insert(T Value)
    {
        HttpContext.Current.Cache.Add(CacheKey, Value, null, DateTime.Now.Add(CacheDuration), TimeSpan.Zero, Priority, null);
        return Value;
    }
}

'VB
Public MustInherit Class CacheBase(Of T)
    Implements ICache(Of T)

    Public MustOverride ReadOnly Property Method() As Func(Of T)
    Public MustOverride ReadOnly Property CacheKey() As String
    Public MustOverride ReadOnly Property Priority() As CacheItemPriority
    Public MustOverride ReadOnly Property CacheDuration() As TimeSpan

    Public Function [Get]() As T
        Dim CurValue As T = (DirectCast(HttpContext.Current.Cache(CacheKey), T))

        If CurValue Is Nothing Then
            CurValue = Insert(Invoke()) ' store the fresh value so later calls hit the cache
        End If

        Return CurValue
    End Function

    ''' <summary>
    ''' Removes the Cache Object from the current cache.
    ''' </summary>
    Public Sub Delete()
        HttpContext.Current.Cache.Remove(CacheKey)
    End Sub

    Private Function Invoke() As T
        Return Method.Invoke()
    End Function

    ''' <summary>
    ''' Adds the value into the Cache
    ''' </summary>
    ''' <param name="Value">Value of T</param>
    Friend Function Insert(ByVal Value As T) As T
        HttpContext.Current.Cache.Add(CacheKey, Value, Nothing, DateTime.Now.Add(CacheDuration), TimeSpan.Zero, Priority, _
            Nothing)
        Return Value
    End Function
End Class

Public MustInherit Class CacheBase(Of T, P1)
    Implements ICache(Of T)

    Public MustOverride ReadOnly Property Method() As Func(Of P1, T)
    Public MustOverride ReadOnly Property ObjectDescripter() As P1
    Public MustOverride ReadOnly Property CacheKey() As String
    Public MustOverride ReadOnly Property Priority() As CacheItemPriority
    Public MustOverride ReadOnly Property CacheDuration() As TimeSpan

    Public Function [Get]() As T
        Dim CurValue As T = (DirectCast(HttpContext.Current.Cache(CacheKey), T))

        If CurValue Is Nothing Then
            CurValue = Insert(Invoke()) ' store the fresh value so later calls hit the cache
        End If

        Return CurValue
    End Function

    ''' <summary>
    ''' Removes the Cache Object from the current cache.
    ''' </summary>
    Public Sub Delete()
        HttpContext.Current.Cache.Remove(CacheKey)
    End Sub

    Private Function Invoke() As T
        Return Method.Invoke(ObjectDescripter)
    End Function

    ''' <summary>
    ''' Adds the value into the Cache
    ''' </summary>
    ''' <param name="Value">Value of T</param>
    Friend Function Insert(ByVal Value As T) As T
        HttpContext.Current.Cache.Add(CacheKey, Value, Nothing, DateTime.Now.Add(CacheDuration), TimeSpan.Zero, Priority, _
            Nothing)
        Return Value
    End Function
End Class

So the Sprite cache looks like this:

//C#
public class CssSpritCache : CacheBase<SiteImageCollection>
{
    public CssSpritCache()
    {
    }

    public override Func<SiteImageCollection> Method
    {
        get { return new Func<SiteImageCollection>(Logic.CssSprite.GetImagesForSprite); }
    }

    public override string CacheKey
    {
        get { return "CssSprite_" + Global.DistributionNumber; }
    }

    public override System.Web.Caching.CacheItemPriority Priority
    {
        get { return System.Web.Caching.CacheItemPriority.Default; }
    }

    public override TimeSpan CacheDuration
    {
        get { return new TimeSpan(1, 0, 0, 0); }
    }
}

'VB
Public Class CssSpritCache
    Inherits CacheBase(Of SiteImageCollection)

    Public Sub New()
    End Sub

    Public Overloads Overrides ReadOnly Property Method() As Func(Of SiteImageCollection)
        Get
            Return New Func(Of SiteImageCollection)(AddressOf Logic.CssSprite.GetImagesForSprite)
        End Get
    End Property

    Public Overloads Overrides ReadOnly Property CacheKey() As String
        Get
            Return "CssSprite_" & [Global].DistributionNumber
        End Get
    End Property

    Public Overloads Overrides ReadOnly Property Priority() As System.Web.Caching.CacheItemPriority
        Get
            Return System.Web.Caching.CacheItemPriority.[Default]
        End Get
    End Property

    Public Overloads Overrides ReadOnly Property CacheDuration() As TimeSpan
        Get
            Return New TimeSpan(1, 0, 0, 0)
        End Get
    End Property
End Class

Back in the CSS logic handler, you need to change the CombineCSS method to also do the replacement for the CSS sprite.

//C#
public static string CombineCSS()
{
    string allCSS = string.Empty;

    foreach (FileInfo fi in Logic.Files.GetFiles("~/Content/CSS/", "css"))
    {
        using (StreamReader sr = new StreamReader(fi.FullName))
            allCSS += sr.ReadToEnd();
    }

    allCSS = CssSprite.CleanCSS(allCSS);

    allCSS = Compress(allCSS);

    return allCSS;
}

'VB
Public Shared Function CombineCSS() As String
    Dim allCSS As String = String.Empty

    For Each fi As FileInfo In Logic.Files.GetFiles("~/Content/CSS/", "css")
        Using sr As New StreamReader(fi.FullName)
            allCSS += sr.ReadToEnd()
        End Using
    Next

    allCSS = CssSprite.CleanCSS(allCSS)

    allCSS = Compress(allCSS)

    Return allCSS
End Function

Are you done?

You're sort of done. I've only given you the loose bits of the solution, and you still have to serve the sprite itself as an image. So far I haven't shown you my code for the actual CSS sprite MVC Controller Action/Generic Handler, which is needed for you to use the CSS sprite, after all.
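Stripped of the ASP.NET plumbing, CacheBase is just a get-or-compute cache. This Python sketch (a plain dict stands in for HttpContext.Current.Cache, and the key and value names are invented) shows the behaviour the class is after:

```python
class CacheBase:
    """Get-or-compute cache sketch. `method` plays the role of the
    Func<T> delegate; `_store` stands in for the ASP.NET cache."""
    _store = {}

    def __init__(self, cache_key, method):
        self.cache_key = cache_key
        self.method = method

    def get(self):
        # compute and insert only on a cache miss
        if self.cache_key not in self._store:
            self._store[self.cache_key] = self.method()
        return self._store[self.cache_key]

    def delete(self):
        self._store.pop(self.cache_key, None)

calls = []
cache = CacheBase("CssSprite_1", lambda: calls.append(1) or "sprite-bytes")
cache.get()
cache.get()
print(len(calls))  # → 1: the delegate ran only once
```

The expiry and priority settings in the real class are handled by the ASP.NET cache itself; the sketch only captures the miss-then-populate control flow.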
//C#

//--------MVC--------
[ControllerAction]
public void Sprite(string id)
{
    Response.ContentType = "image/png";

    Bitmap bmp = Logic.CssSprite.GetCachedSprite();

    using (MemoryStream stream = new MemoryStream())
    {
        bmp.Save(stream, System.Drawing.Imaging.ImageFormat.Png);
        Response.BinaryWrite(stream.ToArray());
    }

    if (id != null)
        Response.Cache.SetExpires(DateTime.Now.AddYears(3));
}

//-------WebForms-----
public void ProcessRequest(HttpContext context)
{
    context.Response.ContentType = "image/png";

    Bitmap bmp = Logic.CssSprite.GetCachedSprite();

    using (MemoryStream stream = new MemoryStream())
    {
        bmp.Save(stream, System.Drawing.Imaging.ImageFormat.Png);
        context.Response.BinaryWrite(stream.ToArray());
    }

    if (context.Request.QueryString["id"] != null)
        context.Response.Cache.SetExpires(DateTime.Now.AddYears(3));
}

'VB
'--------MVC--------
<ControllerAction()> _
Public Sub Sprite(ByVal id As String)
    Response.ContentType = "image/png"

    Dim bmp As Bitmap = Logic.CssSprite.GetCachedSprite()

    Using stream As New MemoryStream()
        bmp.Save(stream, System.Drawing.Imaging.ImageFormat.Png)
        Response.BinaryWrite(stream.ToArray())
    End Using

    If Not id Is Nothing Then
        Response.Cache.SetExpires(DateTime.Now.AddYears(3))
    End If
End Sub

'-------WebForms-----
Public Sub ProcessRequest(ByVal context As HttpContext)
    context.Response.ContentType = "image/png"

    Dim bmp As Bitmap = Logic.CssSprite.GetCachedSprite()

    Using stream As New MemoryStream()
        bmp.Save(stream, System.Drawing.Imaging.ImageFormat.Png)
        context.Response.BinaryWrite(stream.ToArray())
    End Using

    If Not context.Request.QueryString("id") Is Nothing Then
        context.Response.Cache.SetExpires(DateTime.Now.AddYears(3))
    End If
End Sub

OK, NOW you're done

There you have it. A nice, clean CSS sprite generated for you.
This should save you some time in the long run, and it will certainly help your users avoid waiting too long for your website to load. If you put 10 images into the sprite, for instance, you save your users 9 requests to the server. That is HUGE for performance. And if a user has a primed cache, the browser doesn't need to make any request to the server at all until your distribution number changes. So in the end, we all win with CSS sprites: you get them generated for you at runtime, your users don't have to wait, and your website will be leaner and meaner!
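The request arithmetic above can be stated as a one-liner; this tiny Python sketch (the function names are mine, not from the post) makes the 10-images-9-requests claim explicit:

```python
def requests_for_page(n_images, sprited, cache_primed=False):
    """How many image requests a page costs under each strategy."""
    if cache_primed:
        return 0                      # everything served from the browser cache
    return 1 if sprited else n_images # one sprite vs. one request per image

saved = requests_for_page(10, sprited=False) - requests_for_page(10, sprited=True)
print(saved)  # → 9
```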
http://weblogs.asp.net/zowens/archive/2008/03
Hi Due to impressive work by Ronan Lamy, we're now able to split RPython and PyPy. Note that the fact of splitting this is not up to discussion, however, how we go about it is. During discussions with Armin we came up with the following plan: * We make a copy of pypy repo called rpython. it'll still live under pypy team on bitbucket. * We'll rename toplevel package of pypy to rpython, for the rpython part. Since we need to change ALL THE IMPORTS, we can do refactorings of imports now. Proposed changes: * to be moved from pypy to rpython: annotation, translator, rlib * pypy.config must be split somehow, same for tool, bin and for doc * move pypy.rpython namespace to rpython.rtyper * move pypy.rpython.lltypesystem to rpython.lltypesystem, same for ootype * pypy.rpython.memory becomes rpython.gc * pypy.objspace.flow becomes rpython.flowspace * testrunner and dotviewer can become independent packages * _pytest stays with pypy, however for rpython you can use whatever version of py.test you have installed. rpython test suite should also however be runnable under current contents of _pytest * RPython will come with setup.py and be a "normal" python package Cheers, fijal & armin
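For a sense of scale, the proposed renames boil down to a mechanical prefix rewrite over import paths. The sketch below (Python; the mapping just restates the plan above and is not an official migration script) shows how old module names would translate:

```python
# old pypy.* prefix -> new rpython.* prefix, per the plan above;
# more specific prefixes are listed first on purpose
RENAMES = [
    ("pypy.rpython.lltypesystem", "rpython.lltypesystem"),
    ("pypy.rpython.ootypesystem", "rpython.ootypesystem"),
    ("pypy.rpython.memory", "rpython.gc"),
    ("pypy.rpython", "rpython.rtyper"),
    ("pypy.objspace.flow", "rpython.flowspace"),
    ("pypy.annotation", "rpython.annotation"),
    ("pypy.translator", "rpython.translator"),
    ("pypy.rlib", "rpython.rlib"),
]

def rewrite(module):
    for old, new in RENAMES:
        if module == old or module.startswith(old + "."):
            return new + module[len(old):]
    return module  # interpreter-side modules stay under pypy.*

print(rewrite("pypy.rpython.lltypesystem.lltype"))  # → rpython.lltypesystem.lltype
print(rewrite("pypy.rpython.rmodel"))               # → rpython.rtyper.rmodel
```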
https://mail.python.org/pipermail/pypy-dev/2012-October/010602.html
-- |
-- Description: Higher-order Events
--
-- 'Event's and combinators for them.
module Events.Events(
   Result(..),

   Event(..),
      -- The event type.  Instance of HasEvent and Monad.
   HasEvent(..),
      -- things which can be lifted to an Event

   never,
      -- the event which never happens
   always,
      -- the event which always happens
   sync, poll,
      -- synchronises or polls an event

   (>>>=), (>>>),
      -- wraps Events
   (+>),
      -- choice between Events
   choose,
      -- chooses between many Events.

   tryEV,
      -- Replaces an event by one which checks for errors in the
      -- continuations.
   computeEvent,
      -- Allows you to compute the event with an IO action.
   wrapAbort,
      -- Allows you to specify pre- and post-registration actions.
      -- The post-registration action is executed when the pre-registration
      -- was, and some other event is registered.
   noWait,
      -- :: Event a -> Event ()
      -- Execute event asynchronously and immediately return.

   HasSend(..),
      -- overloaded send function
   HasReceive(..),
      -- overloaded receive function

   -- functions to send and receive without going via events.
   sendIO,
      -- :: HasSend chan => chan a -> a -> IO ()
   receiveIO,
      -- :: HasReceive chan => chan a -> IO a

   allowWhile,
      -- :: Event () -> Event a -> Event a
      -- Allow one event to happen while waiting for another.

   Request(..),
      -- Datatype encapsulating server calls which get a delayed
      -- response.
   request,
      -- :: Request a b -> a -> IO b
      -- Simple use of Request.
   doRequest,
      -- :: Request a b -> a -> IO (Event b,IO ())
      -- More complicated use

   spawnEvent,
      -- :: Event () -> IO (IO ())
      -- spawnEvent syncs on the given event in a thread.
      -- the returned action should be executed to kill the thread.
   getAllQueued,
      -- :: Event a -> IO [a]
      -- getAllQueued synchronises on the event as much as possible
      -- without having to wait.

   -- Functions for monadic events.  (Don't use these directly, they
   -- are only here so GHC can export the inlined versions of them . . .)
   thenGetEvent,
      -- :: Event a -> (a -> Event b) -> Event b
   thenEvent,
      -- :: Event a -> Event b -> Event b
   doneEvent,
      -- :: a -> Event a
   syncNoWait
   ) where

import Control.Exception
import Control.Concurrent

import Util.Computation

import Events.Toggle
import Events.Spawn

data Result =
      Immediate
   |  Awaiting (IO ())
   |  AwaitingAlways (IO ())

-- ----------------------------------------------------------------------
-- Events and the HasEvent class.
-- ----------------------------------------------------------------------

newtype Event a = Event (Toggle -> (IO a -> IO ()) -> IO Result)
-- The function inside an Event registers that event for the synchronisation
-- associated with this toggle.  The three results
-- can be interpreted as follows:
-- Immediate can occur in two cases.  Either
-- (1) the event was immediately matched and we performed the provided
--     action fun with an action returning an a.
-- (2) the event was not immediately matched because someone else had
--     already flipped the toggle.
-- In both cases, the event is not registered after the function returns.
-- Awaiting action means that the event was registered.
-- The caller should always ensure that the action is executed after the
-- synchronisation has succeeded.
-- AwaitingAlways action means that the action must be done after the
-- synchronisation whether or not the event succeeds.

-- | HasEvent represents those event-like things which can be converted to
-- an event.
class HasEvent eventType where
   ---
   -- converts to an event.
   toEvent :: eventType a -> Event a

instance HasEvent Event where
   toEvent = id

-- ----------------------------------------------------------------------
-- Three trivial events.
-- ----------------------------------------------------------------------

-- | The event that never happens
never :: Event a
never = Event (\ toggle aActSink -> return (Awaiting done))

-- | The event that always happens, immediately
always :: IO a -> Event a
always aAction =
   Event (\ toggle aActSink ->
      do
         ifToggle toggle (aActSink aAction)
         return Immediate
      )

-- ----------------------------------------------------------------------
-- Continuations
-- ----------------------------------------------------------------------

-- | Attach an action to be done after the event occurs.
(>>>=) :: Event a -> (a -> IO b) -> Event b
(>>>=) (Event registerFn) continuation =
   Event (\ toggle bActionSink ->
      registerFn toggle (\ aAction ->
         bActionSink (
            do
               a <- aAction
               continuation a
            )
         )
      )

infixl 2 >>>=

-- | Attach an action to be done after the event occurs.
(>>>) :: Event a -> IO b -> Event b
(>>>) event continuation = event >>>= (const continuation)

infixl 2 >>>

{-# INLINE (>>>) #-}

-- ----------------------------------------------------------------------
-- Choice
-- ----------------------------------------------------------------------

-- | Choose between two events.  The first one takes priority.
(+>) :: Event a -> Event a -> Event a
(+>) (Event registerFn1) (Event registerFn2) =
   Event (\ toggle aActSink ->
      do
         status1 <- registerFn1 toggle aActSink
         let
            doSecond postAction1 =
               do
                  let
                     doThird postAction2 =
                        return (AwaitingAlways (
                           do
                              postAction1
                              postAction2
                           ))
                  status2 <- registerFn2 toggle aActSink
                  case status2 of
                     Immediate ->
                        do
                           postAction1
                           return Immediate
                     Awaiting postAction2 -> doThird postAction2
                     AwaitingAlways postAction2 -> doThird postAction2
         case status1 of
            Immediate -> return Immediate
            Awaiting postAction1 -> doSecond postAction1
            AwaitingAlways postAction1 -> doSecond postAction1
      )

infixl 1 +>

-- | Choose between a list of events.
choose :: [Event a] -> Event a
choose [] = never
choose nonEmpty = foldr1 (+>) nonEmpty

-- ----------------------------------------------------------------------
-- Catching Errors
-- ----------------------------------------------------------------------

-- | Catch an error if it occurs during an action attached to an event.
tryEV :: Event a -> Event (Either SomeException a)
tryEV (Event registerFn) =
   Event (\ toggle errorOraSink ->
      registerFn toggle (\ aAct ->
         errorOraSink (Control.Exception.try aAct)
         )
      )

-- ----------------------------------------------------------------------
-- Allowing an event to vary
-- ----------------------------------------------------------------------

-- | Construct a new event using an action which is called at each
-- synchronisation
computeEvent :: IO (Event a) -> Event a
computeEvent getEvent =
   Event (\ toggle aActSink ->
      do
         (Event registerFn) <- getEvent
         registerFn toggle aActSink
      )

-- ----------------------------------------------------------------------
-- Getting information about when an event is aborted.
-- ----------------------------------------------------------------------

-- | When we synchronise on wrapAbort preAction,
-- preAction is evaluated to yield (event,postAction).
-- Then exactly one of the following:
-- (1) the event is satisfied, and postAction is not done.
-- (2) some other event in this synchronisation is satisfied
--     (so this one isn\'t), and postAction is done.
-- (3) no event is satisfied (and so we will deadlock).
wrapAbort :: IO (Event a,IO ()) -> Event a
wrapAbort preAction =
   computeEvent (
      do
         postDone <- newSimpleToggle
         (Event registerFn,postAction) <- preAction
         let
            doAfter = ifSimpleToggle postDone postAction
         return (Event (\ toggle aActSink ->
            do
               status <- registerFn toggle (\ aAct ->
                  do
                     simpleToggle postDone
                     aActSink aAct
                  )
               case status of
                  -- Even with Immediate we must do doAfter, as
                  -- the toggle may have been flipped by someone else.
                  Immediate -> (doAfter >> return Immediate)
                  Awaiting action -> return (Awaiting (doAfter >> action))
                  AwaitingAlways action ->
                     return (AwaitingAlways (doAfter >> action))
            ))
      )

-- ----------------------------------------------------------------------
-- Synchronisation and Polling.
-- Sigh.  Because GHC makes takeMVar/putMVar interruptible, I don't
-- know how to ensure that the postAction will get done if an
-- asynchronous exception is raised.
-- ----------------------------------------------------------------------

-- | Synchronise on an event, waiting on it until it happens, then returning
-- the attached value.
sync :: Event a -> IO a
sync (Event registerFn) =
   do
      toggle <- newToggle
      aActMVar <- newEmptyMVar
      status <- registerFn toggle (\ aAct -> putMVar aActMVar aAct)
      aAct <- takeMVar aActMVar
      case status of
         AwaitingAlways postAction -> postAction
         _ -> done
      aAct

-- | Synchronise on an event, but return immediately with Nothing if it
-- can\'t be satisfied at once.
poll :: Event a -> IO (Maybe a)
poll event =
   sync (
         (event >>>= (\ a -> return (Just a)))
      +> (always (return Nothing))
      )

-- ----------------------------------------------------------------------
-- The noWait combinator
-- ----------------------------------------------------------------------

-- | Turns an event into one which is always satisfied at once but registers
-- the value to be done later.  WARNING - only to be used with events without
-- actions attached, as any actions will not get done.  noWait is typically
-- used with send events, where we don\'t want to wait for someone to pick up
-- the value.
noWait :: Event a -> Event ()
noWait (Event registerFn) =
   Event (\ toggle unitActSink ->
      do
         ifToggle toggle (
            do
               toggle' <- newToggle
               registerFn toggle' (const done)
               unitActSink (return ())
               done
            )
         return Immediate
      )

-- | Register an event as synchronised but don\'t wait for it to complete.
-- WARNING - only to be used with events without
-- actions attached, as any actions will not get done.  noWait is typically
-- used with send events, where we don\'t want to wait for someone to pick up
-- the value.
-- synchronise on something without waiting
syncNoWait :: Event a -> IO ()
syncNoWait (Event registerFn) =
   do
      toggle <- newToggle
      registerFn toggle (const done)
      done

{-# RULES
"syncNoWait" forall event .
   sync (noWait event) = syncNoWait event
"syncNoWait2" forall event continuation .
   sync ((noWait event) >>>= continuation) =
      (syncNoWait event >> continuation ())
   #-}

-- ----------------------------------------------------------------------
-- The HasSend and HasReceive classes
-- ----------------------------------------------------------------------

-- | HasSend represents things like channels on which we can send values
class HasSend chan where
   ---
   -- Returns an event which corresponds to sending something on a channel.
   -- For a synchronous channel (most channels are synchronous) this event
   -- is not satisfied until someone accepts the value.
   send :: chan a -> a -> Event ()

-- | HasReceive represents things like channels from which we can take values.
class HasReceive chan where
   ---
   -- Returns an event which corresponds to something arriving on a channel.
   receive :: chan a -> Event a

-- Two handy abbreviations

-- | Send a value along a channel (as an IO action)
sendIO :: HasSend chan => chan a -> a -> IO ()
sendIO chan msg = sync (send chan msg)

-- | Get a value from a channel (as an IO action)
receiveIO :: HasReceive chan => chan a -> IO a
receiveIO chan = sync (receive chan)

-- ----------------------------------------------------------------------
-- Monadic Events
-- We include some extra GHC magic here, so that using "always"
-- in monadic events is not especially inefficient.
-- ----------------------------------------------------------------------

instance Monad Event where
   (>>=) = thenGetEvent
   (>>) = thenEvent
   return = doneEvent
   fail str = always (ioError (userError str))

thenGetEvent :: Event a -> (a -> Event b) -> Event b
thenGetEvent event1 getEvent2 =
   event1 >>>= (\ val -> sync(getEvent2 val))

thenEvent :: Event a -> Event b -> Event b
thenEvent event1 event2 =
   event1 >>> (sync(event2))

doneEvent :: a -> Event a
doneEvent val = always (return val)

{-# INLINE thenGetEvent #-}
{-# INLINE thenEvent #-}
{-# INLINE doneEvent #-}

-- Rules allowing us to use "always" in monadic events efficiently.
{-# RULES
"always1" forall action .
   sync (always action) = action
"always" forall action continuation .
   (>>>=) (always action) continuation = always (action >>= continuation)
   #-}

-- ----------------------------------------------------------------------
-- Other miscellaneous event functions.
-- ----------------------------------------------------------------------

-- | allowWhile event1 event2 waits for event2, while handling event1.
allowWhile :: Event () -> Event a -> Event a
allowWhile event1 event2 =
   event2 +> (do
      event1
      allowWhile event1 event2
      )

data Request a b = Request (a -> IO (Event b,IO ()))
-- A Request operation represents a call to a server to evaluate
-- a function :: a->b.  The Event b is activated with the result.
-- The client should call the supplied action if the event is
-- no longer needed.

request :: Request a b -> a -> IO b
request rq a =
   do
      (event,_) <- doRequest rq a
      sync event

doRequest :: Request a b -> a -> IO (Event b,IO ())
doRequest (Request rqFn) request = rqFn request

-- | Synchronise on an event in a different thread.
-- The kill action it returns is unsafe since it can cause deadlocks if
-- it occurs at an awkward moment.  To avoid this use spawnEvent, if possible.
spawnEvent :: Event () -> IO (IO ())
spawnEvent reactor = spawn (sync reactor)

-- | get all we can get from the event without waiting.
getAllQueued :: Event a -> IO [a]
getAllQueued event = gAQ event []
   where
      gAQ event acc =
         do
            maybeA <- poll event
            case maybeA of
               Nothing -> return (reverse acc)
               Just a -> gAQ event (a:acc)
http://hackage.haskell.org/package/uni-events-2.2.1.0/docs/src/Events-Events.html
Consuming a WebService From Mono

A web service makes it easy to access data and services offered by other people or running on a different machine. For example, Google provides a web service that allows your application to access many of the services it provides. One capability provided by Google is the spell checker they use when you enter a search term. By consuming the web service, you can add a spell check function to your own application.

First Steps: Creating the Local Stub

To create the local library that will invoke Google's web service, we'll use their WSDL file. You can get the Google WSDL files at. To generate the C# stub source type:

wsdl

This will create a file called GoogleSearchService.cs, which contains the behind-the-scenes code needed to access the web service. Compile this file with:

mcs /target:library GoogleSearchService.cs -r:System.Web.Services

The above command line instructs the compiler to generate a library, and to reference (-r:) the System.Web.Services library. Now you have the final stub assembly: GoogleSearchService.dll

Google License key

Google locks the web service with a special key to prevent misuse. You can obtain a key at. This key allows ~50 actions per day. You must replace the sample key with yours.

Example 1: A Console Application

Now that you have an assembly for the web service, you can use it just like any other assembly.

The Code:

using System;

class SpellChecker
{
    public static void Main (string [] args)
    {
        // Your license key
        string googleKey = "...";

        // Create the service proxy and ask for a spelling suggestion
        GoogleSearchService s = new GoogleSearchService ();
        string suggestion = s.doSpellingSuggestion (googleKey, args[0]);

        // Display the suggestion, if any
        if (suggestion == null)
            Console.WriteLine ("[No suggestion]");
        else
            Console.WriteLine (suggestion);
    }
}

Compile it like this:

mcs /r:GoogleSearchService.dll spellchecker.cs

Run it like this:

mono spellchecker.exe pupy

And it should print out the corrected word:

puppy

Example 2: A Webpage

You can also call a web service from a web page. First you will have to drop the stub GoogleSearchService.dll in your ASP.NET bin directory.
The example is very simple to understand.

<%@ Page Language="C#" %>
<script runat="server">
void Page_Load (object sender, EventArgs e)
{
    // Put your license key here
    string googleKey = "...";

    GoogleSearchService s = new GoogleSearchService ();
    string suggestion = s.doSpellingSuggestion (googleKey, TextBox1.Text);

    // Display the suggestion, if any
    if (suggestion == null)
        Label1.Text = "[No suggestion]";
    else
        Label1.Text = suggestion;
}
</script>
<html>
<body>
<form runat="server">
<p><asp:TextBox id="TextBox1" runat="server"></asp:TextBox></p>
<p><asp:Label id="Label1" runat="server"></asp:Label></p>
<p><input type="submit" value="Submit" runat="server"></input></p>
</form>
</body>
</html>

This will look like this:

[Image: google.png]

The code where the web service is accessed is marked red. To test the service enter "spelll checker", for example, and watch it correct your spelling.

Documentation

If you want to do more, you will require documentation of the web service. For Google's API, one place to look is in the comments of the WSDL file. Individual web service providers are responsible for providing good documentation, though you may be able to use the C# introspection tools or read the source generated by wsdl for a slightly better idea of how any given web service works.

Contributors

Johannes Roith, Shane Landrum
http://www.mono-project.com/Consuming_a_WebService