I am new to Python... can anyone help me with a function to clear the Python shell, like system("cls") or clrscr()?

I'm on a GNU/Linux system. import os; os.system('clear') is one way to do it here, but let's face it - that's way too much typing just to clear the screen on occasion, so I just press Ctrl+L. This works fine when running Python interactively, when using IPython, and in many other applications that run inside a *NIX terminal emulator. It probably won't work with IDLE, since that's a Tk app.

Hell, I just use it for quick testing of something, so I press Ctrl+D and then usually close that terminal window. That is a bit much typing for my taste.

If you are using a Python IDE on Windows, os.system('cls') might help you achieve the task. As far as Linux is concerned, import os; os.system('clear') can be used.
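A cross-platform helper along the lines suggested above (a sketch; it just shells out to the platform's clear command, so it works in a real terminal but not in IDLE, and the function name clear_screen is made up here):

```python
import os

def clear_screen():
    """Clear the terminal: 'cls' on Windows ('nt'), 'clear' elsewhere."""
    command = 'cls' if os.name == 'nt' else 'clear'
    # os.system returns the command's exit status as an int
    return os.system(command)
```

In an interactive session you would then type clear_screen() instead of the full os.system('clear'), though Ctrl+L is still less typing.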
https://www.sitepoint.com/community/t/clearing-python-shell/21818
CC-MAIN-2017-09
refinedweb
169
75.4
SCDJWS Study Guide: XML Schema

Introduction to XML Schema

An XML schema describes an XML markup language. Specifically, it defines which elements and attributes are used in a markup language, how they are ordered and nested, and what their data types are.

The XML specification includes the Document Type Definition (DTD), which can be used to describe XML markup languages and to validate XML documents. While DTDs have proven very useful over the years, they are limited. The W3C created a new way to describe markup languages, called XML schema. XML schema is an XML-based alternative to DTD. An XML schema not only describes the structure of an XML document, but also addresses data typing.

What is an XML Schema?

An XML schema is a predefined set of elements/attributes/values for defining "types"; these are the legal building blocks of an XML document. An XML schema does the following:
- defines elements that can appear in a document
- defines attributes that can appear in a document
- defines which elements are child elements
- defines the sequence in which the child elements can appear
- defines the number of child elements
- defines whether an element is empty or can include text
- defines default values for attributes

XML schema is an alternative and update to DTD. The main objectives of XML schema include:
- XML format (and therefore accessibility to XML parsing tools)
- extensibility, i.e., by allowing other schemas to be imported
- finer control over data typing
- support for XML namespaces

The following is an example of a schema for a comment element, comment.xsd:

<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <xsd:element name="comment" type="xsd:string"/>
</xsd:schema>

The following is an example of an instance, comment.xml:

<x xmlns="">
  <comment>This is a great comment!</comment>
</x>

An XML schema document can play the same role for an XML document as an external DTD. It is included into a document to be validated using the XML schema model.
To reference an XML schema document from an XML document, add two XML Infoset attributes to the document element of the XML document:
- The first attribute declares the namespace for XMLSchema-instance. The prefix is normally xsi. The value of the attribute is fixed.
- The second attribute specifies the location of the XML schema document. In the simplest case, there is no namespace associated with this location. Therefore the value of the attribute is the file location relative to the current XML file.

At the start of your schema you need to place a few lines of code, known as a prolog, which identifies the markup that follows as a schema and defines what type of schema it is. The prolog comes immediately after the XML declaration, which is the first line of code (after any comments).

The Xerces project provides an example of an XML document that refers to an XML schema document. The document element of the XML document is personnel:

<?xml version="1.0" encoding="UTF-8"?>
<personnel xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xsi:noNamespaceSchemaLocation="personal.xsd">
  ...
</personnel>

What is in an XML Schema file?

The contents of an XML schema file are fully described in a DTD, which is an appendix of the first of the two parts of the XML schema standard. Like any XML document, an XML schema document contains one document element. Its name is <schema>. Like a DTD, a schema can contain many different kinds of statements, but the most common ones are:
- element declarations (similar to <!ELEMENT ...> declarations in a DTD)
- type definitions (akin to parameter entities in a DTD)

The attribute list declarations, which are a common part of DTDs, can be in one of two places in an XML schema document: embedded within an element declaration, or as a standalone group, which will be referenced within one or more element declarations.
XML Schemas are the Successors of DTDs

A significant difference between schemas and DTDs is that schemas define many basic data types: string, boolean, float, double, decimal, timeDuration, recurringDuration, binary, and uri. Each of these so-called primitive data types has a distinctive lexical representation and other characteristics which delimit its possible values. By typing the data enclosed in an XML document, a schema makes the document computable in ways not possible with the simple (mostly string-based) data types that are present in DTDs.

XML schema was originally proposed by Microsoft, but is now a W3C recommendation. We think that very soon XML schemas will be used in Web applications as a replacement for DTDs. Here are the reasons why:
- XML schemas are easier to learn than DTDs
- XML schemas are extensible to future additions
- XML schemas are richer and more useful than DTDs
- XML schemas are written in XML
- XML schemas support data types
- XML schemas support namespaces

XML Schema Instance Document

An XML schema instance document is an XML document that conforms to a particular schema. Neither instances nor schemas need to exist as documents. They could be any of:
- Streams of bytes sent between applications
- Fields in a database record
- Collections of XML Infoset "Information Items"

Schema Elements and Subelements

- Each schema has a schema element and a variety of subelements
- Subelements determine the appearance of elements and their content in instance documents
- Types of subelements are element, complexType, and simpleType
- Elements begin with xsd: to associate them with the XML schema namespace through the declaration

<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema">

where the xsd prefix identifies elements as part of the XML schema language.

Resources--XML Schema specification:
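Putting the referencing rules above together, a minimal self-contained schema/instance pair might look like the following sketch (the element name and schema location are illustrative, not taken from the guide; only the two namespace URIs are fixed by the standard):

```xml
<!-- comment.xsd: declares a single element with typed (string) content -->
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <xsd:element name="comment" type="xsd:string"/>
</xsd:schema>
```

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- comment.xml: the xsi attributes point a validating parser at comment.xsd -->
<comment xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:noNamespaceSchemaLocation="comment.xsd">This is a great comment!</comment>
```

Because the comment element is in no namespace, xsi:noNamespaceSchemaLocation is the appropriate referencing attribute; a namespaced element would use xsi:schemaLocation instead.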
http://xyzws.com/scdjws/SGS13/1
CC-MAIN-2018-39
refinedweb
908
50.06
I am making a C++ program that will calculate how many cannonballs there are in a cannonball pyramid, or tetrahedron (a 4-sided pyramid, not your typical 5-sided one), depending on how many cannonballs are on one side of the bottom layer, or how many layers there are in the tetrahedron. So let's say there are 10 cannonballs on one side of the bottom layer, making this tetrahedron 10 layers tall. There would be 55 cannonballs total on the bottom layer (10+9+8+7+6+5+4+3+2+1=55). The next layer would have 45 (55-10=45), and the next 36 (45-9=36), and so on... 55+45+36+28+21+15+10+6+3+1 = 220, so there would be 220 cannonballs total in this tetrahedron.

So I know what I want my program to do, I just don't know how to do it. The program right now will take 10 layers, or 10 cannonballs on a side, and turn it into 55 using recursion (I can use recursion or a nested loop). However, I have no idea what I am going to do to get it to find out what the next layer needs to be, then add the layers together, eventually getting the sum of all layers, which is 220. The pyramidChart() function is just a reference so you can see how many cannonballs are in a tetrahedron 1-10 layers high. Any help is appreciated! Thanks!
Code:

#include <iostream>
using namespace std;

int cannonBalls(int side);

int main()
{
    cout << "The first layer has " << cannonBalls(10) << " cannonballs." << endl;
    return 0;
}

int cannonBalls(int side)
{
    cout << "The number of cannonballs is " << side << endl;
    // if base case return 1
    if (side == 1) {
        return 1;
    } else {
        // otherwise return side + fn(s-1)
        return side + cannonBalls(side - 1);
    }
}

int pyramidChart()
{
    int n = 0;
    int cannonBalls = 0;
    cin >> n;
    switch (n) {
        case 1:  cannonBalls = 1; break;
        case 2:  cannonBalls = 3+1; break;
        case 3:  cannonBalls = 6+3+1; break;
        case 4:  cannonBalls = 10+6+3+1; break;
        case 5:  cannonBalls = 15+10+6+3+1; break;
        case 6:  cannonBalls = 21+15+10+6+3+1; break;
        case 7:  cannonBalls = 28+21+15+10+6+3+1; break;
        case 8:  cannonBalls = 36+28+21+15+10+6+3+1; break;
        case 9:  cannonBalls = 45+36+28+21+15+10+6+3+1; break;
        case 10: cannonBalls = 55+45+36+28+21+15+10+6+3+1; break;
        default: cannonBalls = 0;
    }
    return cannonBalls;
}
https://cboard.cprogramming.com/cplusplus-programming/111081-cplusplus-cannonball-pyramid-calculator-tetrahedron.html
CC-MAIN-2017-39
refinedweb
423
61.8
A friendly place for programming greenhorns! Forums Java » Game Development

Refactoring

Post by: Scotty Young , Greenhorn Dec 06, 2011 16:36:41

I'm trying to develop a simple Java version of Pac-Man. I'm wondering if anyone can take a look and give me some advice on splitting it up into proper OO classes. I'm not sure where to start. I've attached my classes so far...

Main JFrame class:

import javax.swing.JFrame;
import javax.swing.JComponent;
import javax.swing.JPanel;
import java.awt.Color;
import java.awt.Dimension;
import java.awt.event.*;

class TuPacMan extends JFrame {

    public TuPacMan() {
        this.add(new PacMan());
        pack();
        this.setVisible(true);
        this.getContentPane().setBackground(Color.BLACK);
        this.setDefaultCloseOperation(EXIT_ON_CLOSE);
    }

    public static void main(String[] args) {
        new TuPacMan();
    }
}

The pacman/paint class:

import java.awt.Image;
import java.awt.Graphics;
import java.awt.Graphics2D;
import javax.swing.JComponent;
import java.awt.Color;
import java.awt.event.KeyEvent;
import java.awt.event.*;
import java.awt.Dimension;

class PacMan extends JComponent implements Runnable, KeyListener {

    public static int height = 400;
    public static int width = 400;
    public boolean left, right, up, down;
    private int PacManX, PacManY;

    public PacMan() {
        this.setPreferredSize(new Dimension(height, width));
        addKeyListener(this);
        this.setFocusable(true);
        setDoubleBuffered(true);
        new Thread(this).start();
    }

    public void paintComponent(Graphics g) {
        Image image = createImage(height, width);
        Graphics g2 = image.getGraphics();
        g2.setColor(Color.BLACK);
        g2.fillRect(0, 0, height, width);
        g2.setColor(Color.YELLOW);
        g2.fillRect(PacManX, PacManY, 20, 20);
        g2.dispose();
        g.drawImage(image, 0, 0, null);
    }

    public void run() {
        while (true) {
            repaint();
            move();
            try {
                Thread.sleep(20);
            } catch (Exception e) {
            }
        }
    }

    public void move() {
        if (up && PacManY >= 0 && PacManY <= 380) {
            PacManY = PacManY - 2;
        } else if (down && PacManY >= 0 && PacManY <= 380) {
            PacManY = PacManY + 2;
        } else if (left && PacManX >= 0 && PacManX <= 380) {
            PacManX = PacManX - 2;
        } else if (right && PacManX >= 0 && PacManX <= 380) {
            PacManX = PacManX + 2;
        }
        if (PacManX < 0)   { PacManX = 0; }
        if (PacManX > 380) { PacManX = 380; }
        if (PacManY < 0)   { PacManY = 0; }
        if (PacManY > 380) { PacManY = 380; }
    }

    public void keyPressed(KeyEvent e) {
        int key = e.getKeyCode();
        if (key == KeyEvent.VK_LEFT) {
            up = false; down = false; right = false; left = true;
        } else if (key == KeyEvent.VK_RIGHT) {
            up = false; down = false; right = true; left = false;
        } else if (key == KeyEvent.VK_UP) {
            up = true; down = false; right = false; left = false;
        } else if (key == KeyEvent.VK_DOWN) {
            up = false; down = true; right = false; left = false;
        }
    }

    public void keyReleased(KeyEvent e) {
        //
    }

    public void keyTyped(KeyEvent e) {
        //
    }
}

Post by: Stephan van Hulst , Saloon Keeper Dec 06, 2011 18:42:01

Hi Scotty. A couple of important points:

Don't let your main class extend JFrame, and you especially shouldn't let your PacMan class implement Runnable and any listeners. Always prefer composition over inheritance. This means the frame should be a part of your main class; your main class shouldn't be a frame. Your PacMan class should have something that runs the game and listens for events; it shouldn't take that responsibility itself. All it should be responsible for is drawing. Use names accordingly too. PacMan sounds like it would be part of the game model. The graphical component should be called PacmanPanel or something.

When you call methods that fiddle with Swing components, make sure they are called on the Event Dispatch Thread. Your TuPacMan class is currently being constructed on the main thread.

Make all your fields private. *Always*. Seriously. There are moments where it can be beneficial to make fields package private or protected, but if you don't know for sure whether such a moment has presented itself, it probably hasn't. So just make your fields private.

Do not start new threads in constructors. Threads should be started by calling a method after the object has been constructed.
*Never* pass a "this" reference from the constructor. This is very dangerous.

Try to keep members as private as possible. paintComponent() was protected, and should remain so.

Why are you drawing to an image, and then copying the image to the component? Just draw to the component directly. I also think Swing is double buffered by default. No need to call the method.

Use the @Override annotation when you're overriding methods, such as paintComponent().

Don't use Thread.sleep(). You should use a Swing timer instead.

When implementing listeners, use adapters. KeyAdapter is a great starting point if you want to implement a KeyListener. The same goes for WindowListener/WindowAdapter and MouseListener/MouseAdapter.

Here's an example demonstrating some of these concepts:

import javax.swing.*;

final class PacmanForm {

    PacmanForm() {
        JFrame frame = new JFrame("My Pacman");
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        PacmanPanel display = new PacmanPanel();
        frame.add(display);
        frame.setSize(400, 400);
        frame.setVisible(true);
        display.start();
    }

    public static void main(String... args) {
        SwingUtilities.invokeLater(new Runnable() {
            public void run() {
                new PacmanForm();
            }
        });
    }
}

import java.awt.*;
import java.awt.event.*;
import javax.swing.*;

final class PacmanPanel extends JPanel {

    private final Timer timer;
    private long lastUpdate;
    private Maze maze;

    PacmanPanel() {
        setFocusable(true);
        setOpaque(true);
        maze = new Maze();
        timer = new Timer(20, new ActionListener() {
            public void actionPerformed(ActionEvent e) {
                updateScene();
            }
        });
        addKeyListener(new KeyHandler(maze.getPacman()));
    }

    void start() {
        lastUpdate = System.currentTimeMillis();
        timer.start();
    }

    void pause() {
        timer.stop();
    }

    private void updateScene() {
        long now = System.currentTimeMillis();
        long lapse = now - lastUpdate;
        lastUpdate = now;
        maze.update(lapse);
        repaint();
    }

    @Override
    protected void paintComponent(Graphics g) {
        Rectangle clip = g.getClipBounds();
        g.setColor(Color.BLACK);
        g.fillRect(clip.x, clip.y, clip.width, clip.height);
        // draw stuff from the maze
    }
}

import java.awt.event.*;

final class KeyHandler extends KeyAdapter {

    private final Pacman pacman;

    KeyHandler(Pacman pacman) {
        if (pacman == null)
            throw new NullPointerException();
        this.pacman = pacman;
    }

    @Override
    public void keyPressed(KeyEvent e) {
        switch (e.getKeyCode()) {
            case KeyEvent.VK_LEFT : pacman.doLeft (true); break;
            case KeyEvent.VK_RIGHT: pacman.doRight(true); break;
            case KeyEvent.VK_UP   : pacman.doUp   (true); break;
            case KeyEvent.VK_DOWN : pacman.doDown (true); break;
        }
    }

    @Override
    public void keyReleased(KeyEvent e) {
        switch (e.getKeyCode()) {
            case KeyEvent.VK_LEFT : pacman.doLeft (false); break;
            case KeyEvent.VK_RIGHT: pacman.doRight(false); break;
            case KeyEvent.VK_UP   : pacman.doUp   (false); break;
            case KeyEvent.VK_DOWN : pacman.doDown (false); break;
        }
    }
}

Post by: Scotty Young , Greenhorn Dec 07, 2011 04:48:34

That's a great reply, thank you. I'm going to take all that into consideration and go back and change the code.
The reason I was drawing to an image first is because I encountered problems with flickering when drawing to the screen. That was my attempt at double buffering to remove the flickering.

Post by: Przemek Boryka , Ranch Hand Dec 08, 2011 14:11:53

Some time ago I made a game similar to Pac-Man. Below is the source code of the main class, where you can find methods responsible for:
- initial game settings
- game update // gameUpdate() (states of the game, collision detection)
- game render // gameRender() (render menu, render game-play)
- input actions (keyboard actions)
- mouse actions
and a special method "initFullScreen" which provides everything needed to show the game at the selected resolution in fullscreen. I'm not saying this is the best solution for your game, but it may be helpful to start.

public class FsJarMan {
    public static void main(String[] args) {
        FsJarManFrame fsJarManFrame = new FsJarManFrame(FrameRate.fps(45));
        fsJarManFrame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        fsJarManFrame.setVisible(true);
    }
}

class FsJarManFrame extends JFrame implements Runnable {

    private static int FWIDTH = 800;   // 800 x 600 screen resolution
    private static int FHEIGHT = 600;
    private static final int BUFFERS_NUMBER = 2; // double buffer

    private Thread gameThread = null;  // main thread
    private boolean gameRun = false;
    private boolean gamePause = false;
    private final long period;

    // -- below, action listeners (mouse, motion, key)
    private MouseActions mouseActions = new MouseActions();
    private MouseMotionActions mouseMotionActions = new MouseMotionActions();
    private KeyActions keyActions = new KeyActions();
    // ---------------------------------------------------

    private GraphicsDevice gd;
    private Graphics2D gScr;
    private BufferStrategy bufferStrategy;
    private Cursor transparentCursor, normalCursor;

    public FsJarManFrame(long period) {
        super("JarMan v. 1.0");
        this.period = period;
        transparentCursor = createHiddenCursor();
        normalCursor = this.getCursor();
        initFullScreen();
        addMouseListener(mouseActions);
        addMouseMotionListener(mouseMotionActions);
        addKeyListener(keyActions);
        // --------- Inits ------
        Images.init();   // images & sounds init
        SoundFx.init();
        // --------- End Inits ------
        // ---------- GAME OBJECTS --------------
        // ... game objects like the main character, ghosts, obstacles
        // ---------- END GAME OBJECTS ----------
        SoundFx.MENU_BACKGROUND_SOUND.playLoop();
        Game.setState(States.GAME_MENU);
        gameStart();   // from here we start the game
    }

    private void gameStart() {
        if (gameThread == null || !gameRun) {
            gameThread = new Thread(this);
            gameThread.start();
        }
    }

    public void run() {
        // main loop, works until gameRun is false
        gameRun = true;
        gamePause = false;
        long t1, t2, sleep;
        t1 = System.currentTimeMillis();
        while (gameRun) {
            if (!gamePause) {
                gameUpdate();
            }
            screenUpdate();
            t2 = System.currentTimeMillis() - t1;
            sleep = period - t2;
            if (sleep <= 0) {
                sleep = 5;
            }
            try {
                Thread.sleep(sleep);
            } catch (InterruptedException e) {}
            t1 = System.currentTimeMillis();
        }
        System.exit(0);
    }

    private Cursor createHiddenCursor() {
        int[] pixels = new int[16 * 16];
        Image image = Toolkit.getDefaultToolkit().createImage(
                new MemoryImageSource(16, 16, pixels, 0, 16));
        Cursor transparentCursor = Toolkit.getDefaultToolkit().createCustomCursor(
                image, new Point(0, 0), "invisibleCursor");
        return transparentCursor;
    }

    private void prepareGame() {
        // ... setup game play ... start conditions ...
    }

    // ---------------- GAME UPDATE ----------------
    private void gameUpdate() {
        switch (Game.getState()) {
            case GAME_MENU:
                // ... some action ...
                break;
            case GAME_PREPARE:
                // ... some action ...
                break;
            // your game states
        }
    }
    // ---------------- END GAME UPDATE --------------

    // ---------------- GAME RENDER ------------------
    private void gameRender() {
        gScr.setColor(Color.black);
        gScr.fillRect(0, 0, FWIDTH, FHEIGHT);
        gScr.setRenderingHint(RenderingHints.KEY_ANTIALIASING,
                RenderingHints.VALUE_ANTIALIAS_ON);  // antialiasing - game looks much better
        switch (Game.getState()) {
            case GAME_MENU:
                // ... here we can draw the menu ...
                break;
            case GAME_PREPARE:
                break;
            case GAME_PLAY:
                // ... here we draw the world, the main character and ghosts ...
                world.draw(gScr);
                jarMan.draw(gScr);
                for (Ghost g : ghosts)
                    g.draw(gScr);
                drawScores(gScr);
                break;
        }
    }
    // ---------------- END GAME RENDER --------------

    private void screenUpdate() {
        try {
            gScr = (Graphics2D) bufferStrategy.getDrawGraphics();
            gameRender();
            gScr.dispose();
            if (!bufferStrategy.contentsLost())
                bufferStrategy.show();
            else
                System.out.println("Content lost");
        } catch (Exception e) {
            System.out.println(e.getMessage());
            gameRun = false;
        }
    }

    // --------------------- INPUT ACTIONS -----------------------
    private class MouseMotionActions extends MouseAdapter {
        // ... your mouse motion actions ...
    }

    private class MouseActions extends MouseAdapter {
        // ... your mouse actions ...
    }

    private class KeyActions extends KeyAdapter {
        // ... your key actions ...
    }
    // ----------------------- END ACTIONS -----------------------

    // --------------------- FULL SCREEN METHODS -----------------
    private void initFullScreen() {
        GraphicsEnvironment ge = GraphicsEnvironment.getLocalGraphicsEnvironment();
        gd = ge.getDefaultScreenDevice();
        setResizable(false);
        setUndecorated(true);
        setIgnoreRepaint(true);
        if (!gd.isFullScreenSupported()) {
            System.out.println("Full Screen not supported!!");
            System.exit(0);
        }
        gd.setFullScreenWindow(this);
        setDisplayMode(800, 600, 32);
        FWIDTH = getBounds().width;
        FHEIGHT = getBounds().height;
        setBufferStrategy();
    }

    private void setBufferStrategy() {
        try {
            EventQueue.invokeAndWait(new Runnable() {
                public void run() {
                    createBufferStrategy(BUFFERS_NUMBER);
                }
            });
        } catch (Exception e) {
            System.out.println(e.getMessage());
        }
        try {
            Thread.sleep(500);
        } catch (InterruptedException e) {}
        bufferStrategy = getBufferStrategy();
    }

    private void setDisplayMode(int w, int h, int b) {
        if (!isDisplayModeAvailable(w, h, b)) {
            return;
        }
        DisplayMode dm = new DisplayMode(w, h, b, DisplayMode.REFRESH_RATE_UNKNOWN);
        try {
            gd.setDisplayMode(dm);
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
        try {
            Thread.sleep(1000);
        } catch (InterruptedException e) {}
    }

    private boolean isDisplayModeAvailable(int w, int h, int b) {
        DisplayMode[] modes = gd.getDisplayModes();
        for (int i = 0; i < modes.length; i++) {
            if (w == modes[i].getWidth() && h == modes[i].getHeight()
                    && b == modes[i].getBitDepth())
                return true;
        }
        return false;
    }
}

Post by: Randall Twede , Ranch Hand Dec 08, 2011 17:55:53

Stephan gave some great advice, and i don't mean to step on toes, but the idea of not extending or implementing is his opinion. not everyone agrees.

Post by: Stephan van Hulst , Saloon Keeper Dec 09, 2011 09:42:12

Yes, of course. I tend to present my points as if they are hard rules, but I hope everyone realizes they are merely (strong) suggestions, heavily influenced by my personal coding style.

[edit] To expand on the extend/implement discussion, the reason I'm discouraging it *in this particular case* is that main classes should never extend some sort of graphical component. It means your main program is forever destined to be a JFrame, even if later on something better might come along, or you would just like to make an ASCII version of your game. The display also shouldn't implement Runnable, because the display simply isn't something that should be runnable. It displays something. Conceptually it can't be run. It doesn't model a task.
Graphical components also shouldn't implement listeners. This is the most subjective point. Many people disagree. However, if you let them implement a listener, it means some other code could take your graphical component, and register it as a listener for events to *other* graphical components. That hardly seems like it makes any sense. The problem is that many programmers make choices because those choices work. The computer doesn't care. There are a lot of conceptual matters they fail to consider though. Whenever you find yourself using the extends/implements keywords, you should first ask yourself: Is my class *really* something I can consider to be of type X? For instance, Fruit has a color. So if I want to create a Car class that also has a colour, should I let it extend Fruit? The compiler won't complain, and the program will run perfectly. However, conceptually it makes no sense at all. The same is true with JPanel and ActionListener for instance. A graphical component is *not* a listener. It may work, but it doesn't make sense. The difference seems harder to see than with the obvious example of a Fruit and a Car, but (in my opinion) the issue is the same. Post by: Stephan van Hulst , Saloon Keeper Dec 09, 2011 10:31:34 Actually, I made a mistake in my design above. I made the display responsible for timing the updates to the model. The model (Maze in this case) should handle the timing itself, and notify the display when it has updated. Post by: Randall Twede , Ranch Hand Dec 09, 2011 11:04:38 i should have read the code(or at least part of it) before commenting. this is the first time i have seen a class implement runnable then create a thread to call its own run method. i still fail to see the problem with having a JPanel implement mouseListener or mouseMotionListener for example. i have a painting program that implements both to draw shapes. just my opinion though. Post by: Stephan van Hulst , Saloon Keeper Dec 09, 2011 11:31:53 Alright. 
So what happens when I take your panel and register it as a listener for my JButton? Does it start performing actions on itself? That seems weird.

Another problem is that it gives one class too much responsibility. You're not separating concerns. It decreases cohesion and makes your classes harder to maintain. I also assume that your graphical component is public. If you implement ActionListener, it will also make this part of the class public. This is completely unnecessary. No class other than the component itself cares about events. So having an anonymous or package-private class would be superior.

Post by: Stephan van Hulst , Saloon Keeper Dec 09, 2011 11:46:55

Let's take a look at Przemek's code. While I'm quite sure it works as intended (assuming some omitted code were included), it's a big slab of code that bites off more than it can chew, with respect to Przemek. The very first thing I would do to refactor that code would be to create 4 new source files and separate some of that code over some new classes.

Post by: Randall Twede , Ranch Hand Dec 09, 2011 13:20:04

i am beginning to see your point. the JPanel and the JFrame that contains it are public. the JFrame handles Item events and Action events. all the event handling code is public also (probably because they are declared that way in the two super classes). i have started a thread more or less about this here, so i will just let you help this guy.

Post by: Przemek Boryka , Ranch Hand Dec 10, 2011 06:59:32

The game runs pretty well, although I had problems with the correct implementation of (my) collision detection algorithm; that may result from the fact that I did not use any suggestions or books. Below is a link to a picture of the structure of the source files in the game: Source files structure. Here is a link to a video of game play: JarMan game play video (low quality). I do not know why, but the sound lags behind the image :/; in the game itself everything is fine! JarManPane is a JPanel (windowed) version of the same game!
One of the most important classes is World.java. In this class the following are handled:
- collision detection: JarMan (main character) - walls
- JarMan - ghosts
- ghosts - walls
- JarMan - obstacles (yellow, blue, white, ghost icon)
- creating the virtual world (paths, walls, obstacles)

There is no intelligent ghost-movement algorithm implemented. I had no time and desire ;). Sorry for my English! I am still learning.

Post by: Stephan van Hulst , Saloon Keeper Dec 10, 2011 09:58:28

That's quite impressive. The game looks really smooth, and beautiful too.
https://coderanch.com/t/560983/java/Refactoring?nonMobile=false
CC-MAIN-2017-47
refinedweb
2,779
51.34
Introduction to NumPy

The essential problem that NumPy solves is fast array processing. For example, suppose we want to create an array of 1 million random draws from a uniform distribution and compute the mean. If we did this in pure Python it would be orders of magnitude slower than C or Fortran. This is because
- Loops in Python over Python data types like lists carry significant overhead.
- C and Fortran code contains a lot of type information that can be used for optimization.
- Various optimizations can be carried out during compilation when the compiler sees the instructions as a whole.

However, for a task like the one described above, there's no need to switch back to C or Fortran. Instead, we can use NumPy, where the instructions look like this:

import numpy as np
x = np.random.uniform(0, 1, size=1000000)
x.mean()
0.4999941914418898

The operations of creating the array and computing its mean are both passed out to carefully optimized machine code compiled from C. More generally, NumPy sends operations in batches to optimized C and Fortran code. This is similar in spirit to Matlab, which provides an interface to fast Fortran routines.

A Comment on Vectorization

NumPy is great for operations that are naturally vectorized. Vectorized operations are precompiled routines that can be sent in batches, like
- matrix multiplication and other linear algebra routines
- generating a vector of random numbers
- applying a fixed transformation (e.g., sine or cosine) to an entire array

In a later lecture, we'll discuss code that isn't easy to vectorize and how such routines can also be optimized.

NumPy Arrays

The most important thing that NumPy defines is an array data type formally called a numpy.ndarray. NumPy arrays power a large proportion of the scientific Python ecosystem.
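To make the speed claim above concrete, here is a rough, machine-dependent comparison of a pure-Python loop against the vectorized call (a sketch; the exact timings will vary from run to run, only the ordering is the point):

```python
import random
import time

import numpy as np

n = 1_000_000

# Pure Python: build a list element by element, then average it
t0 = time.perf_counter()
data = [random.random() for _ in range(n)]
mean_py = sum(data) / n
t_py = time.perf_counter() - t0

# NumPy: one batched call into compiled C code
t0 = time.perf_counter()
x = np.random.uniform(0, 1, size=n)
mean_np = x.mean()
t_np = time.perf_counter() - t0

print(f"pure Python: {t_py:.4f}s (mean {mean_py:.4f})")
print(f"NumPy:       {t_np:.4f}s (mean {mean_np:.4f})")
```

Both means will be close to 0.5; on a typical machine the NumPy version is considerably faster because the loop happens inside compiled code rather than the Python interpreter.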
To create a NumPy array containing only zeros we use np.zeros a = np.zeros(3) a array([0., 0., 0.]) type(a) numpy.ndarray NumPy arrays are somewhat like native Python lists, except that - Data must be homogeneous (all elements of the same type). - These types must be one of the data types ( dtypes) provided by NumPy. The most important of these dtypes are: - float64: 64 bit floating-point number - int64: 64 bit integer - bool: 8 bit True or False There are also dtypes to represent complex numbers, unsigned integers, etc. On modern machines, the default dtype for arrays is float64 a = np.zeros(3) type(a[0]) numpy.float64 If we want to use integers we can specify as follows: a = np.zeros(3, dtype=int) type(a[0]) numpy.int64 z = np.zeros(10) Here z is a flat array with no dimension — neither row nor column vector. The dimension is recorded in the shape attribute, which is a tuple z.shape (10,) Here the shape tuple has only one element, which is the length of the array (tuples with one element end with a comma). To give it dimension, we can change the shape attribute z.shape = (10, 1) z array([[0.], [0.], [0.], [0.], [0.], [0.], [0.], [0.], [0.], [0.]]) z = np.zeros(4) z.shape = (2, 2) z array([[0., 0.], [0., 0.]]) z = np.empty(3) z array([0., 0., 0.]) The numbers you see here are garbage values. (Python allocates 3 contiguous 64 bit pieces of memory, and the existing contents of those memory slots are interpreted as float64 values) To set up a grid of evenly spaced numbers use np.linspace z = np.linspace(2, 4, 5) # From 2 to 4, with 5 elements To create an identity matrix use either np.identity or np.eye z = np.identity(2) z array([[1., 0.], [0., 1.]]) In addition, NumPy arrays can be created from Python lists, tuples, etc. 
using np.array z = np.array([10, 20]) # ndarray from Python list z array([10, 20]) type(z) numpy.ndarray z = np.array((10, 20), dtype=float) # Here 'float' is equivalent to 'np.float64' z array([10., 20.]) z = np.array([[1, 2], [3, 4]]) # 2D array from a list of lists z array([[1, 2], [3, 4]]) See also np.asarray, which performs a similar function, but does not make a distinct copy of data already in a NumPy array. na = np.linspace(10, 20, 2) na is np.asarray(na) # Does not copy NumPy arrays True na is np.array(na) # Does make a new copy --- perhaps unnecessarily False To read in the array data from a text file containing numeric data use np.loadtxt or np.genfromtxt—see the documentation for details. z = np.linspace(1, 2, 5) z array([1. , 1.25, 1.5 , 1.75, 2. ]) z[0] 1.0 z[0:2] # Two elements, starting at element 0 array([1. , 1.25]) z[-1] 2.0 For 2D arrays the index syntax is as follows: z = np.array([[1, 2], [3, 4]]) z array([[1, 2], [3, 4]]) z[0, 0] 1 z[0, 1] 2 And so on. Note that indices are still zero-based, to maintain compatibility with Python sequences. Columns and rows can be extracted as follows z[0, :] array([1, 2]) z[:, 1] array([2, 4]) NumPy arrays of integers can also be used to extract elements z = np.linspace(2, 4, 5) z array([2. , 2.5, 3. , 3.5, 4. ]) indices = np.array((0, 2, 3)) z[indices] array([2. , 3. , 3.5]) Finally, an array of dtype bool can be used to extract elements z array([2. , 2.5, 3. , 3.5, 4. ]) d = np.array([0, 1, 1, 0, 0], dtype=bool) d array([False, True, True, False, False]) z[d] array([2.5, 3. ]) We’ll see why this is useful below. An aside: all elements of an array can be set equal to one number using slice notation z = np.empty(3) z array([2. , 3. 
, 3.5]) z[:] = 42 z array([42., 42., 42.]) a = np.array((4, 3, 2, 1)) a array([4, 3, 2, 1]) a.sort() # Sorts a in place a array([1, 2, 3, 4]) a.sum() # Sum 10 a.mean() # Mean 2.5 a.max() # Max 4 a.argmax() # Returns the index of the maximal element 3 a.cumsum() # Cumulative sum of the elements of a array([ 1, 3, 6, 10]) a.cumprod() # Cumulative product of the elements of a array([ 1, 2, 6, 24]) a.var() # Variance 1.25 a.std() # Standard deviation 1.118033988749895 a.shape = (2, 2) a.T # Equivalent to a.transpose() array([[1, 3], [2, 4]]) Another method worth knowing is searchsorted(). If z is a nondecreasing array, then z.searchsorted(a) returns the index of the first element of z that is >= a z = np.linspace(2, 4, 5) z array([2. , 2.5, 3. , 3.5, 4. ]) z.searchsorted(2.2) 1 Many of the methods discussed above have equivalent functions in the NumPy namespace a = np.array((4, 3, 2, 1)) np.sum(a) 10 np.mean(a) 2.5 a = np.array([1, 2, 3, 4]) b = np.array([5, 6, 7, 8]) a + b array([ 6, 8, 10, 12]) a * b array([ 5, 12, 21, 32]) We can add a scalar to each element as follows a + 10 array([11, 12, 13, 14]) Scalar multiplication is similar a * 10 array([10, 20, 30, 40]) The two-dimensional arrays follow the same general rules A = np.ones((2, 2)) B = np.ones((2, 2)) A + B array([[2., 2.], [2., 2.]]) A + 10 array([[11., 11.], [11., 11.]]) A * B array([[1., 1.], [1., 1.]]) A = np.ones((2, 2)) B = np.ones((2, 2)) A @ B array([[2., 2.], [2., 2.]]) A = np.array((1, 2)) B = np.array((10, 20)) A @ B 50 In fact, we can use @ when one element is a Python list or tuple A = np.array(((1, 2), (3, 4))) A array([[1, 2], [3, 4]]) A @ (0, 1) array([2, 4]) Since we are post-multiplying, the tuple is treated as a column vector. 
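To pin down the column-vector remark above: when one operand of @ is one-dimensional, NumPy promotes it to a row vector on the left of the product and a column vector on the right. A quick check, using the same matrix A as above:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])

col = A @ (0, 1)            # sequence on the right: treated as a column
row = np.array([0, 1]) @ A  # 1-D array on the left: treated as a row

print(col)  # [2 4] -- the second column of A
print(row)  # [3 4] -- the second row of A
```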
a = np.array([42, 44]) a array([42, 44]) a[-1] = 0 # Change last element to 0 a array([42, 0]) Mutability leads to the following behavior (which can be shocking to MATLAB programmers…) a = np.random.randn(3) a array([-0.42368744, -0.52392363, 2.16456461]) b = a b[0] = 0.0 a array([ 0. , -0.52392363, 2.16456461]) What’s happened is that we have changed a by changing b. The name b is bound to a and becomes just another reference to the array (the Python assignment model is described in more detail later in the course). Hence, it has equal rights to make changes to that array. This is in fact the most sensible default behavior! It means that we pass around only pointers to data, rather than making copies. Making copies is expensive in terms of both speed and memory. a = np.random.randn(3) a array([ 0.48331921, -1.65614863, 0.05319921]) b = np.copy(a) b array([ 0.48331921, -1.65614863, 0.05319921]) Now b is an independent copy (called a deep copy) b[:] = 1 b array([1., 1., 1.]) a array([ 0.48331921, -1.65614863, 0.05319921]) Note that the change to b has not affected a. z = np.array([1, 2, 3]) np.sin(z) array([0.84147098, 0.90929743, 0.14112001]) This eliminates the need for explicit element-by-element loops such as n = len(z) y = np.empty(n) for i in range(n): y[i] = np.sin(z[i]) Because they act element-wise on arrays, these functions are called vectorized functions. In NumPy-speak, they are also called ufuncs, which stands for “universal functions”. As we saw above, the usual arithmetic operations ( +, *, etc.) also work element-wise, and combining these with the ufuncs gives a very large set of fast element-wise functions. z array([1, 2, 3]) (1 / np.sqrt(2 * np.pi)) * np.exp(- 0.5 * z**2) array([0.24197072, 0.05399097, 0.00443185]) Not all user-defined functions will act element-wise. 
For example, passing the function f defined below a NumPy array causes a ValueError def f(x): return 1 if x > 0 else 0 The NumPy function np.where provides a vectorized alternative: x = np.random.randn(4) x array([ 0.90652626, 0.29630732, 1.9942756 , -0.96809223]) np.where(x > 0, 1, 0) # Insert 1 if x > 0 true, otherwise 0 array([1, 1, 1, 0]) You can also use np.vectorize to vectorize a given function def f(x): return 1 if x > 0 else 0 f = np.vectorize(f) f(x) # Passing the same vector x as in the previous example array([1, 1, 1, 0]) However, this approach doesn’t always obtain the same speed as a more carefully crafted vectorized function. z = np.array([2, 3]) y = np.array([2, 3]) z == y array([ True, True]) y[0] = 5 z == y array([False, True]) z != y array([ True, False]) The situation is similar for >, <, >= and <=. We can also do comparisons against scalars z = np.linspace(0, 10, 5) z array([ 0. , 2.5, 5. , 7.5, 10. ]) z > 3 array([False, False, True, True, True]) This is particularly useful for conditional extraction b = z > 3 b array([False, False, True, True, True]) z[b] array([ 5. , 7.5, 10. ]) Of course we can—and frequently do—perform this in one step z[z > 3] array([ 5. , 7.5, 10. ]) z = np.random.randn(10000) # Generate standard normals y = np.random.binomial(10, 0.5, size=1000) # 1,000 draws from Bin(10, 0.5) y.mean() 5.029 Another commonly used subpackage is np.linalg A = np.array([[1, 2], [3, 4]]) np.linalg.det(A) # Compute the determinant -2.0000000000000004 np.linalg.inv(A) # Compute the inverse array([[-2. , 1. ], [ 1.5, -0.5]]) Much of this functionality is also available in SciPy, a collection of modules that are built on top of NumPy. We’ll cover the SciPy versions in more detail soon. For a comprehensive list of what’s available in NumPy see this documentation. 
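One more note on np.linalg before the exercises: in practice, forming an inverse just to solve a linear system is usually avoided. np.linalg.solve computes the solution of Ax = b directly via a factorization and is the standard choice. A short sketch:

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
b = np.array([1.0, 1.0])

x_inv = np.linalg.inv(A) @ b     # works, but forms the inverse explicitly
x_solve = np.linalg.solve(A, b)  # preferred: solve the system directly

print(x_solve)  # both satisfy A @ x == b up to floating-point error
```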
Exercise 1

Consider the polynomial expression

$$ p(x) = a_0 + a_1 x + a_2 x^2 + \cdots + a_N x^N = \sum_{n=0}^N a_n x^n \tag{1} $$

Earlier, you wrote a simple function p(x, coeff) to evaluate (1) without considering efficiency.

Now write a new function that does the same job, but uses NumPy arrays and array operations for its computations, rather than any form of Python loop.

(Such functionality is already implemented as np.poly1d, but for the sake of the exercise don’t use this class.)

- Hint: Use np.cumprod()

Exercise 2

Let q be a NumPy array of length n with q.sum() == 1.

Suppose that q represents a probability mass function.

We wish to generate a discrete random variable $ x $ such that $ \mathbb P\{x = i\} = q_i $.

In other words, x takes values in range(len(q)) and x = i with probability q[i].

The standard (inverse transform) algorithm is as follows:

- Divide the unit interval $ [0, 1] $ into $ n $ subintervals $ I_0, I_1, \ldots, I_{n-1} $ such that the length of $ I_i $ is $ q_i $.
- Draw a uniform random variable $ U $ on $ [0, 1] $ and return the $ i $ such that $ U \in I_i $.

The probability of drawing $ i $ is the length of $ I_i $, which is equal to $ q_i $.

We can implement the algorithm as follows

from random import uniform

def sample(q):
    a = 0.0
    U = uniform(0, 1)
    for i in range(len(q)):
        if a < U <= a + q[i]:
            return i
        a = a + q[i]

If you can’t see how this works, try thinking through the flow for a simple example, such as q = [0.25, 0.75]. It helps to sketch the intervals on paper.

Your exercise is to speed it up using NumPy, avoiding explicit loops.

- Hint: Use np.searchsorted and np.cumsum

If you can, implement the functionality as a class called DiscreteRV, where

- the data for an instance of the class is the vector of probabilities q
- the class has a draw() method, which returns one draw according to the algorithm described above

If you can, write the method so that draw(k) returns k draws from q.
Exercise 3

Recall our earlier discussion of the empirical cumulative distribution function.

Your task is to

- Make the __call__ method more efficient using NumPy.
- Add a method that plots the ECDF over $ [a, b] $, where $ a $ and $ b $ are method parameters.

Solution to Exercise 1

import matplotlib.pyplot as plt
%matplotlib inline

def p(x, coef):
    X = np.ones_like(coef)
    X[1:] = x
    y = np.cumprod(X)   # y = [1, x, x**2,...]
    return coef @ y

Let’s test it

x = 2
coef = np.linspace(2, 4, 3)
print(coef)
print(p(x, coef))

# For comparison
q = np.poly1d(np.flip(coef))
print(q(x))

[2. 3. 4.]
24.0
24.0

Solution to Exercise 2

from numpy import cumsum
from numpy.random import uniform

class DiscreteRV:
    """
    Generates an array of draws from a discrete random variable with
    vector of probabilities given by q.
    """

    def __init__(self, q):
        """
        The argument q is a NumPy array, or array like, nonnegative and sums to 1
        """
        self.q = q
        self.Q = cumsum(q)

    def draw(self, k=1):
        """
        Returns k draws from q. For each such draw, the value i is returned
        with probability q[i].
        """
        return self.Q.searchsorted(uniform(0, 1, size=k))

The logic is not obvious, but if you take your time and read it slowly, you will understand.

There is a problem here, however. Suppose that q is altered after an instance of DiscreteRV is created, for example by

q = (0.1, 0.9)
d = DiscreteRV(q)
d.q = (0.5, 0.5)

The problem is that Q does not change accordingly, and Q is the data used in the draw method.

To deal with this, one option is to compute Q every time the draw method is called. But this is inefficient relative to computing Q once-off.

A better option is to use descriptors. A solution from the quantecon library using descriptors that behaves as we desire can be found here.

Solution to Exercise 3

"""
Modifies ecdf.py from QuantEcon to add in a plot method
"""

class ECDF:
    """
    One-dimensional empirical distribution function given a vector of observations.
    Parameters
    ----------
    observations : array_like
        An array of observations

    Attributes
    ----------
    observations : array_like
        An array of observations
    """

    def __init__(self, observations):
        self.observations = np.asarray(observations)

    def __call__(self, x):
        """
        Evaluates the ecdf at x

        Parameters
        ----------
        x : scalar(float)
            The x at which the ecdf is evaluated

        Returns
        -------
        scalar(float)
            Fraction of the sample less than x
        """
        return np.mean(self.observations <= x)

    def plot(self, a=None, b=None):
        """
        Plot the ecdf on the interval [a, b].

        Parameters
        ----------
        a : scalar(float), optional(default=None)
            Lower endpoint of the plot interval
        b : scalar(float), optional(default=None)
            Upper endpoint of the plot interval
        """
        # === choose reasonable interval if [a, b] not specified === #
        if a is None:
            a = self.observations.min() - self.observations.std()
        if b is None:
            b = self.observations.max() + self.observations.std()

        # === generate plot === #
        x_vals = np.linspace(a, b, num=100)
        f = np.vectorize(self.__call__)
        plt.plot(x_vals, f(x_vals))
        plt.show()

Here’s an example of usage

X = np.random.randn(1000)
F = ECDF(X)
F.plot()
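As a footnote to the mutation problem raised in the Exercise 2 discussion: a property (Python's simplest built-in descriptor) is one way to keep Q synchronized with q. The class below is a sketch of that idea under my own naming; it is not the quantecon implementation:

```python
import numpy as np

class SyncedDiscreteRV:
    """Like DiscreteRV, but Q is recomputed whenever q is reassigned."""

    def __init__(self, q):
        self.q = q  # routed through the property setter below

    @property
    def q(self):
        return self._q

    @q.setter
    def q(self, val):
        self._q = np.asarray(val)
        self._Q = np.cumsum(self._q)  # stays in sync automatically

    def draw(self, k=1):
        return self._Q.searchsorted(np.random.uniform(0, 1, size=k))

d = SyncedDiscreteRV((0.1, 0.9))
d.q = (0.5, 0.5)  # _Q now reflects the new probabilities
```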
https://python.quantecon.org/numpy.html
It’s my first time to try Clarifai and I just got this issue.

Input code:

from clarifai.rest import ClarifaiApp
app = ClarifaiApp()
# Predict with general model:
app.tag_urls([''])

Output:

Traceback (most recent call last):
  File "C:\Python27\((Project))\clarifai-python-master\test\clarifai.py", line 1, in <module>
    from clarifai.rest import ClarifaiApp
  File "C:\Python27\((Project))\clarifai-python-master\test\clarifai.py", line 1, in <module>
    from clarifai.rest import ClarifaiApp
ImportError: No module named rest

Any help plz?

Hi @FunkyShadow! Did you do this first? pip install clarifai==2.0.22

I already have installed it, and tried to uninstall and install it again. Not working...

Hmm, ok. What OS are you using here?

I’m using Windows 8.1 64-bit, Python 2.7 32-bit.

Hi @FunkyShadow - can you try doing a new pip install clarifai? We introduced a new fix recently and I think it’ll fix this.

Hey there, I’m also running into the same issue. I’m using clarifai==2.0.24:

from clarifai.rest import ClarifaiApp, Image as ClImage
ModuleNotFoundError: No module named 'clarifai.rest'; 'clarifai' is not a package

I’m using Windows 10 64-bit, Python 3.6 64-bit.

I guess I should also add: import clarifai is working, but import clarifai.rest spits out the error — still not working as well.

Hi @hli2020 - can you paste your code here so that we can troubleshoot further? Which OS and version of Python are you using?

Hi @jared, here is the code:

from clarifai.rest import ClarifaiApp
app = ClarifaiApp()

OS: Linux 14.04 or Mac, Python 3.6

Error: ModuleNotFoundError: No module named 'clarifai.rest'; 'clarifai' is not a package

I installed as:

pip install clarifai

Tried the solutions listed above and yet failed still. Thanks.

Hmm - are you able to test this with Python 2.7? If so, try running this command: sudo python -m pip install clarifai

Did you find the solution for the below error?

from clarifai.rest import ClarifaiApp
ImportError: No module named rest

We figured this one out via chat, but for future reference it was because the test file was named clarifai.py and it was conflicting with our own clarifai.py
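For future readers who hit the same ImportError: the quickest diagnostic for this kind of shadowing is to ask Python which file it actually imported. The sketch below uses the standard-library json module as a stand-in for any package (such as clarifai) that a local script of the same name might shadow:

```python
import sys
import json  # stand-in for the package you suspect is shadowed

# If this prints a path to your own script (e.g. .../test/json.py)
# instead of the installed package, rename that script: a local
# file named after a package takes precedence on sys.path.
print(json.__file__)

# The script's own directory sits at the front of the search path,
# which is why the local file wins:
print(sys.path[0])
```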
http://community.clarifai.com/t/no-module-named-rest/619/12
development platform for Windows Phone 7. You can get it here. From there you also need to download the Windows Phone 7 developer toolkit, which plugs into Visual Studio. I am now set to create the first Windows Phone application. I will start with building a simple Stopwatch application. This type of application is already readily available in many forms in the Windows Marketplace – but as I said, the goal for this post is to just get started with a simple application. I will call this application a Chronometer – maybe I will add more functionality later.

Creating a new project, leveraging the template

Creating the shell of the application is real easy – simply create a new project in Visual Studio and choose to create a Windows Phone Application – that’s all! Check out some screen shots below. You can just build and run the application. The application comes up in the simulator, and is pretty functional already – see the screens below! The one on the left gives you a default view of the application we are building – it has a default title for the application, and a default name for the starting page of the application. The simulator also includes fully functional buttons that you’d have on the phone. Clicking on the "windows" button shows you the start screen of the phone, complete with a tile for Internet Explorer (and that tile/app is fully functional too). Clicking on the navigation button next to the IE tile, you see the third screen which shows the applications already on the phone – and here you see that our chronometer application has also shown up (we should change the tile of the application here – it has simply picked up the title of the solution we have created in Visual Studio). Finally, you can click on the search button on the phone, and it brings up the Bing search application – once again, fully functional including the current day’s Bing picture and fully functional voice search capability.
I love the Windows Phone 7 simulator in Visual Studio – it not only makes building your application really easy, but allows you to get a feel for it in an actual environment by giving a pretty functional feel of the Windows Phone 7 device!

Few easy modifications to begin with – change the application name and titles

Getting back to Visual Studio, you can see the XAML representation for the UI and the actual view displayed in the designer mode. You can simply click on the controls, check out their properties on the bottom-right of the screen, and change the texts. For example, I have changed the title of the application and the title of the main page below. I will build a stopwatch application for the main page.

Let us also change the name of the application that shows up in the list of applications on the phone. Click on the name of the solution in the solutions explorer, and bring up the properties page. Click on the applications tab in the properties and change both the title in the “deployment” as well as the “Tile” options property sets – see below. Simply build and run the application again, and you can see all of the changes have been effected, including the tile of the application in the tile view (I simply double clicked on the application in the list on the left screen and pinned it to the start page). It is worth reflecting on how easy Visual Studio makes developing Windows Phone applications too! With just a few very simple steps we have created the structure of the application.

Creating the Stopwatch display

Let’s now get to the core of our application. I will first add the main display for the Stopwatch page. I intend it to be in the format of “DD:HH:MM:SS:mmm”, displaying days, hours, minutes, and milliseconds. Back in Visual Studio, bring down the toolbox of controls that you can choose from (via the View.Toolbox menu) – see below. You have a rich set of controls to choose from.
I am choosing the “Textbox” control for the main display – it allows me to display rich text on the page. You simply double click the control from the Toolbox to get it onto the application page. Next, simply change the properties (I have changed the display text to be 00:00:00:000 by default, and have changed the font size to 64), resize the control to the desired size, and drag and position it to the location you want. The application display now looks as shown below.

I now would like to add a start/pause button to finalize my first page for my simple Stopwatch application. For this, I simply create two icons – for play (or start) and pause – and create two JPG files for them. I used Windows Paint to create these two icons – each being 60x60 pixels in size. I then add those two files to my solution via the “Project.Add Existing Item…” menu. Next, I add a button from the Toolbox to the application screen, position it and size it appropriately. At this point this is a simple button, and the page and the backing XAML look as follows.

However, instead of the Button being a simple text button, I want the button to use an image – say the “play” image that I have created above. To do this, I simply add the “Image” tag to the Button definition directly in the XAML file shown above. The reworked XAML looks as follows – notice the <Image /> tag in the XML inserted in the definition of the <Button /> tag, and also that the play icon now shows up in the Button – we have transformed it into an image-based button. With this, we have really completed our application layout! Now we need to add a bit of code for the Stopwatch logic.

Adding the code for the Stopwatch

As you can imagine, all of the action will be initiated by the clicking of the play button. To create an event handler for the button, simply double click on the button in the display view for the XAML for the main page. It creates the following code in the backing code file.
Now we will simply add the code in the event handler framework that has been created by Visual Studio. My focus here is not to write the most efficient code; I will rather focus on simplicity.

Adding class references

We will be using a few special classes from the System namespace – namely Media for updating the image of the button, Threading to get a timer going, and Diagnostics to use the Stopwatch object built into .NET. I start off by adding these references in the MainPage.xaml.cs file.

A few member variables

I next introduce a few member variables. These will store the day/hour/minute/second/millisecond values that I will display, the state of the stopwatch (started or paused), a Stopwatch object, and a Timer object. It also creates two BitmapImage objects for the start and pause images that the button will display.

The start/pause click logic

Now, we move onto the code which executes when the play button is clicked. Note, this button is going to toggle between “start” and “pause” modes. When in the stopped state (the initial state), it toggles the state, sets the image in the button to the pause button, creates a new Timer object, and also starts the stopwatch timer in code. The Timer object is created with a callback method (updateDisplay) which we will discuss shortly. It also sets up the callback to be called with a reference to our TimerDisplay TextBlock so that we can update it. The penultimate parameter of 0 starts the timer straightaway, and the last parameter of 100 sets it up to call the callback function at intervals of 100 milliseconds. The logic is similar and simple when the stopped state is false – it changes the image of the button to play, disposes the timer, and stops the stopwatch. Note that in our application, the stopwatch can be resumed from the point of pausing by clicking play again. The code is shown below.
The Timer callback

Finally, we are ready to plug in the last piece of code – the callback function – which will update the application’s display. The code is shown below. We simply calculate the elapsed time from the Stopwatch object and convert it into the appropriate formatted display string for the application.

The application is ready!

With that, our app is ready! Simply build and run the application to check it out on the Simulator. I show four screens below – the application listed in the list of applications, the starting screen, once the start button is clicked, and once the pause button is clicked.

What’s next?

That was fun – and not bad, for a Saturday afternoon’s worth of coding. Visual Studio really does make building Windows Phone 7 applications quite easy. So, what’s next? There are many different ways I could extend this work – I could add more functionality to the stopwatch, or perhaps I could add another page to the application and add another functionality to the chronometer (maybe a count down timer?). A demonstration of an application with two pages would be a good learning! And then of course there is the task of actually deploying the application to the app store. Perhaps I could also go into testing of phone applications. So, there are lots of ways I could take this – will wait for another lazy afternoon. Cheers!

I am getting this error: 'Stopwatch' is a 'namespace' but is used like a 'type'. I followed the exact same thing here but am still getting this error!

Please change your Application namespace to MyStopwatch – this should solve the issue.

Error 1: The name 'timerdisplay' does not exist in the current context c:\users\sakthi\documents\visual studio 2010\Projects\WindowsPhoneApplication9\WindowsPhoneApplication9\MainPage.xaml.cs 45 73 WindowsPhoneApplication9
Right now I am building an Android application according to the follows: It uses an SDK with Adobe Flash Builder and there are code snippets that handle the physical buttons and the rotation features of the mobile devices. If you are planning to build an Android application have a look at it. There's a Logical problem in the app! The 1st second loads earlier than usual and 1 minute ticks on 30th second Looks like i found the Logical error: The 1st Second is fired on 500 ms time and then it goes normally. Similarly, The 1st Minute is fired on 30th sec. and the rest incrementation is done after a minute's intervals So the very 1st instance is being executed in 1/2 of the time and rest is carried normally and Here's the Cure of Logical error: Actually what we are using is the '/' for performing a Floating point division that returns a Double result. are then assigning it to an long round the value instead of truncate, So 500/1000 becomes 1 I recommend you to use ' ' for performing the integer division like ss = ms 1000; and same with others or else there's another way you can just use Stopwatch.elapsed and can use a Timespan with it like here's the example in vb that can be used within the updateDisplay method Dim e As New TimeSpan e = stopwatch.Elapsed MicroSeconds.Text = e.Milliseconds.ToString("00") Seconds.Text = e.Seconds.ToString("00") Minutes.Text = e.Minutes.ToString("00") Hours.Text = e.Hours.ToString("00")
https://blogs.msdn.microsoft.com/amit_chatterjee/2011/03/26/building-a-windows-phone-7-application/
Many programming languages get along just fine without a return statement. In Pascal, for example, the return value is assigned to a dummy variable whose name equals the name of the function.

function foo(arg: integer): integer;
begin
  (* compute return value *)
  foo := retval;
  (* maybe some more cleanup *)
end;

In Scheme, the body of a function is a sequence of expressions, and the value of the last expression is returned as the result of the function call:

(define foo
  (lambda (arg)
    expr1
    expr2
    ....
    retvalExpr))

And, of course, in assembly, the return value is simply deposited in a register:

movl retval, %eax

In Pascal and Scheme, you return to the caller when you reach the end of the function. There is no equivalent to the “quick and dirty”

if (somethingAbnormal) return null;
// more work in the normal case...

This shows that there are really two aspects of returning: yielding a value, and transferring control out of the function.

A closure is a block of code, packaged up for execution at a later point. When it executes, all the references to the surrounding code should just work as if the code had executed in the defining scope. Here is a typical example, using the BGGA 0.5 syntax (which, like all proposal syntax, is highly subject to change):

public static void main(String[] args) {
    JFrame frame = new JFrame();
    JButton button = new JButton("Click me!");
    frame.add(button);
    int counter = 0;
    button.addActionListener({ ActionEvent e =>
        counter++;
        frame.setTitle("Clicked " + counter + " times.");
    });
    frame.pack();
    frame.setVisible(true);
}

When the button is clicked, the closure gets called, the counter variable in the enclosing scope is updated, and the frame title is set to reflect the click count. Wait, there is a problem here. By the time the button is clicked, main has terminated and the local variable counter is dead and gone. Actually, though, the closure will capture a reference to a new int[1] containing the counter, and of course, a reference to the JFrame object.
All this can be done with any of the various closure proposals, by gussying up inner classes with the ability to capture non-final locals. Unlike some other closures proposals, BGGA goes further and says that the closures also need to capture the meaning of execution transfer statements, i.e. break, continue, and return. (What about throw? That's never statically typed, so we don't expect to capture it.)

At first glance, this seems like an odd thing to do. When the action listener executes a break statement, surely we don't want to go back in time and revive the main method (at least not until the proposal to add continuations to Java :-))

But it comes in handy for another use of closures: programmer-provided control statements. Let's say I want to provide an easy way of iterating over a matrix:

for each(int i, int j: matrix) {
    // look, ma, no matrix[i].length!
    if (matrix[i][j] == 0) continue;
    . . .
}

This actually means: Pass matrix and the closure

{ int i, int j =>
    if (matrix[i][j] == 0) continue;
    . . .
}

to the each method:

public static void each(int[][] a, { int, int => void } block) {
    int i = 0;
    int j = 0;
    for (;
         i < a.length;
         j = (j == a[i].length ? 0 : j + 1),
         i = (i + ((j == 0) ? 1 : 0))) {
        block.invoke(i, j);
    }
}

Of course, now continue should mean the right thing: continue after the block, with the next iteration of the for statement. (Sorry about the tortured logic in the for update; one must use an assignment, increment, method call, or new expression. I suppose I could use a closure invocation { => if (j < a[i].length) j++; else { i++; j = 0; }}.invoke()...)

Back to the topic of returns. A closure returns a value (if it has a result type). For example, { int x, int y => Math.max(x, y) } returns an integer, the max of its parameters. But if a closure contains a return statement, that means to return from the enclosing block.
For example, { int x, int y => return Math.max(x, y); } is a closure with return type void that, when invoked, causes its caller to return the max of the parameters (or, presumably, if the caller can't return an int, throw an exception). Several commentators to my earlier blog point to this issue as the Achilles heel of the BGGA proposal. More unhappily, the closure { int x, int y => Math.max(x, y); } computes the max of its parameters, discards it, and returns no value. When I heard about that, my gut reaction was fear...the fear of students queuing up for my office hours. Ultimately, the culprit is the dual nature of return: yielding a value, and jumping to the end of the method code. In Pascal or Scheme, none of this is an issue. These languages have no return (or break or continue) to worry about. Let's try to throw some syntax at this. A BGGA closure body is a sequence of statements followed by an optional expression. Maybe that's too subtle. Let's make the return expression more prominent. Something like { int x, int y => int : stats => Math.max(x, y) } I already see the line outside my office getting shorter. (I suppose it would also allow early returns from a closure, but I don't want to go there...That's what got us in trouble in the first place.) But I have to agree with Stephen Colebourne that there are two entirely separate use cases here. When one uses a closure for a control abstraction, the return type must be void since the closure denotes a statement. And it is pretty clear that return means to return from the enclosing scope. When one uses a closure as a callback, to be invoked at a much later time, does one ever want to capture the enclosing semantics of return? I don't think so, but I will find out soon enough if I am wrong... It would make sense to differentiate these use cases. In a control abstraction, the programmer doesn't provide an explicit closure, but the compiler puts together a parameter list and a block. 
Conversely, when passing a closure to a callback, the programmer does the { ... => ... } thing. So, we can tell them apart. In the first case, the block can contain return, break, continue, labeled break, etc. Pile it on! In the second case, none of them should be allowed. It's just a syntax error. That should take care of the line outside my office. Students can wrestle with the compiler—what gets them is code that compiles and does the wrong thing.

This is almost the same as the RestrictedFunction interface, except that you can capture non-final locals. It is also somewhat related to the distinction in BGGA control abstractions. The for control abstractions allow break and continue, whereas other control abstractions don't. If I understand the FCM/JCA proposal correctly, they have essentially the same solution. But the meaning of return changes from one use case to the other. I am not sure that's such a good idea. But again, it's just syntax. I am a total amateur at this, of course, as I and the world are sure to find out from the blog comments in a few hours. But it seems to me that after the tweaks that are sure to come, BGGA will differ very little from FCM/JCA, except for the issue of method literals. (These may be nice to have for other purposes. I'll warm up to them if someone can show me how they solve my pet peeve: property boilerplate.)

FCM/JCA distinguishes the two use cases based on whether or not the "control invocation" syntax is used to invoke the method. This raises a few issues. First, you can only pass a single method that has the synchronous semantics (never two or more). Second, you (Cay) expressed a personal preference last week that the control invocation syntax should be usable for the asynchronous use cases, which undermines that syntax being used to make the distinction. Third, the idea that you might want to "return" from a closure early surely is independent of whether or not the closure's result value is of type void or which invocation syntax is used, but in FCM/JCA those are indeed linked. Nevertheless, I support the idea that it ought to be syntactically easier to yield a closure's result, early or otherwise. Having any component of a closure change meaning when you switch between the two invocation forms is a severe constraint on the types of refactorings you can easily do using closures, and will probably be a disaster for code readability (but a boon for authors of puzzle books). We're going to avoid that if possible. Google for "Tennent's Correspondence Principle" for a discussion.
Third, the idea that you might want to "return" from a closure early surely is independent of whether or not the closure's result value is of type void or which invocation syntax is used, but in FCM/JCA those are indeed linked. Nevertheless, I support the idea that it ought to be syntactically easier to yield a closure's result, early or otherwise.

Having the meaning of any component of a closure change when you switch between the two invocation forms is a severe constraint on the types of refactorings you can easily do using closures, and will probably be a disaster for code readability (but a boon for authors of puzzle books). We're going to avoid that if possible. Google for "Tennent's Correspondence Principle" for a discussion.

Posted by: gafter on April 16, 2007 at 02:40 PM

A good writeup, thanks. As mentioned, FCM/JCA distinguishes between callback-style and control-invocation-style.

FCM is callback-style. A block of code can be assigned to a variable, or passed directly to a higher order method. We specifically name this feature an 'inner method' rather than a closure. Its semantics wrt return are those of an inner class method. This avoids the dreaded non-local return exception. We believe this works well and is familiar to the many existing Java developers out there, and is a natural semantic for Java.

JCA is control-invocation-style, and an optional extension to FCM. Here the block of code appears visually to be part of a built-in keyword, like foreach. With JCA, return will return from the enclosing method. There is no way to directly assign the block to a variable within the application code, and thus the opportunity for the non-local return exception is reduced. Again, the semantic for return is just logical in the context of Java, as it looks like a keyword, not an inner class.

As you (Cay) note, this is two different semantics for return. And yet it works well in the context of Java where we have inner classes.
Personally, I find the notion of 'RestrictedFunction' to be rather a nasty hack. Neal mentions that it is not possible to directly pass two closures to an API which both want to return from the enclosing method. My preferred answer is "Tough". There are possible solutions, but they make the overall result less safe.

Neal also suggests that the use of return is separate from the calling syntax. This is a view based on implementing closures in Smalltalk or Ruby or Scala, where the language is designed around closures. I don't believe that Java is in that position - we need to respect what syntax and semantics work in a Java context.

Oh, and method references (rather than method literals) won't help with boilerplate properties, but they will help with bean binding ;-)

Posted by: scolebourne on April 16, 2007 at 04:05 PM

Neal: I did indeed "express a personal preference" last week, out of complete ignorance. The question, if I recall correctly, was whether the audience preferred

void launch(Executor ex) { ex.execute({ => doSomething(); }); }

or

void launch(Executor ex) { ex.execute() { doSomething(); }; }

At the time, I looked at the cosmetics and found the second piece of code comfortingly familiar. But I can change my mind in the face of new evidence :-) I am now thinking that the control invocation syntax should only be used when a method has been specifically tagged to accept it...unless, of course, I see new evidence. Maybe you have an example where the same closure (containing a return) should plausibly be usable in both an asynchronous invocation and a control invocation?

Posted by: cayhorstmann on April 16, 2007 at 04:34 PM

Cay, do you have the freedom to choose another language to teach in order to make the line outside your office shorter? In my experience, you don't want the line to be shorter anyway. You want it to be a fairly constant length throughout, rather than suddenly grow before deadlines.
That's why I set an assignment each week (or every two or three weeks), but then my students are on a partially-technical course anyway, so they need that hand-holding.

I'd like us to consider multi-piece method names again in the context of BGGA and JCA. For example, given a Maybe<String>, where I've stolen Maybe from Haskell, you could do this:

ifJust(String string: maybeString) { return string.toLowerCase(); } else { throw new SomeException(); }

It wouldn't need to actually be the word 'else'. 'or', or 'otherwise' would be fine.

Posted by: ricky_clarkson on April 16, 2007 at 04:54 PM

"Maybe you have an example where the same closure (containing a return) should plausibly be usable in both an asynchronous invocation and a control invocation?"

Cay, I'm not sure if this is what you mean, but:

SwingUtilities.invokeAndWait() { textField.flibble(); return textField.getText(); }

or a little pointless race:

ConcurrencyStuff.fork() { int a=0; for (int a=0;a

Posted by: ricky_clarkson on April 16, 2007 at 05:01 PM

I'll try that code again.

ConcurrencyStuff.fork() { for (int a=0;a<1000;a++); return "first one won"; } for (int a=1;a<100000000;a*=2); return "second one won";

Posted by: ricky_clarkson on April 16, 2007 at 05:03 PM

Ricky: I'll bite on the Swing example. invokeAndWait is synchronous. In your case, there is little value in returning textField.getText(). That value can equally well be obtained in the enclosing scope.

SwingUtilities.invokeAndWait({ => textField.flibble(); })
return textField.getText();

Moreover, invokeAndWait is also not something one does a lot. Usually, you use invokeLater, where clearly you do not want to return a value into a method that quite likely has long exited. The library designers could even support this distinction, by tagging an invokeAndWait method as suitable for receiving a closure with return.
Posted by: cayhorstmann on April 16, 2007 at 06:23 PM

Cay, if you say that SwingUtilities.invokeAndWait() would return the same value inside and outside the method just because it is synchronous, you do not fully understand EDT threading and why SwingUtilities' invokeAndWait and invokeLater exist...

Posted by: mikaelgrev on April 17, 2007 at 12:44 AM

Cay said: "..." BGGA requires an error in these contexts, due to the fact that the received interface will extend the new marker interface RestrictedClosure. This is independent of which syntax is used in the invocation. My personal preference is that the control invocation syntax not be allowed when the received interface extends RestrictedClosure. That was the point of my question about which syntax you prefer for the asynchronous case; it seems most people disagree with me and think the control invocation syntax is fine.

Posted by: gafter on April 17, 2007 at 02:19 AM

Mikael: You are right. Since the Runnable executes on the event dispatch thread, it is utterly pointless to have a non-local return since it would result in an UnmatchedNonlocalTransfer exception. That further strengthens my argument that non-local return is not usually what you want outside of control invocations.

Neal: Maybe most people disagree with you on the preferred syntax because they don't fully understand the implications? I certainly didn't when you asked the question.

NB. I am a bit confused about the RestrictedClosure interface. Why are you restricting non-final locals?

Posted by: cayhorstmann on April 17, 2007 at 06:44 AM

Today in Java, there's no way to return from a method from within an expression. That's a big part of why I think the ability to do so with BGGA nested closures will be confusing. I side with FCM/JCA on this. As for multiple synchronous closures, I think this is an unlikely use case. I'd rather make the common cases clearer and easier.
That said, I _still_ suggest expression methods as a way to solve simple use cases for this. See ECMAScript 4's plan, for example. In Java, expressions can't contain return, break, or continue, so that subject goes away.

I also don't think that people are likely to be switching back and forth between different closure syntaxes. People are likely to code with a use case in mind and go that direction. If you switch syntax, it's probably also because you are switching what you are doing to start with.

Posted by: tompalmer on April 17, 2007 at 09:46 AM

By the way, invokeAndWait() can work great with non-local return. What you'd get is an InvocationTargetException which the helper method can easily unwrap into the exception that automatically becomes a return. No big deal.

Posted by: tompalmer on April 17, 2007 at 09:49 AM

Concerning non-final locals, I have yet to see how they differ meaningfully from fields in final locals (except the annoyance level). Non-final for either synchronous or asynchronous (and for concurrent or non-concurrent) is just fine given the current status quo. The fewer annoyances on this point, the better, I say (unless you really ask for them badly with compiler switches or a separate analysis tool maybe).

Posted by: tompalmer on April 17, 2007 at 12:54 PM
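As background to the invokeAndWait exchange above: in today's Java, Runnable.run() returns void and its return is strictly local, so a value computed on the event dispatch thread has to be smuggled back to the caller through a container. A minimal sketch (the class name and the AtomicReference choice are mine):

```java
import java.lang.reflect.InvocationTargetException;
import java.util.concurrent.atomic.AtomicReference;
import javax.swing.SwingUtilities;

public class EdtResult {
    // Runs a computation on the event dispatch thread and hands the result
    // back to the calling thread. invokeAndWait blocks until run() finishes,
    // so result.get() is safe afterwards.
    static String computeOnEdt() throws InterruptedException, InvocationTargetException {
        final AtomicReference<String> result = new AtomicReference<String>();
        SwingUtilities.invokeAndWait(new Runnable() {
            public void run() {
                // imagine textField.getText() here, or any other EDT-only work
                result.set("computed on " + Thread.currentThread().getName());
            }
        });
        return result.get(); // available only after invokeAndWait returns
    }
}
```

This is the boilerplate that a closure with a result value would remove; a non-local return adds nothing here, which is the point Cay and Mikael are circling around.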
Also, since separation of concerns is a big deal these days, note that the creation of the button and placement on the UI is now separated from its functionality. Oh, and if we want to dynamically change the button's functionality, it is as simple as setting a new Action, which replaces the previous one (without messing up other ActionListeners). By making them proper classes I can now organise my actions by using the packaging mechanism. It is all good. It is so good in fact (for such little effort), that anyone using anonymous inner action listeners really needs to step up and provide a justification for why they would do that when there is a clearly superior alternative.

public static void main(String[] args) {
    JFrame frame = new JFrame();
    JButton button = new JButton(new FrameUpdatingAction("Click me!", frame));
    frame.add(button);
    frame.pack();
    frame.setVisible(true);
}

class FrameUpdatingAction extends AbstractAction {
    protected JFrame frame;
    protected int counter;

    FrameUpdatingAction(String name, JFrame f) {
        super(name);
        frame = f;
    }

    public void actionPerformed(ActionEvent e) {
        counter++;
        frame.setTitle("Clicked " + counter + " times.");
    }
}

Posted by: rickcarson on April 17, 2007 at 05:26 PM

Rick: I totally agree with you that extending AbstractAction is much smarter than ActionListener. My code was just a toy example to make a point about closures.

Posted by: cayhorstmann on April 17, 2007 at 08:16 PM

@Cay, You make some very good points, not least because I agree with them :) In my post Comparing Inner Class/Closure Proposals I too used the example of a return with and without a keyword and with and without a ; as a problem with BGGA. In my own proposal, C3S, I suggest that named non-local returns should be used. Your example (in BGGA):

int someMethod() { ... x = someOtherMethod( { int x, int y => return Math.max(x, y); } ); ... }

That returns the max from someMethod, not someOtherMethod. In my C3S syntax you would do:

int someMethod() {
    ...
    x = someOtherMethod( method { x, y => someMethod.return Math.max(x, y); } );
    ...
}

The trick is to name the method to be returned from if it isn't the innermost method. This seems to solve all the issues and is more flexible than the FCM proposal, and it isn't a special case (only inside certain structures); I propose that an inner class could use a non-local return. Once you have short syntax for any inner class you really don't need a separate control syntax.

In addition to short inner class syntax I propose optionally: dropping return if it isn't ambiguous, dropping () if it isn't ambiguous, and dropping ; before }. Therefore the control example of a matrix might be:

each matrix, method ( i, j ) { if (matrix[i][j] == 0) { return } . . . };

Posted by: hlovatt on April 17, 2007 at 11:19 PM

Cay, I am sorry but I still don't buy this closure frenzy. Personally, even though I would prefer to use Action as quoted in the previous post, to call an event handler I simply use java.beans.EventHandler (existing since 1.4!!!). This means the code would look like

public class Foo extends JFrame {
    int counter = 0;

    public Foo() {
        JButton button = new JButton("Click me!");
        this.add(button);
        button.addActionListener(EventHandler.create(ActionListener.class, this, "doStuff"));
    }

    public static void main(String[] args) {
        Foo foo = new Foo();
        foo.pack();
        foo.setVisible(true);
    }

    public void doStuff() {
        // get the data from the context and do what it is supposed to do
    }
}

Isn't this much more simple? Plus built with existing features! I ask all the closure lobbyists to provide real-world useful examples that show a noticeable improvement and a major advantage of introducing a "closure" (improper name IMHO) feature into the Java language. To me, with all the functionality already existing in Java - anonymous classes, inner classes, event handlers, proxies, genericity... - there is, at this time, no need for closures (until I'm shown a really useful example).
If we need something, it is either a runtime-aware genericity (something you can work with dynamically, getting the real parametric type at runtime to do automatic/generic tasks based on it: display, edit, ...) or AOP (EJB3 introduced early AOP to Java; we need a platform solution that will unify this from client side to server side).

Posted by: bjb on April 17, 2007 at 11:33 PM

Oops typo, sorry. Should have said:

x = someOtherMethod( method( x, y ) { someMethod.return Math.max( x, y ); } );

Which if you choose to drop superfluous () and ; then you could write:

x = someOtherMethod method( x, y ) { someMethod.return Math.max x, y };

Posted by: hlovatt on April 18, 2007 at 12:03 AM

@bjb & others: You are right - the new inner class proposals, including my own C3S, add little other than syntactic sugar. Compared to some of the other proposals I think one of the strengths of C3S is a simple mechanical translation to existing code. So your example with minimal changes would be:

public class Foo extends JFrame {
    private int counter = 0;

    public Foo() {
        final JButton button = new JButton( "Click me!" );
        add( button );
        button.addActionListener( method( notUsed ) { doStuff() } );
    }

    public static void main( final String[] notUsed ) {
        final Foo foo = new Foo();
        foo.pack();
        foo.setVisible( true );
    }

    public void doStuff() {
        // get the data from the context and do what it is supposed to do
    }
}

With more changes (dropping () and ; and using method more):

public new {
    final button = JButton.new "Click me!";
    add button;
    button.addActionListener method( notUsed ) { doStuff }
}

public static method void main( final String[] notUsed ) {
    final foo = Foo.new;
    foo.pack;
    foo.setVisible true
}

public method void doStuff {
    // get the data from the context and do what it is supposed to do
}

Posted by: hlovatt on April 18, 2007 at 01:50 AM

The truth is that closures are much more attractive as custom iteration and as arguments of custom callbacks that do something extra than as a replacement for the Swing action structure, since that is already at the point of being more useful than a callback ever could be. I'd like the action listeners to be made private API, but that is another story.

Posted by: paulofaria on April 18, 2007 at 03:26 AM

Concerning C3S, I love concise language, and I love Ruby. But changing the Java language dramatically is a bad idea in my opinion. C3S would be better planned out as a different language, and I doubt I'm alone in this opinion.

Concerning Swing events, I agree that Actions are probably better than spaghetti closures. But I also agree with others that there are plenty more use cases for closures than that. (Like almost nonstop use cases in everyday code.) I also strongly dislike the use of reflection such as used by EventHandler. Static usage analysis is nice, and this makes a mess of that.
As an experiment, here's an attempt at using a mock CSS/XBL-ish Swing utility library for Java 7 closures and properties:

styler.addRule(Style style: ".stuff") {
    style.onAction(#{doStuff()});
    style.enabled = false;
    style.fontWeight = BOLD;
    style.icon = myCoolIcon;
}

That probably needs more careful thinking to make it good, but it's not very hard to understand, and it's not so spaghetti either.

Posted by: tompalmer on April 18, 2007 at 09:19 AM
http://weblogs.java.net/blog/cayhorstmann/archive/2007/04/whats_so_taxing.html
Kylotan - Moderator, Scripting Languages and Game Mods

Are character artists are higher skilled than Environment artists? Kylotan replied to JustASpaceFiller's topic in 2D and 3D Art

No. There are different skills involved but it's simply false to suggest that one type of artist is 'higher skilled' than another based purely on their title.

AI Duplication Issues Kylotan replied to vex1202's topic in Artificial Intelligence

It's not clear what all your public variables are, so it's hard to be sure. If 'enemy' was the same character as the one doing the AI, it's easy to see how the last line of navigation would actually do nothing, for example. Use Debug.Log lines to see which code branches are being executed, and which values the code has at that time.

C# Help with my first C# Hello World! Kylotan replied to TexasJack's topic in General and Gameplay Programming

Sometimes it can be counterproductive to learn C# separately from Unity because the way you structure a program in Unity is very idiomatic. So bear that in mind.

"I am not sure what the differences between 'running' and 'building' code" - Building code prepares it to be run. Running code actually executes the program. Often a button to 'Run' will check to see if it needs building, and will build it if necessary, and then run it. If you are seeing your program's output, it's running.

"There are several bits of jargon that keep being thrown around, without much in the way of explanation: 'Classes', 'The Main Method', 'Namespace'." - These are big, complex concepts and you're not going to get an explanation from tool-tips or whatever. This is a whole programming language with books dedicated to it.
If you don't want to get a book, your best bet is to get used to using the online references. In the meantime:

class - a grouping of data with the functions that operate on that data. Objects in your program are usually instances of a class. Each class might have zero, one, or many instances.

main method - the function that gets called at the very start of your program, and that then calls other functions to get everything done.

namespace - a way to group classes together under a related name, for ease of use and to avoid naming clashes.

"line-by-line explanation" - why did you write it if there wasn't an explanation for it? Try finding a tutorial that actually explains what it is telling you to do. Don't get into the habit of typing code from the internet unless someone is explaining what it does.

"using system" - this gives you access to a bunch of pre-defined objects in the 'system' namespace. In this case, the only one you're using is 'console'.

"namespace helloworldtwo" - this puts everything inside the next pair of braces into the 'helloworldtwo' namespace. In this code, this actually does nothing special, but if code in other files wanted to access the class or function in here, it would need to prefix it with "helloworldtwo", or use a "using" statement like you did with 'system'.

"class Mainclass" - this denotes the start of a class called 'Mainclass'. The name is arbitrary. A class just groups data and functions (functions are usually called 'methods' when in a class), but in this case there is no data, so it's just holding the function.

"public static void Main (string[] args)" - this starts to define a function. The function is public, meaning any part of your program can access it; static, meaning you don't need a specific instance of that class to use the function; it returns 'void', which is a fancy way of saying 'nothing'; and it accepts an array of strings as an argument, and calls them 'args'.
Note however that your function currently ignores those strings.

A function is a block of code which takes some arbitrary input, does some stuff with it, and then returns some arbitrary output. In mathematics, an example of a very simple function is "square root". If you pass 25 into a square root function, you get 5 out. And so on. In programming you can also have statements that cause side effects - like Console.WriteLine - so your functions can do more than just return values, and this makes it practical to build entire programs out of functions.

"Console.WriteLine ("Hello World!")" - you already figured this bit out.

The point of this sort of system is that a state update takes some length of time (say, 'M') to reach the client from the server, and then some length of time (say, 'N') before the client fully reflects that change, to help ensure the client has received another state update from the server before the current one is reached. You can adjust N slightly on a per-message basis to account for fluctuations in M. In other words, if a message arrives sooner than expected, you might want to take longer to interpolate towards it, and vice versa. This keeps movement smooth.

It is reasonable to consider decreasing N during play so that it's not too long, imposing unnecessary client-side latency. This will be determined by M and your send rate. It's also reasonable to consider increasing N during play so that it's not too short, leaving entities stuttering around, paused in the time gaps between reaching their previous snapshot position and receiving the next snapshot from the server. This is determined by the variance of M (jitter) and your send rate.

Often N is left at a fixed value (perhaps set in the configuration), picked by developers as a tradeoff between the amount of network jitter they expect to contend with, and the degree of responsiveness the players need to have.
And if you're happy occasionally extrapolating instead of just interpolating, i.e. you are happy to accept less accuracy for less latency, then you can reduce N down even further.

The idea to "catch up earlier" doesn't make sense in isolation. Earlier than what? The idea is that you always allow yourself a certain amount of time to interpolate smoothly towards the next snapshot, because you still want to be interpolating when the subsequent one comes in. You don't want to be decreasing that delay over time because of the stuttering problem above, unless you have sufficient information to be able to do so.

C++ Savegame headers? Kylotan replied to suliman's topic in General and Gameplay Programming

The first way makes a lot more sense.

I would like to hear more about your day-to-day experience at Game Developer / Designer jobs. Kylotan replied to Alexander Kirchner's topic in Games Career Development

If you want to teach yourself game programming, be aware that this is likely to take quite a while before you are at an employable level. Also be aware that the work done by designers and the work done by programmers is very different.

Then the games industry is probably not for you. On more than one project I've worked on, I've been assigned tasks that consisted of a single sentence like "Implement dialogue windows" or "Fix NPC animation" with no indication of what that truly means. The implication is that you can work it out for yourself, if you dig into it and speak to enough people. If you're really lucky there's something written about it somewhere (most likely an email thread you weren't included on, 3 months ago.) Then what inevitably happens is you do the work, commit the work, and someone says "no, I didn't mean like that. I wanted it more like...<whatever>" Repeat until shipping day.

As for tedious detail work... sorry, there's plenty of that. Type just 1 letter wrong in your code? It won't build. Or will crash. Set one value wrong on a material?
It'll look wrong, and you'll have to dig through to find and fix it. Game development is complex, detailed business.

You're not going to find this in the games industry, sorry. You don't get to be the ideas person without spending years in the "finalize everything" trenches. Why would we trust someone to come up with ideas if they have no appreciation of how they would be implemented?

Onto your questions (though I feel the answers are now irrelevant):

In your daily work life, do you feel like you are being stimulated with new tasks frequently, or do you mainly work on similar tasks? Depends entirely on your viewpoint. Some people in other industries are amazed that I have to work on the same project, inside the same codebase, for years on end. Every day I am "writing code", so that's the same task. Sometimes it's a more interesting feature, sometimes it is not.

Do you feel like you have the possibility to innovate and bring in new ideas in your job / task? Yes, but that's because I'm a senior member of a small team. As a junior member getting into the industry your expectations have to be much lower.

Do you feel that, for the most part, your position has clearly described activities, or are you mainly taking on various roles that are currently needed? Again, this is a matter of perspective. My job is 98% programming, so that is 'clearly described'. Is what I have to deliver clearly described? No, not at all.

Do you get a lot of feedback on performance or progress? Or is your work mainly done 'when it's done'? Both. Most developers are expected to take responsibility for monitoring their own progress and delivering tasks on time, but there will be feedback on what is delivered, at some stage.

No, because that's not necessary, nor does it make sense. If the object is moving, then your local interpolated data will continually be following those positions, N milliseconds in arrears, where N is the length of the buffer you chose. If the object is not moving, then your local version will catch up to the final position within N milliseconds.
If the object is not moving, then your local version will catch up to the final position within N milliseconds. There is no need to change the speed of anything. - If you were seeing discontinuities in the position, then you weren't treating the currently interpolated position as the new start position, or you weren't resetting the interpolated time at that point. If you were seeing the object continually stop and start, then your updates were arriving too late to ensure that there was always a future position queued up. One good way to debug this sort of issue is to slow everything down - drop the network send rate to something like once per second, set your update rate at something like 20 or 30 frames per second, scale up all other time values accordingly, and do a lot of logging to see when things change in ways you don't expect. Get it working correctly at the slower speeds, where it's easier to debug any issues. Then ramp up the speed to what you actually need. Also, ensure you understand how to interpolate properly. If your interpolation looks like Lerp(start, end, 0.5) (or any other constant) then that is not truly interpolating linearly between the values over time. If this isn't enough information to help you, then you will probably have to show your snapshot handling and interpolation code. Unreal Memory Allocation Kylotan replied to SillyCow's topic in Engines and MiddlewareI don't think you should expect to get that sort of low level access to the engine's memory management. I suspect that you will either need to use a different JPEG decoder, or find a different way to achieve your aims here (e.g. decompress the images ahead of time, use a video codec, etc) - It's not clear why you didn't try the simplest solution to the original problem: when a new future position comes in from the server, you lerp directly towards that from wherever the client representation is now. 
In other words, the currently lerped position becomes the 'now, T=0' position and the received position becomes the 'later, T=N' position, where N is some number of milliseconds.

The reasoning for using a time in the future is as follows:

- If you instantly snap to whatever position the server reports then it's obviously going to look very jerky - so you want to smooth that over some period of time. (Let's call that a 'time buffer' for the purposes of this post.) You interpolate between a start position and a next position.
- That time buffer effectively adds more latency, because your client has that extra delay before it moves the entity where it needs to be. So you want the buffer to be as short as possible to reduce this.
- On the other hand, you always want to have a 'next position' queued up in the future or the entity will stop moving while it waits for the next snapshot to come in. (If you think about it, the 'instant snap' situation can be considered a special case where the interpolation is over 0 milliseconds.)

So, you choose a time buffer length that is short enough not to add too much extra latency, but long enough that you always have a next position from the server queued up, even in the face of varying message ping/latency (known as 'jitter').

Regarding Pong - I think this method would be fine. You have very few entities to update and therefore very low bandwidth requirements, so I'd just suggest sending messages as frequently as possible.

The pros and cons are exactly what you'd think they are. E.g. if you have a web interface, the pros are that you get to use your own browser, the cons are that you need to do everything via HTTP and HTML. If you have a console interface, the pros are that it's very simple text in, text out. The cons are that it's just text in, text out.
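The lerp-towards-the-latest-snapshot scheme from the interpolation replies above fits in a few lines. A sketch (all names are mine, and a single float axis stands in for a full position vector):

```java
public class SnapshotInterpolator {
    // Whenever a snapshot arrives, the current interpolated position becomes
    // the new start (T=0) and the snapshot becomes the target, to be reached
    // bufferMillis from now - so there is never a visible snap.
    private float startX, targetX, currentX;
    private long startTime, targetTime;

    SnapshotInterpolator(float initialX) {
        currentX = startX = targetX = initialX;
    }

    void onSnapshot(float snapshotX, long nowMillis, long bufferMillis) {
        startX = currentX;              // begin from wherever we are now
        targetX = snapshotX;
        startTime = nowMillis;
        targetTime = nowMillis + bufferMillis;
    }

    float update(long nowMillis) {
        if (nowMillis >= targetTime) {
            currentX = targetX;         // buffer elapsed; hold at the target
        } else {
            // interpolate over *time*, not by a constant factor
            float t = (nowMillis - startTime) / (float) (targetTime - startTime);
            currentX = startX + (targetX - startX) * t;
        }
        return currentX;
    }
}
```

If a new snapshot arrives mid-interpolation, onSnapshot simply restarts the blend from the current position, which is exactly the "new start position" fix described above for discontinuities.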
You need to think about what you want from the tool - again, depending entirely on what type of server you're talking about, where you're hosting it, and what kind of people need to query it - and decide on the interface you need in order to meet those requirements.

Yes, you do typically need some way to query the server. What you need depends on what type of server you're talking about, where you're hosting it, what kind of people need to query it, etc. You could create a specialised client, you could have a console-style login (e.g. with SSH), you might have a web interface, etc.

Accounting for lost packets? Kylotan replied to Substance12's topic in Networking and Multiplayer

Just to add to what was said above:

The 100ms server-side delay seems like the wrong thing to do. Just apply the data when it arrives. If you ever receive a message that is out of order - i.e. you have already handled message 10, but now message 9 arrives - just drop it.

A client-side delay however is a useful tool to reduce the effects of varying transmission speeds (aka "jitter"). The idea is usually to treat any received data as applying to some future time, so that you will always have 1 or 2 future states to blend smoothly towards. Note that a fixed number of milliseconds after receipt is probably less optimal than a varying time after receipt, where the variance takes transmission time into account. If each message is stamped with the server's sending time, you can get an idea of which messages are arriving 'early' and which are arriving 'late', and also get a feel for how big the delay needs to be in order to cover this variation.

If you're sending snapshots, and sending them less often than you collect them, bear in mind there's no point sending the earlier ones - their information is superseded by the newer data.

50 milliseconds doesn't mean '30 tickrate' unless someone changed the duration of a second recently.
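The "just drop out-of-order messages" rule above is tiny in code, assuming the server stamps each message with an increasing sequence number. A sketch (names are mine):

```java
public class SequenceFilter {
    // Tracks the newest message applied so far; anything not strictly newer
    // is stale or a duplicate and gets discarded.
    private int lastApplied = -1;

    boolean shouldApply(int sequence) {
        if (sequence <= lastApplied) {
            return false;           // arrived late or twice: drop it
        }
        lastApplied = sequence;     // gaps are fine; newer data supersedes older
        return true;
    }
}
```

Note that gaps in the sequence are deliberately tolerated: if message 11 is lost but 12 arrives, 12 carries newer state anyway, which is the same reasoning as not sending stale snapshots.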
Learning Unreal C++ development - Kylotan replied to SillyCow's topic in Engines and Middleware

When working with UE4, you practically have to use Blueprints. They're not an alternative to code; they're the backbone of the system. Even if you went the hard route and made all your logic in code, you would still almost certainly be interfacing with the Blueprint system for object instantiation, not to mention all the other parts of the visual scripting system. UE4 is not a code framework with some free graphical tools thrown in - it's a full development environment with hooks for you to add code, with C++ almost considered a scripting language.

So unfortunately my answer to you is to get used to Blueprints. You can still work from examples, but instead of copying and pasting you will need to re-create example Blueprints. This is actually very quick to do, given that each node can be created in about 10 seconds and joining 2 nodes together takes under a second.

Once you're more comfortable editing Blueprints, you could then practise extending the system - perhaps create a new component and add it to an actor, giving it some UPROPERTY values so you can set them in the editor. Then create a Blueprint-exposed function, and then call that function from within a Blueprint, e.g. from an Event. Maybe give the component a Tick and have it perform some game-loop processing, altering other components. Etc.

Once a programmer is very familiar with the Unreal object model, when events are called, and how each part of the system interacts with other parts, it's possible to start converting a lot of the logic away from Blueprints and into code - but nobody is expected to start with a 'code only' approach.

"i need learn more about the Game Loop" - Kylotan replied to cambalinho's topic in For Beginners

Just to take this back to a more fundamental level...
At a very basic level, for pretty much any task, computers work like this:

Collect input -> Process data based on input -> Display output

Lots of tasks require - or at least benefit from - repeating this process so that new input can be processed, and perhaps so the user can view the output and provide different input based on it.

Collect input -> Process data based on input -> Display output -> Repeat from start

Real-time systems like computer games and simulations work this way, with the additional constraint that they have some sort of hard or soft 'deadline'. In a computer game, the deadlines are typically 'soft' (in that the program doesn't break entirely if they are missed) but they are quite short, e.g. 33ms for a 30fps game or 16ms for a 60fps game. So the loop is executed with this deadline in mind. Note that on personal computers it's impractical to guarantee that each loop iteration takes a precise amount of time, so you normally aim for an arbitrary deadline but are prepared to measure the actual time taken and process with that in mind instead.

Collect input for next 16ms -> Process data based on input to cover the next 16ms -> Display output for the next 16ms -> Repeat from start

(16ms can be swapped for any other small value, constant or variable.) Each of these iterations is generally called a 'frame', because you get one iteration for every frame rendered to the screen (generally).

So, the way you would make an enemy move in this basic system is to recognise that a moved enemy is a type of data processing (i.e. the position data changes), and that the movement covers a certain time span (how far can an enemy move in 16ms? Or however long your frame took?). Each time through the loop, you move the enemy that tiny amount, then display the new position on the screen, and repeat.
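The loop described above can be sketched as a toy program. This is my own illustration, not code from the original thread: a real game would poll input devices and render to the screen rather than print, and would measure the actual elapsed time per frame instead of assuming a fixed dt.

```python
def game_loop(frames=3, dt=0.016):
    """Toy fixed-step loop: one input/update/output pass per 16 ms 'frame'."""
    enemy_x = 0.0
    speed = 100.0                       # units per second
    for frame in range(frames):
        # 1. Collect input (stubbed out in this sketch)
        # 2. Process data: move the enemy a tiny, time-scaled amount
        enemy_x += speed * dt
        # 3. Display output (a real game would render here)
        print("frame %d: enemy at x=%.2f" % (frame, enemy_x))
        # 4. Repeat; a real loop would measure the actual frame time
    return enemy_x

game_loop()
```

Because the movement is scaled by dt, the enemy covers the same distance per second whether the loop runs at 30 or 60 iterations per second.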
I'm trying to make an algorithm that finds the first 10 or so terms of a function's Taylor series, which requires finding the nth derivative of the function for the nth term. It's easy to implement derivatives by following the definition of the derivative:

$$f'(x) = \lim_{h\to0}\dfrac{f(x+h)-f(x)}{h}$$

implemented here in Python:

dx = 0.001

def derivative(f, x):
    return (f(x + dx) - f(x)) / dx

The value seems to be even closer to the actual value of the derivative if we define it like this:

dx = 0.001

def derivative(f, x):
    return (f(x + dx) - f(x - dx)) / (2 * dx)

which just returns the average of (f(x + dx) - f(x)) / dx and (f(x) - f(x - dx)) / dx. For higher-order derivatives, I implemented a simple recursive function:

dx = 0.001

def nthDerivative(f, n, x):
    if n == 0:
        return f(x)
    return (nthDerivative(f, n - 1, x + dx) - nthDerivative(f, n - 1, x - dx)) / (2 * dx)

I tested the higher-order derivatives of $f$ at $1$, where $f(x)=x^9$, and as can be proved by induction,

$$\dfrac{d^n}{dx^n}(x^k)=\dfrac{k!}{(k-n)!}x^{k-n}$$

Therefore, the nth derivative of $f$ at $1$ is $\dfrac{9!}{(9-n)!}$. Here are the values returned by the function for n ranging from 0 to 9:

n  Value            Intended value
-----------------------------------
0  1.000            1
1  9.000            9
2  72.001           72
3  504.008          504
4  3024.040         3024
5  15120.252        15120
6  60437.602        60480
7  82298.612        181440
8  32278187.177     362880
9  95496943657.736  362880

As you can see, the values are waaaay off for $n$ greater than $5$. What can I do to get closer to the actual values? And is there an algorithm for this that doesn't have $O(2^n)$ performance like mine?
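One way to address both problems - sketched here as a suggestion, not taken from the original post - is to compute the n-th derivative directly from the n-th central finite difference, which needs only n + 1 function evaluations instead of O(2^n) recursive calls, and to use a larger step h for higher orders so that the division by h**n does not amplify rounding error catastrophically. The function name and default step below are my own choices for the sketch.

```python
from math import comb  # Python 3.8+

def nth_derivative(f, n, x, h=0.1):
    """n-th central difference: sum_k (-1)^k C(n,k) f(x + (n/2 - k) h), over h^n."""
    total = 0.0
    for k in range(n + 1):
        total += (-1) ** k * comb(n, k) * f(x + (n / 2 - k) * h)
    return total / h ** n

f = lambda x: x ** 9
print(nth_derivative(f, 9, 1.0))  # close to 9! = 362880
```

The truncation error is O(h^2), so h has to balance two effects: too large and the formula is inaccurate, too small and floating-point cancellation dominates once you divide by h**n. For a degree-n polynomial and order-n differences the formula happens to be exact up to rounding, which is why the n = 9 case above comes out so close.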
Company: Connell Insurance
Software: VMware vSphere
Hardware: Apple Mac Mini

[William] - Hi Bryan, thanks for your time this afternoon. I know you have been pretty busy these last couple of weeks and I am glad we got some time with you to chat about your environment. Before we get started, can you please introduce yourself and what you are currently responsible for?

[Bryan] - Hi William. Thanks for the interest! My name is Bryan Linton. I'm currently the IT Director for Connell Insurance, an independent insurance agency in Southwest Missouri. We've been around for a long time but have been growing more rapidly in recent years. We currently have around 40 employees. We have most of our systems in a secure off-site datacenter, but I still need some support systems onsite, and that's where your project bringing together inexpensive Mac Minis with ESXi caught my attention.

[William] - Can you tell us a bit more about your Mac Mini infrastructure? What is the hardware and software configuration and the type of workload you are currently running on them?

[Bryan] - First of all, I didn't buy the "server" versions of the Mac Mini - I just ordered the standard Mac Minis with stock RAM and storage. My only exception to going stock was on the CPU. For that, I went all-out and ordered the i7 quad-core processors, since I knew I'd be using them as servers. But I bought a large SSD, a data doubler kit that allowed me to mount the second HDD, and 16GB of RAM, and installed all those myself, since it was cheaper than spec'ing it that way from Apple. I also bought a 16GB low-profile USB flash drive to use as my install point and boot device. It's working fine booting and running ESXi 5.5. I've always used ESXi embedded on my servers going all the way back to version 3.5, so I was already comfortable and familiar with booting and running ESXi from a flash drive.
So to summarize the hardware: that's a quad-core i7, 16GB of RAM, a 1TB spinning HDD and a 500GB SSD as datastores, booting ESXi 5.5 from a tiny flash drive that barely protrudes from its USB port.

On the software side, I downloaded your turnkey ISO for ESXi 5.5 for the Mac Mini. Your advice to enable support for the Thunderbolt Ethernet adapter was easy to follow, and that gave me two 1GigE NICs for whatever I need. The installation was about as simple as you can get.

My workload currently is very light. We just opened a branch office, so that's where I'm using the Mac Mini. I installed a DC for the site, and have set up just one other support machine so far. It runs the management software for our video surveillance, alarm system remote administration, and the controller software for my WiFi access points. I'll probably add (or move) other support roles to this machine before long. It's not working that hard and, to be honest, is probably faster than the 6-year-old Xeon-based server currently filling the support-system role inside our main office. The two offices are very well-connected, so I can move around support systems almost without concern for which site uses them most. So I do foresee loading up the Mac Mini with more work.

Here is a picture of Bryan's setup:

[William] - What was the deciding factor on choosing a Mac Mini versus a Mac Pro or any other platform? Given that the Mac Mini is consumer-grade hardware and is currently not a supported platform, were there any architecture decisions you had to make on the infrastructure or application side to accommodate this fact?

[Bryan] - The Mac Pro is fantastic hardware, so if you need heavy-duty power behind your VMs it's certainly worth looking at. But if you really beef up a Mac Pro you're back in the cost realm of what server hardware typically costs. I honestly didn't look at the compatibility or support of the Mac Pro with ESXi because we didn't need that kind of power for our new branch office.
As for other platforms considered, we're largely a Dell shop. We use their small-form-factor desktops for our user workstations. I considered using one of our retired workstations as a "server", but was afraid it would be too slow, even with an SSD, and it didn't have any more RAM capacity than the Mac Mini. Plus it's obviously bigger and less power-efficient, and it's probably less hardware-compatible with ESXi. That's why we ultimately chose the Mac Mini.

The main limitation of the consumer-grade hardware in the Mac Mini, for us, was RAM. The Xeon processors in my "proper" servers running in our CoLo aren't ever taxed nearly as much as the other compute resources, so to me the i7 quad-core CPU seemed more than adequate. Having the RAM maxed out at only 16GB, though, made me put extra thought into how I can make the best use of ESXi's transparent page sharing (TPS) between VMs. We're mostly a Windows shop, so it made sense to me to standardize on a single Windows Server version for a given Mac Mini, and try to keep all those VMs at the same patch level. That way, the odds are better that many of the core OS pages in RAM will be identical, meaning the use of TPS will increase, and more RAM will be available to run applications (or even more VMs). We chose Server 2012 Standard as the go-to Windows server OS, and I look forward to loading up the Mac Mini to see just how far ESXi's advanced memory management techniques will let me push it.

The other constraint was network connectivity. For example, I have just 2 NICs, and I'm currently in the process of working out a backup strategy. I think I'm going to use a backup appliance running on the Mac Mini with a relatively inexpensive NAS as the backup destination, and if needed I can dedicate a physical NIC to that, but with only two NICs, I have to think a lot about network traffic management.
I have a couple of other ideas that involve using a Thunderbolt dock and/or USB 3.0 NICs to increase the number of USB ports and NICs available to me. The big question there is driver support in ESXi, but I haven't yet researched or tested those ideas at all.

[William] - You mentioned the current workload is pretty light and you plan to deploy additional Windows Servers. What type of workloads will these Virtual Machines be running? Do you see a need to scale up your Mac Mini infrastructure from what you have today?

[Bryan] - I'm not yet ready to put critical production data on them - not until I have a better feel for what backup looks like, and until I really push the limits of the hardware resources. But support systems like I mentioned above - a Wi-Fi controller, an NVR for surveillance (with the data stored off-box), things like AV management, Spiceworks, syslog servers, additional DCs to provide a global catalog server, a DFS namespace server, and maybe a replica file server synced with DFS replication - are all good possible candidates for running on a Mac Mini. We have an email archiving system that has to run somewhere. Its data is stored on a remote share and DB, but the processing can happen anywhere, and the Mac Mini will handle it fine. That lets me keep resources free for the mission-critical apps that run on my "real" servers that DO contain our critical production data. If I can improve user experience by offloading non-critical support systems to the Mac Mini, there's less resource contention on my vCenter-managed hosts, and user responsiveness should benefit from that.

I also have software firewalls that I use to create a "double-router, dual-NAT" environment so I can run or build machines in a test, isolated environment, with internet connectivity, with the IP addresses they use (or will use) in production without conflict. They run in perfect isolation.
The Mac Mini can host those software firewalls along with any machines that are either being built, or perhaps being restored from backup for exploratory purposes, or even for testing of upgrades or new software in a mirror environment before it goes into production.

As for scaling up, so far I haven't considered adding a Mac Mini ESXi host to my vCenter environment. But I might consider that if two things happened:

1. Apple starts supporting more RAM in the Mac Mini.
2. VMware decides to support the Mac Mini hardware officially.

Actually, as I get more time and experience using this setup, number 2 may diminish in importance. Time will tell.

[William] - Your last reply was quite interesting. You mentioned you have not considered adding your Mac Mini ESXi hosts to vCenter Server? I'm curious to hear why the support of the Mac Mini would dictate the ease of centralized management. Is it from a support standpoint that you did not want to do this, or additional licenses?

[Bryan] - My Dell ESXi servers are managed by vCenter, but I have three of them, which is my license limit currently. If I had a free slot for the Mac Mini I'd certainly use it. But I'd have a hard time justifying the purchase of *additional* licenses for vCenter to bring machines under management that aren't even officially supported. But yeah - if I had the licenses I'd have no hesitation in managing them via vCenter. I get around it currently through the use of shared datastores.
Without your pages I would not have made the leap. My advice would be:

- Don't expect it to do what a $5,000 investment will do.
- If you plan to run mission-critical apps or host production data, KNOW what your backup and recovery process looks like and TEST it.
- If you understand you're striking out somewhat on your own, and you don't mind being a pioneer for the fun of it, do your due diligence, and if it seems like a fit, enjoy it!

It's honestly fun to show people my rack. "That's our server. It's running ESXi 5.5." "...Really?!?"
[fixed subject]

Romain Guillebert, 12.08.2011 03:19:
> I tried to compiled Demos/primes.pyx using the ctypes backend and I
> think I've found a bug :
> The Entry for the kmax parameter does not set is_arg to 1. However if I
> turn the def into a cdef, it works fine.
>
> Is this a bug or a known hack ?

Vitja already brought this up, too. A quick grep on the sources gives me this:

"""
Cython/Compiler/Buffer.py: if entry.is_arg:
Cython/Compiler/Buffer.py: if entry.is_arg:
Cython/Compiler/ExprNodes.py: return entry and (entry.is_local or entry.is_arg) and not entry.in_closure
Cython/Compiler/FlowControl.py: return (entry.is_local or entry.is_pyclass_attr or entry.is_arg or
Cython/Compiler/FlowControl.py: self.is_arg = False
Cython/Compiler/FlowControl.py: self.is_arg = True
Cython/Compiler/FlowControl.py: if assmt.is_arg:
Cython/Compiler/FlowControl.py: # TODO: starred args entries are not marked with is_arg flag
Cython/Compiler/FlowControl.py: if assmt.is_arg:
Cython/Compiler/FlowControl.py: is_arg = True
Cython/Compiler/FlowControl.py: is_arg = False
Cython/Compiler/FlowControl.py: if is_arg:
Cython/Compiler/Symtab.py: # is_arg boolean Is the arg of a method
Cython/Compiler/Symtab.py: is_arg = 0
Cython/Compiler/Symtab.py: entry.is_arg = 1
"""

This doesn't look like it would be wrong (or even just unsafe) to consider the unset flag a bug and fix it. Basically, it's almost unused in the original sources; all places where the flag is being read currently are somewhat recent code that looks reasonable to me and that appears to assume that the flag is actually set.

So, I'd say, if anyone wants to properly clean this up, please go ahead and do so, but please do it right on the master branch and send it through Jenkins.

Stefan
hi community :) i'm absolutely new to Unity3d and C# as well. currently i'm trying to create a little top-down view game and have run into one big issue, which i cannot solve by myself.

first i tried to move a character (sprite) around with WASD. succeeded. then i added some kind of a "look at mouse" script. that's working, too. now, i want my camera to follow the character. fails! i thought i could easily make the mainCamera a child of the sprite in the hierarchy, but that causes my sprite to rotate around its z-axis constantly. my guess is that this results from the way i calculate the new rotation of the sprite.

using UnityEngine;
using System.Collections;

public class LookAtMouse : MonoBehaviour {

    // Use this for initialization
    void Start () {
    }

    // Update is called once per frame
    void Update () {
        Vector3 mousePos = Camera.main.ScreenToWorldPoint(Input.mousePosition);
        transform.rotation = Quaternion.LookRotation(Vector3.forward, mousePos - transform.position);
        Debug.Log(Input.mousePosition + " vs " + mousePos);
    }
}

yes, that's where i'm stuck now. i think there's a way to solve it, if i could only figure out how. maybe you could calculate the mousepointer's position (in the world) another way? Thanks!

I'm baffled by your question. There is no reference to the character in this code, so how could it follow a character? ScreenToWorldPoint() is used incorrectly for a perspective camera and will only return the position of the camera. There's no code here that modifies the transform.position, so the camera will not move. The rotation code always looks towards the world 'z'. I think you need to explain further exactly what you are trying to do, and how you have things set up.

the code above is for the rotation of the sprite. it's attached to the sprite. the camera should not rotate or do anything else. it should just follow the sprite, looking down on it, but it starts rotating as soon as i made it a child of the sprite.
Answer by froYo · Sep 03, 2014 at 11:35 AM

Create a new script and add this.

public Transform playerTransform;

void Update () {
    transform.position = new Vector3(playerTransform.position.x, playerTransform.position.y, transform.position.z);
}

Drag this script onto your camera and drag the player into the playerTransform variable in the editor. For smoother movement, consider using: Vector3.Lerp

Hope this helps.

That solved my problem! Thank you very much. I really need to learn how to think.
question about REST

simon tiberius, posted Nov 07, 2012:

I'm trying to implement a RESTful Java web service, and I have some questions. I'm making an online shopping web service with the following spec (this is just the basics):

1. Customer: CustomerID, Name, Address, PhoneNumber
2. Order (by Customer): OrderID, CustomerID, CreationDate, GrandTotal (from Parts)
3. Part (by Order): OrderID, PartID, UnitCost, Quantity, LineTotal (Unit * Quantity)

Now, think specifically about these user stories related to the Orders (not Customers, not Parts):

1. Customer lists orders to know which orders she has placed
2. Customer creates a new order to be fulfilled
3. Customer views an existing order to get a reminder of what was ordered
4. Customer updates an order (adds or deletes parts, or changes quantities) when she realizes it is incorrect
5. Customer can delete an order that is no longer needed (let's not worry about whether the order has been processed for now)

Some questions:

1. REST is just a spec, right? Not an implementation. So REST is just like a JSR.
2. Is Order a resource? (I'm still trying to understand the terminologies in REST.)
3. In my understanding, Order is something the user creates, while a resource is anything (data) owned by the server. So the user must first create an order (by querying and choosing already-existing resources like parts from the server) and then post the created order to the server. After the order is created, it will become a resource and can be accessed with a URI like GET orders/{customerID}. Is this correct?
4. I read a tutorial about defining the interface with XML Schema, but the tutorial left out the implementation. I don't really know how XML Schema fits into a RESTful architecture.
I usually only use model classes annotated with JAXB annotations for XML data transport. Can anybody explain a bit about this?

5. For security, I also want security at a low level (database level). How do you normally provide security using stored procs to prevent unauthorized access? Is it also recommended to use views to get data instead of creating the query string dynamically? I'm using MySQL.

6. Because Order is concerned with money, I need to know the security risks in this scenario. Can anyone show what could go wrong with placing an order? May I hear the security risks in placing an order online like in this app, and how to mitigate them?

Thank you very much.

sandeeprajsingh tandon, posted Nov 08, 2012:

1 - "REST is just a spec, right?" - I would say yes, it's a spec, it's an architecture. That's why they say services are "RESTful". Any service that has those characteristics is said to comply with being REST.

2 and 3 - "Is Order a resource?" - When you say an order is created, I am assuming that there would be a service like createOrder where you would pass in a value object with order details and create an order. So the class "OrderService", which would service all the order-related operations (createOrder, deleteOrder), would in my view be termed a resource.

4 - I don't understand the question here, but I know that a RESTful web service can produce XML without an XSD. I have been doing it the same way as you. Are you saying that there could be a schema around that? There can be, but how is it better than your annotated model classes?

5 - Views are recommended, but that's a give and take. If you feel that the volume isn't big enough, you could go ahead with a dynamic query string. Otherwise, views are recommended. I am using dynamic query strings all the time.

6 - Security is an overloaded term here. Are you referring to preventing hacking, or to using RESTful protocol-based security? RESTful services are secured enough with HTTPS.
simon tiberius, posted Nov 08, 2012:

Thank you very much for the reply. Regarding your answers:

1. Your answer to my questions 2 & 3 was about whether Order is a resource or not. You're saying that Order is not the resource, but that the order-related services are the resources (create order, delete order, update order and get order)?
2. I'm reading a book about REST and it mentioned XSD, so I want to know what the role of XSD is here.
3. "Security" maybe is too broad, so let's focus on issues revolving around "identity theft". Say customer A places the order, sets the shipping address to A's house, but sends the billing to B's credit card. Something like that.

sandeeprajsingh tandon, posted Nov 08, 2012:

No problem, you are welcome!

1 - Yes, the class OrderService would be a resource.
2 - In my opinion XSD per se is about enforcing contracts on XML; it is not DIRECTLY related to REST. In my project we give out XML from the services without XSDs. Let me know if you come across a good example which mentions otherwise.
3 - But won't this be part of your application design, DB design, and entity model already? RESTful services are stateless (I think), so it has to depend on what you pass to them and the way you design them. If you pass a valid customer and can link his address and credit card details through an ID of sorts, you should be fine. Don't try to have a class-level variable in the OrderService class (it would be like giving a state to the RESTful service; a user authentication object would be a little dangerous to keep as a class-level variable). You can try out a small code sample to test it out. I am not sure if this is what your question was, though.
simon tiberius, posted Nov 08, 2012:

okay, usually I use naming like this for resource classes - XYZResource - like so:

public class OrdersResource {

    @POST
    public void create() {
    }

    @GET
    public Order get() {
        return null; // stub
    }

    @PUT
    public void update() {
    }

    @DELETE
    public void delete() {
    }
}

so which one is the resource? the class OrdersResource (only one resource), or the 4 methods inside the class (so there are 4 resources)? because right now, the URIs for orders look like this - all the URIs start with /order:

create : /order
read : /order/{order-id}
update : /order/{order-id}
delete : /order/{order-id}

also, is it RESTful to include the userID in the request header? what's the best practice for checking whether a request is authenticated or not? thanks

sandeeprajsingh tandon, posted Nov 09, 2012:

Hey, I think I was wrong. Referring to this link and other links, WS-Resource defines a WS-Resource as the composition of a resource and a Web service through which the resource can be accessed. So it has to be WHATEVER you are trying to access through the service. So your ORDER, CUSTOMER (i.e. all the nouns) are resources. Extremely sorry about that.

There is nothing UNRESTful about keeping the userID in the request header. REST is supposed to be STATELESS, so do not maintain session state (authentication state) on the server side; each request should be authenticated separately, that is, if you plan to do it via a RESTful service. I was reading this and this can be used. In our project we use Oracle Single Sign-On so I can't provide much input there.

I agree.
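The verb-to-operation mapping discussed in this thread - the noun (Order) is the resource, and the HTTP method selects the operation on it - can be sketched language-agnostically. This is a toy in-memory dispatcher of my own invention, not JAX-RS and not code from the thread; all names are made up for the illustration.

```python
orders = {}      # the resource state: order-id -> order data
next_id = [1]    # mutable counter for new order ids

def handle(method, path, body=None):
    """Toy dispatcher: HTTP verb + /order URI -> CRUD on the orders resource."""
    parts = path.strip("/").split("/")
    if parts[0] != "order":
        return 404, None
    if method == "POST" and len(parts) == 1:   # create: POST /order
        oid = next_id[0]
        next_id[0] += 1
        orders[oid] = body
        return 201, oid
    if len(parts) != 2:
        return 404, None
    oid = int(parts[1])
    if oid not in orders:
        return 404, None
    if method == "GET":                        # read: GET /order/{order-id}
        return 200, orders[oid]
    if method == "PUT":                        # update: PUT /order/{order-id}
        orders[oid] = body
        return 200, oid
    if method == "DELETE":                     # delete: DELETE /order/{order-id}
        del orders[oid]
        return 204, None
    return 405, None
```

Note that the dispatcher keeps no per-client state: everything it needs arrives in the request, which is the statelessness point made above. Authentication would likewise be checked per request, e.g. from a header, before dispatching.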
The Build Process

From a user-facing standpoint, Pages and other custom controls are really a trio of user-editable files. For example, the definition of the class MainPage is comprised of three files: MainPage.xaml, MainPage.xaml.h, and MainPage.xaml.cpp. Both MainPage.xaml and MainPage.xaml.h contribute to the actual definition of the MainPage class, while MainPage.xaml.cpp provides the method implementations for those methods defined in MainPage.xaml.h. However, how this actually works in practice is far more complex.

This drawing is very complex, so please bear with me while I break it down into its constituent pieces. Every box in the diagram represents a file. The light-blue files on the left side of the diagram are the files which the user edits. These are the only files that typically show up in the Solution Explorer. I'll speak specifically about MainPage.xaml and its associated files, but this same process occurs for all xaml/h/cpp trios in the project.

The first step in the build is XAML compilation, which will actually occur in several steps. First, the user-edited MainPage.xaml file is processed to generate MainPage.g.h. This file is special in that it is processed at design-time (that is, you do not need to invoke a build in order to have this file be updated). The reason for this is that edits you make to MainPage.xaml can change the contents of the MainPage class, and you want those changes to be reflected in your Intellisense without requiring a rebuild. Except for this step, all of the other steps only occur when a user invokes a Build.
Partial Classes

You may note that the build process introduces a problem: the class MainPage actually has two definitions, one that comes from MainPage.g.h:

partial ref class MainPage : public ::Windows::UI::Xaml::Controls::Page,
    public ::Windows::UI::Xaml::Markup::IComponentConnector
{
public:
    void InitializeComponent();
    virtual void Connect(int connectionId, ::Platform::Object^ target);
private:
    bool _contentLoaded;
};

And one that comes from MainPage.xaml.h:

public ref class MainPage sealed
{
public:
protected:
    virtual void OnNavigatedTo(Windows::UI::Xaml::Navigation::NavigationEventArgs^ e) override;
};

This issue is reconciled via a new language extension: partial classes. The compiler parsing of partial classes is actually fairly straightforward. First, all partial definitions for a class must be within one translation unit. Second, all class definitions must be marked with the keyword partial except for the very last definition (sometimes referred to as the 'final' definition). During parsing, the partial definitions are deferred by the compiler until the final definition is seen, at which point all of the partial definitions (along with the final definition) are combined together and parsed as one definition. This feature is what enables both the XAML-compiler-generated file MainPage.g.h and the user-editable file MainPage.xaml.h to contribute to the definition of the MainPage class.

Compilation

For compilation, MainPage.g.h is included in MainPage.xaml.h, which is further included in MainPage.xaml.cpp. These files are compiled by the C++ compiler to produce MainPage.obj. (This compilation is represented by the red lines in the above diagram.) MainPage.obj, along with the other obj files that are available at this stage, are passed through the linker with the switch /WINMD:ONLY to generate the Windows Metadata (WinMD) file for the project. This process is denoted in the diagram by the orange line.
At this stage we are not linking the final executable, only producing the WinMD file, because MainPage.obj still contains some unresolved externals for the MainPage class, namely any functions which are defined in MainPage.g.h (typically the InitializeComponent and Connect functions). These definitions were generated by the XAML compiler and placed into MainPage.g.hpp, which will be compiled at a later stage.

MainPage.g.hpp, along with the *.g.hpp files for the other XAML files in the project, will be included in a file called XamlTypeInfo.g.cpp. This is a build performance optimization: these various .hpp files do not need to be compiled separately but can be built as one translation unit along with XamlTypeInfo.g.cpp, reducing the number of compiler invocations required to build the project.

Data Binding and XamlTypeInfo

Data binding is a key feature of XAML architecture, and enables advanced design patterns such as MVVM. C++ fully supports data binding; however, in order for the XAML architecture to perform data binding, it needs to be able to take the string representation of a field (such as "FullName") and turn that into a property getter call against an object. In the managed world, this can be accomplished with reflection, but native C++ does not have a built-in reflection model. Instead, the XAML compiler (which is itself a .NET application) loads the WinMD file for the project, reflects upon it, and generates C++ source that ends up in the XamlTypeInfo.g.cpp file. It will generate the necessary data binding source for any public class marked with the Bindable attribute. It may be instructive to look at the definition of a data-bindable class and see what source is generated to enable the data binding to succeed.
Here is a simple bindable class definition:

    [Windows::UI::Xaml::Data::Bindable]
    public ref class SampleBindableClass sealed {
    public:
        property Platform::String^ FullName;
    };

When this is compiled, as the class definition is public, it will end up in the WinMD file. This WinMD is processed by the XAML compiler, which adds source to two important functions within XamlTypeInfo.g.cpp: CreateXamlType and CreateXamlMember.

The source added to CreateXamlType generates basic type information for the SampleBindableClass type, provides an Activator (a function that can create an instance of the class) and enumerates the members of the class:

    if (typeName == L"BlogDemoApp.SampleBindableClass")
    {
        XamlUserType^ userType = ref new XamlUserType(this, typeName, GetXamlTypeByName(L"Object"));
        userType->KindOfType = ::Windows::UI::Xaml::Interop::TypeKind::Custom;
        userType->Activator = []() -> Platform::Object^
        {
            return ref new ::BlogDemoApp::SampleBindableClass();
        };
        userType->AddMemberName(L"FullName");
        userType->SetIsBindable();
        return userType;
    }

Note how a lambda is used to adapt the call to ref new (which will return a SampleBindableClass^) into the Activator function (which always returns an Object^).

XamlMember stores two function pointers: Getter and Setter. These function pointers are defined against the base type Object^ (which all WinRT and fundamental types can convert to/from). A XamlUserType stores a map<String^, XamlMember^>; when data binding requires a getter or setter to be called, the appropriate XamlMember can be found in the map and its associated Getter or Setter function pointer can be invoked. The source added to CreateXamlMember initializes these Getter and Setter function pointers for each property.
These function pointers always have a parameter of type Object^ (the instance of the class to get from or set to) and either a return value of type Object^ (in the case of a getter) or a second parameter of type Object^ (for setters).

    if (longMemberName == L"BlogDemoApp.SampleBindableClass.FullName")
    {
        XamlMember^ xamlMember = ref new XamlMember(this, L"FullName", L"String");
        xamlMember->Getter = [](Object^ instance) -> Object^
        {
            auto that = (::BlogDemoApp::SampleBindableClass^)instance;
            return that->FullName;
        };
        xamlMember->Setter = [](Object^ instance, Object^ value) -> void
        {
            auto that = (::BlogDemoApp::SampleBindableClass^)instance;
            that->FullName = (::Platform::String^)value;
        };
        return xamlMember;
    }

The two lambdas defined here use the lambda 'decay to pointer' functionality to bind to the Getter and Setter fields. These function pointers can then be called by the data-binding infrastructure, passing in an object instance, in order to get or set a property based only on its name. Within the lambdas, the generated code adds the proper type casts in order to marshal to/from the actual types.
For additional info, I recommend looking at this walkthrough, which will guide you through building a fully-featured Windows Store Application in C++/XAML from the ground up. The Microsoft Patterns and Practices team has also developed a large application which demonstrates some best practices when developing Windows Store Applications in C++: project Hilo. The sources and documentation for this project can be found at. . Join the conversationAdd Comment. And I just wonder when people will start REALLY see and call things by their real names? What extension? Extension is supposed to extend. What MS did is MUTATION. And you know what? I have nothing against people trying to do new stuff. But for the pit sake don't piss on my neck and tell me that's raining! What has C++/CX to do with C++? NO multiple inheritance, hats instead of pointers, this people, is MUTATION not extension smallmountain0705 or is it small_mountain_0705 or any doppleganger, Who do you believe, the guy trying to pin you to shove a touchy-feely thing down your throat, or some guy you never heard of but who makes perfect sense out of things that are really to stupid to be believed in the first place? On this one, I'm going to believe Sinofsky until there is evidence that Microsoft has changed their minds. After lengthy, in-depth investigative work, our crack-smoking team, freshly picked from the smokiest dorm room here, came up with this blogs.msdn.com/…/listing-your-desktop-app-in-the-store.aspx Beware the confusing word, metro, used throughout. Read it all before coming back, if you must – if your dare. Back? Regarding this online store, describe what you think you gain, what you lose, or if neither of those apply to you. Follow that with your expectation on the life of this store. Will it exist for more than one year? More that two? Does it matter? How to differentiate your app(s) from the other there? Adveristing makes no sense; the zero-cost torch app is right nexc to your 10-man-year app. 
And the totch app is FREE! and you expect someone to pay for yours? Are you nuts! It is a race to the bottom. Been there. Seen that. Departed. Yes, I think you are right – that there is a way to get desktop apps linked-to your store, from MS's new, online store. I suppose it's better to be listed there, even if amongh a million others, than not to. @Krzysztof Kawa It isn't really crippling, since WinRT is a bunch of COM classes at its core. So all these other compilers have to be able to do is access COM (and after it being used for a long long long time, I would be surprised if other compilers weren't COM aware). MS once sold VC by the 15 lb box for $99 US, and free to many (it would arrive at the door). This drove all the other windows' dev tools out. Now MS sells something similar for $15,000 US. Hahhaahha hehheheh hahahahaa hooohoo! @Crescens2k You miss the point. gcc and others obviously do work with COM but that is not the issue. They can NOT compile C++/CX code (falsely advertised here as C++) without some nasty preprocessor tricks to get rid of all the "partial", "ref" etc. nonsense sprinkled all over the place. I'm talking about plain syntax not the higher level abstraction the COM is. I know there is WRL to use ISO C++ but honestly who would be insane enough to torture himself with its syntax in a project that is more than few source files long?
https://blogs.msdn.microsoft.com/vcblog/2012/08/24/connecting-c-and-xaml/
In the personal project for the Censor Dispenser (), the exercise asks if your censor function can handle punctuation elements. After doing the initial string split on spaces, I thought of iterating through each element in the resulting list to look for exclamation points, then commas, then periods. The problem is that if I iterate and use split again when it finds these things, it creates a nested list, whereas I'd need those strings to simply remain in the parent list for my function to work.

I have tried the following (at first just for exclamation marks, to see if I get the concept right):

    def heavy_censor(message):
        message2 = message.split(" ")
        message3 = []
        for word in message2:
            index = message2.index(word)
            if word.find("!") != -1 and len(message2[index]) > 1:
                message3 = message2[index].split("!")
                message2[index : index] = message3
                message2 = message2.remove(word)

But it returns an AttributeError saying a NoneType object has no such attribute, and I have no idea what this means as I've never seen this error before. Other attempts to iterate, splitting and adding to the parent list, have always just ended in infinite loops. I've looked through the Python documentation and googled, but I couldn't find another method of splitting a string within a list without creating a nested list at the newly split index.
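The AttributeError comes from the last line: list.remove() mutates the list in place and returns None, so `message2 = message2.remove(word)` rebinds message2 to None, and the next loop iteration fails. One way to split punctuation off without ever creating a nested list is to build a new flat list with extend(), or to tokenize in one regex pass; a sketch, with function names of my own choosing:

```python
import re

def split_exclamations(message):
    # Build a new flat list instead of splicing into the list being iterated.
    result = []
    for word in message.split(" "):
        if "!" in word and len(word) > 1:
            # split("!") returns a list; extend() adds its items one by one,
            # so no nested list ever appears in result.
            result.extend(part for part in word.split("!") if part)
        else:
            result.append(word)
    return result

def tokenize(message):
    # Alternative: one regex pass that keeps words and punctuation marks
    # as separate top-level items.
    return re.findall(r"\w+|[^\w\s]", message)

print(split_exclamations("stop! now"))   # ['stop', 'now']
print(tokenize("Hello, world!"))         # ['Hello', ',', 'world', '!']
```

The regex version is handy when you want the punctuation kept as its own tokens; the extend() version mirrors the original approach of discarding the split character.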
https://discuss.codecademy.com/t/how-to-split-punctuation-from-a-string-list-without-creating-a-nested-list/429499
iEventOutlet Struct Reference
[Event handling]

The iEventOutlet is the interface to an object that is provided by an event queue to every event plug when it registers itself.

    #include <iutil/event.h>

Detailed Description

The iEventOutlet is the interface to an object that is provided by an event queue to every event plug when it registers itself. Any event plug will interact with the event outlet to put events into the system queue and so on. The event queue is responsible for detecting potentially conflicting situations when several event plugs may generate an event from the same original event (e.g. a key press will cause several keydown events coming from several event source plugins). In this case the event sources are queried for how strong their "wish" to generate certain types of events is, and the one with the strongest "wish" wins. Then the respective outlet is set up such that any unwanted events coming from 'disqualified' event plugs are discarded.

Definition at line 443 of file event.h.

Member Function Documentation

Put a broadcast event into the event queue. This is a generalized way to put any broadcast event into the system event queue. The command code may be used to tell the user application that the application's focused state has changed (e.g. csevFocusGained), that a graphics context has been resized (e.g. csevCanvasResize), that it has been closed (e.g. csevCanvasClose), to finish the application immediately (e.g. csevQuit) and so on.

Implemented in csEventOutlet.

Create an event object on behalf of the event queue. A general function for generating virtually any type of event. Since all events for a particular event queue should be created from the same heap, you should first generate an event object (through the CreateEvent method), then fill it however you like, and finally insert it into the event queue with the Post() method.

Implemented in csEventOutlet.
This is a special routine which is called, for example, when the application is going to be suspended (suspended means "frozen"; that is, the application is forced not to run for some time). This happens for example when the user switches away from a full-screen application on any OS with the MGL canvas driver, or presses <Pause> with the OS/2 DIVE driver, or in any other driver that supports forced pausing of applications. This generates a 'normal' broadcast event with the given command code; the crucial difference is that the event is delivered to all clients *immediately*. The reason is that the application is frozen right after returning from this routine, thus it will get the next chance to process any events only after it is resumed (which is kind of too late to process this kind of event).

Implemented in csEventOutlet.

Put a joystick event into the event queue. iNumber is the joystick number (from 0 to CS_MAX_JOYSTICK_COUNT-1). If iButton == 0, this is a joystick move event and iDown is ignored. numAxes can be from 1 to CS_MAX_JOYSTICK_AXES. Otherwise a joystick up/down event is generated. iButton can be from 1 to CS_MAX_JOYSTICK_BUTTONS (or 0 for a joystick move event).

Implemented in csEventOutlet.

Put a keyboard event into the event queue. Note that codeRaw is the key code, either the alphanumeric symbol that is emitted by the given key when no shift keys/modes are active (e.g. 'a', 'b', '.', '/' and so on) or one of the CSKEY_XXX values (with value above 255), and the codeCooked parameter is the translated key, after applying all modeshift keys. If you pass 0 as codeCooked, a synthesized value is created based upon codeRaw using a simple internal translation table that takes care of Control/Shift/Alt for English characters. However, in general, it is best if the entity posting the event can provide both codes.

Implemented in csEventOutlet.

Put a mouse event into the event queue. If iButton == 0, this is a mouse motion event, and the iDown argument is ignored.
Otherwise a mousedown or mouseup event is generated at the respective location. iButton can be in the range from 1 to CS_MAX_MOUSE_BUTTONS (or 0 for a mouse move event).

Implemented in csEventOutlet.

Put a previously created event into the system event queue.

- Remarks: The event you pass to this method should be heap-allocated rather than stack-allocated, since the event will be queued for later dispatch and because receivers of the event may claim their own references to it. The typical way to create a heap-allocated event is with iEventQueue::CreateEvent(), iEventOutlet::CreateEvent(), or via the C++ 'new' operator. The CreateEvent() methods have the benefit that they pool "dead" events and re-issue them to you when needed, thus they are quite efficient.

Implemented in csEventOutlet.

The documentation for this struct was generated from the following file:

Generated for Crystal Space 2.0 by doxygen 1.6.1
http://www.crystalspace3d.org/docs/online/api-2.0/structiEventOutlet.html
ASP.NET MVC #14 – How the ASP.NET MVC controller works (internally in the .NET Framework), or how is the Controller class implemented?

Hi Geeks,

As we have seen in the post "Controllers in detail", the controller is the most important part of the MVC architecture; it works as a mediator between views and models. I have made one simple class diagram which explains how a normal class is converted into a controller, or how controllers work implicitly in the .NET Framework.

I. For a class to be a controller, at a minimum it should implement the IController interface, as shown in the 1) red box in the diagram, and by convention the name of the class must end with the suffix Controller.

    public interface IController
    {
        void Execute(RequestContext requestContext);
    }

Example: a simple implementation of the IController interface:

    using System.Web.Mvc;
    using System.Web.Routing;

    public class SimpleController : IController
    {
        public void Execute(RequestContext requestContext)
        {
            var response = requestContext.HttpContext.Response;
            response.Write("<h1>Hello World!</h1>");
        }
    }

It's a simple process really: when a request comes in, the routing system identifies a controller and calls its Execute method. When we put the name of the controller in the URL, like http://localhost/Simple, we get the "Hello World!" heading as output.

NOTE: Implementing IController is pretty easy, as you've seen, but really all it's doing is providing a facility for routing to find your controller and call Execute. This is the most basic hook into the system that you could ask for, but overall it provides little value to the controller you're writing.

II. As developers we always love to work with a rich API (application programming interface). As we can see in the 2) red box, a class can also inherit from the Controller class to be a controller.
This Controller class is still very lightweight and allows developers to provide extremely customized implementations for their own controllers, while benefiting from the action-filter infrastructure in ASP.NET MVC. The Controller class inherits from the ControllerBase class, which implements IController. ControllerBase is an abstract base class that layers a bit more API surface on top of the IController interface; it provides the TempData and ViewData properties.

The Controller class also implements the interfaces IActionFilter, IAuthorizationFilter, IDisposable, IExceptionFilter and IResultFilter. These interfaces support much other functionality, such as exception handling and authorization.

NOTE: We saw two ways to use a class as a controller, but the standard approach to writing a controller is to have it inherit from the System.Web.Mvc.Controller abstract base class, which extends the ControllerBase base class. The Controller class is intended to serve as the base class for all controllers, as it provides a lot of nice behaviours to controllers that derive from it.

I hope this removes all the queries related to controllers from your mind.

Thank You.
https://microsoftmentalist.wordpress.com/2011/09/14/asp-net-mvc-14-how-asp-net-mvc-controller-works-internally-in-dot-net-framework-or-how-controller-class-is-implemented/
nl_langinfo(CODESET)

On my systems (various Linux with LANG=de[_DE[@euro]]) this gives ANSI_X3.4-1968 (aka US-ASCII), and this is no known encoding. So browsers like Mozilla that honour these settings switch the page encoding to US-ASCII, which is not appropriate for e.g. commit comments with umlauts... I don't know why nl_langinfo() gives this result :-(

Suggestion: make the encoding a configurable project variable, so that someone with a Hebrew locale setting can have an English project with properly displayed comments.

Unfortunately, this doesn't work for me. I am not able to get any other encoding from nl_langinfo than ANSI_X3.4-1968. Here is the small program I have tested with:

    #include <stdio.h>
    #include <langinfo.h>

    int main (int argc, char *argv[])
    {
        printf ("nl_langinfo(CODESET) returns: %s\n", nl_langinfo(CODESET));
        return 0;
    }

And though 'locale -m' shows "ISO-8859-1" as an available locale, with LC_ALL=de_DE.ISO-8859-1 langinfo still shows the well-known "ANSI_X3.4-1968" :-(((

It would be nice to have the option of having the changes to commit logs in cvstrac propagate into the repository, using this command. An example "oops" scenario: check in a bunch of files including one or more that you did not intend to check in, undo that, then later check in a change to one of the files whose version was removed:

    cvs ci -m "checkin bunch of stuff"   # myfile.c version is now 1.14
    cvs admin -o1.14: myfile.c           # oops! remove myfile.c latest version
    cvs ci -m "file update"              # later check in a change to that file
                                         # myfile.c version is now 1.14

We see this in cvstrac:

1. On the cvstrac browse-file page the comment for the version (myfile.c 1.14) is the OLD comment.
2. On the timeline and chngview the latest commit on that file is not seen (if the file is checked in by itself).
3.
On the chngview of the old commit, the file version (that was removed for that checkin) is seen, and the diff is for the new commit.

Rather than using "cvs admin" to back out bogus changes (which doesn't exactly sound like a Best Practice), I'd recommend checking in a merge from the last known good revision:

    cvs update -j 1.14 -j 1.13 myfile.c
    cvs commit -m "backing out changes" myfile.c

Besides working correctly with CVSTrac, this keeps a record that someone on the development team screwed up a commit.

Thank you for the pointer to the proper way of reverting changes. :)

A cvstrac setup reconstruct database will get all cvs file versions and comments back correctly in the database, but of course may well break links between tickets and checkins. Subsequent commits to files which were "cvs admin -o"ed this way do show up correctly.
Other sites might want to completely disable the ability to change commit messages in CVS and/or CVSTrac. It really comes down to how people use CVS and CVSTrac. However, such a button is not going to happen. The idea of a single button which does large-scale non-revertable copies from the CVSTrac database to the CVS repository... No. I'd like to think CVSTrac has a decent history of not trashing peoples "crown jewels", and it'd be best to keep it that way. If someone really wants this, we do offer external tools. As for going the other way, it just doesn't make sense for users the "correct" there comments in a user friendly way (again, CVSTrac is a friendly way to interact with cvs, plus more . . . I don't know why you'd want people to have to use tons of other tools that overlap in functionality just) . . . and then find that their changelog in their (say) automated build system still has the old comments. Without this change, one can QUICKLY find their files associated with change sets using CVSTrac (again, the point) . . . but then still have to meander around looking for the files in their (say) wincvs tool and retype the correction. Since CVSTrac sucks out the comments on project initialization (in the setup), it's obvious that the intent is for the comments to stay in sync (not to diverge). If folks don't want CVSTrac to mess with their "Crown Jewels", (a) they don't have to mash the button (as a 'setup' user), or (b) they don't have to even use CVSTrac. The latter is the case if they don't TRUST the tool. Oh, and guess what . . . they are less likely to trust the tool if "Edit comment" doesn't actually edit the comment. EVERY person I've EVER shown this tool to . . . all the way from the beginning . . . loses a little faith in that command once they find out that there's no integration in that command. Again, they call it a flaky tool, not a design decision. I make fun of ClearQuest for its poor integration with actual code. 
I call it a lie when folks who use ClearQuest say that they have "integration". I tell them to look at CVSTrac. . . . that it's way better. It's still better, but the mentality I'm seeing in that argument seriously worries me. I'd like to hear from the author on his thoughts. The founder of this product was a genius. I can't argue with that. Since CVSTrac sucks out the comments on project initialization (in the setup), it's obvious that the intent is for the comments to stay in sync (not to diverge). I'd have a hard time agreeing with that assumption. For one thing, evidence is against you. Checkin message editting has been in CVSTrac since [1] and synchronization was never addressed (in either direction). The nearest thing to synchronization is my message sync hack using external tools. I'm not against checkin synchronization (I'm not just the author of cvstrac_chngsync.pl, I'm also an enthusiastic user). I am not in favour of doing in on a large-scale, nor am I keen on doing it "silently". If it came down to having a "[ ] write message change to CVS repository" flag in the chngedit form... I could live with that. chngedit I'd like to hear from the author on his thoughts. I don't think he's exactly hiding his thoughts on SCM integration with CVSTrac-like tools. I suppose I could upload some screenshots or something of what I use at work. void common_about(void){ static const char *azLink[] = { "index", "Home", 0 }; login_check_credentials(); common_header("About This Server", azLink); this should be changed to: void common_about(void){ login_check_credentials(); common_add_menu_item("index", "Home"); <-- added common_header("About This Server", 0); <-- changed in the original version, the "Home" link is not displayed in the header of the page. 
nees to be changed to: common_header("About This Server", 0); invalid reference to removed variable 'azLink' <html> <head> <title>%N: %T</title> <link rel="stylesheet" type="text/css" href="/~progers/cvstrac/diff2html.css"> </head> The classes specified in the html don't have great names (they use generic style names like 'box1' rather than 'menuBar' that would be easier to customize) and not all elements have a class, but it is possible to control the document with some tweaking. I've attached a copy of my stylesheet for reference. It would be cool if cvstrac had more descriptive class names and, and if it hosted it's own internal style sheet (but this could be perceived as needless embellishment since you can supply an external one). drh - I'm a happy lackey to do this, thugh I'm not a competent c hacker, I could modify the html and create default css. email root@turingstudio.com. SYMPTOM There is only 1 multi line field in the screen 'Create a new Ticket' SOLUTION In order to keep descriptions simple I'd like to have a set of different multi line fields: - Symptom - (Requested)Solution - Steps to Reproduce COMMENT Many people who report an issue make it 1 long story. Using separate fields for each part will force them to clearly state their point. The single line possibilities in 'User-Defined Fields' are not an option. I like the software, and I like the idea of writing native CGI applications. So, I started digging in the cvstrac sources to get inspiration. I understand individual lines, but I fail to understand the big picture. What is the guiding principle behind the subdivision in separate files. Usually, you find simple guiding principles, such as: one source file corresponds to one web page (typical in php, asp, asp.net, ...) 
or corresponds to one abstract data type (such as one table and its related methods, or one memory struct and its related methods, ...), or is otherwise organised around another particular guiding principle, which tremendously facilitates locating particular logic. It turns a source tree into an addressable grid, where you can almost calculate where some logic must be located, and where you can make changes, if needed. What is the guiding principle underlying cvstrac? /* ** WEBPAGE: /example */ Both of the above tricks are accomplished by preprocessing the source code before it is handed to the C compiler. Se agrego una linea a la descripcion The thing is that CVStrac's history processing ignores module creation, including "import". So if you just do a "cvs import" on the module and then nothing else, the module won't be seen. This isn't the easiest thing to fix. When reading the history file, for every 'O' entry encountered you'd need to do a recursive scan of the directory (and Attic) and extract the initial version info of each encountered file, treating each one as an 'A' or maybe 'M' action. It can be done, but it's nasty. It also assumes that later versions of CVS don't create 'A' entries when importing. svntrac supports only import of users, and unlike cvstrac , it can't update svnserve users file based on the contents of svntrac db. This limitation is imposed by the fact that passwords are stored encrypted in svntrac db, while they are stored as plain text in svnserve users file. Only way, AFAICT, to support it would be to store plain text passwords in db, but I guess this goes against cvstrac design goals? Though it would be nice to have this "user export" feature since AFAIK there is no way to manage svnserve users other then to edit users file directly. Only way, AFAICT, to support it would be to store plain text passwords in db, but I guess this goes against cvstrac design goals? Not storing plain text passwords in a database is just common sense. 
That svnserve is doing otherwise is simply poor design. I could understand if they used a different hash for storage, but to do nothing at all?!? add <userid> <passwd> perms <userid> <CVSTrac perms> passwd <userid> <newpasswd> remove <userid> This would also be convenient for CVS users like myself who use CVS-over-ssh rather than pserver. Would be even better if this could be a drop down and that we could set some marketing version somewhere in the setup I have set up "Mantis" ( as a reasonable substitute, but I have an existing cvstrac database I would very much like to convert to "Mantis" so I don't lose any of my data. My first attempt at conversion was not very successful. Does anyone have any pointers or (hopefully) a script to convert the format? Thank you. I think I saw something on the Trac mailing list about a script to convert from CVSTrac to Trac. How well this works for you, of course, depends on how close the Mantis and Trac database schemas are. I know nothing about Mantis internals, so I can't really say anything else. cvstrac.exe cgi Does someone run cvstrac.exe as a CGI Apache (Windows build) program? I'd suggest visiting a CVSTracNT support forum, if they've still got them. You could also try building CvstracOnWindows from the latest CVSTrac source yourself and see if you still have problems. Date: 2004-Apr-20 17:40:23 (local) 2004-Apr-20 21:40:23 (UTC) [snip] --- LDAP.pm 2004/03/22 17:02:53 1.45 +++ LDAP.pm 2004/04/21 21:40:23 1.45.2.1 Notice the 1 day difference between the date reported by chngview 2004-Apr-20 (UTC) and diff 2004/04/21, same problem shows in timeline, all dates are off by 1. I am pretty sure this is not a bug but something to do with our configuration, but can't figure out what and where to configure this. Thanks for any insight. ---------------------------- revision 1.3 date: 2004/05/11 13:03:14; author: [...]; state: Exp; lines: +14 -13 [...] 
---------------------------- in the output of 'cvs log [...]', which is correct, but cvstrac incorrectly displays: 2004-May-10 16:50 1.4 Check-in [647]: [...] (diff) 2004-May-10 14:03 1.3 Check-in [642]: [...] (diff) 2003-Nov-19 17:09 1.2 Check-in [384]: [...] (diff) [...] around and for the change in question for the same file. The first samples above is reporting in UTC and the second is BST (UTC+0100) admittedly, but that only accounts for on hour's difference. The month day field is still off by one, so this does not seem to be timezone related, given the previous commentor's experience. Should be fixed in 1.1.3 and higher. To fix the dates on existing records, the procedure would appear to be: $ cp project.db project.db.bak sqlite> select strftime('%s','2004-03-01') from chng limit 1; 1078099200 $ sqlite project.db sqlite> update chng set date=date+(60*60*24) where date > 1078099200; --Derek I just updated from 1.1.2 to 1.1.4 and applied the correction to our database. The previous version showed a one-off error in the calculation (we had check-ins on Feb. 29th that still were wrong). Actually correcting the date for date > 2004-03-01 gives the right result. -- Edelhard Becker <ebecker@software-manufaktur.de> For example look at this report: Looks like an unititalized variable. It would be nice to have configuration option for output encoding Expected: I understand that "Make Default" could end up saving today's date and making the feature of dubious use for those who (if there are any) don't want to see future dates in the timeline, so I would propose a checkbox next to "Make Default" that says "include date in saved default", checked "off" by default. I think this single change would improve CVSTrac's overall usefulness by about 20%. Seriously. Trying to understand the supplied patch in the attachments it looks like a fixed date will be saved in the config table. 
Just considering what this means: The administrator could specify what timerange to use and thus which milestones would be visible. That's okay. And if he defines a new milestone he has to decide again if the timerange should be extended to include the new milestone as well. So the workflow would be to specify a new milestone and additionally check if the date is still inside the default timerange for the timeline. A similar two step workflow would be needed if the target date of a milestone is moved to a later date (a quite typical operation in project planning). Of course if the timeline end day is something like 2099-12-31 the second step would not be required within the next 90 years. (On second thought I have to adjust this to 2038-01-19, because the date is stored as seconds since epoch.) As I had the same feature request (but did not search the tickets here) I used a different workaround with a look ahead period that is read from the config file. Currently I simply read the number of days that the timeline should be extended to the future. (Configuration changes need direct access to the sqlite backend database - that's okay for me). The change is in the timeline_page() function after the line: i = atoi(PD("d","0")); if( i>0 ) days = i; begin = end - 3600*24*days; end += (atoi(db_config("timeline_future_days","0")) + 1)*3600*24) - 1; The days that we look back into history are still the range that is defined in the input field of the timeline form: begin + P("d") + timeline_future_days = end In other words this would say "... and include the milestones that are due in the next N days in the timeline." Acutally it would include anything within the next N days in the timeline. But the only thing I can think of that might have a future timestamp are milestones. -- Rolf end += (atoi(db_config("timeline_future_days","0")) + 1)*3600*24 - 1; this not really working as I thought. 
The modified end date (+N days) is not only used in the query statement but also put in the "e"nd input field of the timeline form. If the form is resent, that field will be evaluated and used as the starting point for the end date ... and another N days will be added. With each request the range is shifted another N days into the future.

timeline.c

The results are good if the timeline is called from the navigation menu without parameters: 120 days reaching back in history and another 120 days into the future. The problem comes in when you look at the timeline and decide that you need other states of the tickets to be shown or hidden. If you check/uncheck the options and submit the timeline form, the modified future date &e=2011-Jan-13 will be sent as a request parameter and the new range will be:

2011-01-13 -120 days (=today) ... 2011-01-13 (or 2011-01-13 +120 ?)

And it shows only the changes from today (plus the milestones that are due in the next 120 days -- in my case there are none in this range).

Not sure how this can be handled properly. I had some idea in mind to keep the current date in the "e"nd input field and just silently extend the query to include N more days. The N days would then be a system-wide parameter coming from the config table and not something that the normal user can decide. Actually, if symmetric ranges are okay, your solution to get the N value from the "d" request parameter (or from the cookie) is very appealing.

Note also the different rendering of the Milestone comment in the red separating box and in the normal timeline content. As long as you do not use more (complex) markup it is still okay though.

Or maybe I should try to figure out the other ways to achieve it that you have mentioned...

> it also makes it difficult to change settings and then provide the resulting
> URL to someone else.

Would that be working currently? Most of the settings depend on the cookie? Or am I wrong here?
Typically I only exchange URLs to reports or tickets, but so far I never did this for the timeline.

> I would be the first one to scream out loud if you'd
> remove the input field.

Oh, no, that's not what I meant. The problem is that the input field is pre-filled with the present date. Then when you submit a change to the form, that date becomes the end time. The only reason to have that pre-filled value is to provide a "hint" as to the expected format. So no, the input field stays. The pre-filled end date, however, can go.

> Most of the settings depend on the cookie? Or am I wrong here?

It's both. The settings go into a cookie for later visits, but when you're actively changing things they also end up in the URL. Exchanging timeline URLs isn't something I commonly do either, which is actually part of the problem. Every time I do copy a timeline URL I seem to forget to remove the end time from it...

> The only reason to have that pre-filled value is to provide a "hint" as to the expected format.

Thanks for this information. It showed me the right direction: I use a boolean variable to check whether the end point of the timeline was explicitly specified by the user or a default is used. The behaviour for a user-specified range is the same as before, but for a default time the range is extended into the future and the "e" text input field is left empty. So I have something like the following in the form:

timeline of [90 ] days going backwards from today [ ] peeking into future till 2011-Feb-27

or

timeline of [90 ] days going backwards from today [2010-Sep-20]

The "into future" hint can be visually modified/hidden via CSS (typically a smaller and lighter font would be used for it). On second thought it might be even better to make it a tooltip (title attribute) for the word "today". I still use my "administrator only" approach, because the alternative of having the same number of days reaching backwards and forward didn't suit me.
See the attached patch "ticket-504_3.patch"

#ifdef CVSTRAC_WINDOWS
/* Microsoft IIS doesn't define REQUEST_URI variable,

Assertion failed: zPath[0]==0 || zPath[0]=='/', file main_.c, line 464
URI: /proj.trac/dirview
SCRIPT_NAME: /proj.trac
PATH_INFO: /dirview

The while(zPath && zScript && *zPath && *zScript && *zPath == *zScript) strips the '/' off the front of the path. The assert fails. I think I'll need to clean it up such that there's actually an explicit check for IIS, and most of the CVSTRAC_WINDOWS trickery will only happen with IIS. Won't get to that until tonight.

I changed:

zScript = getenv("SCRIPT_NAME");
/* If PATH_INFO & SCRIPT_NAME are the same, it means we got an
** IIS wildcard ScriptMap, therefore we allow only one-level wildcard mappings.
** Otherwise we remove the common part of zPath and zScript.
*/
if( zScript ){

to

zScript = getenv("SCRIPT_NAME");
/* If PATH_INFO & SCRIPT_NAME are the same, it means we got an
** IIS wildcard ScriptMap, therefore we allow only one-level wildcard mappings.
** Otherwise we remove the common part of zPath and zScript.
*/
if( g.isIIS == 1 && zScript ){

and it works for me. -e

Remark: Yes, I know how to do it from the command line.

Yip. Cookies enabled. I just tried it under netscape as well.
1) Login
2) Wiki
3) edit (or was it note, edit?)
4) Now logged in as anon., not Andre
Same thing happens with making or editing tickets.

ASO - Done. Captured output attached. If this output is not sufficient, please e-mail me: andre@csse.monash.edu.au and thanks for the help drh. :)

I could certainly use this sort of thing... It would help if the "enum" table were extended to allow the assignment of labels to those levels. There is no need to change the actual coding - the 1-5 levels are more than enough. The enums should be edited by the 'setup' user, and should be used for ticket entry and editing.

(Note that the external diff config doesn't make clear that it only applies to the cvs part and not the wiki part.)
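To see why the assertion fires, here is a Python re-implementation of the prefix-stripping loop (a sketch for illustration, not cvstrac's actual C code): when PATH_INFO and SCRIPT_NAME share only the leading '/', the loop consumes that slash and the remaining path no longer starts with '/'.

```python
def strip_common_prefix(path, script):
    """Advance over the common prefix of PATH_INFO and SCRIPT_NAME,
    mirroring the while() loop quoted above, and return what is left
    of the path."""
    i = 0
    while i < len(path) and i < len(script) and path[i] == script[i]:
        i += 1
    return path[i:]

# With the values from the failing IIS request, PATH_INFO ("/dirview")
# and SCRIPT_NAME ("/proj.trac") share only the leading '/', so the
# slash gets consumed and the assert zPath[0]==0 || zPath[0]=='/' fails.
leftover = strip_common_prefix("/dirview", "/proj.trac")
```

In the IIS wildcard-mapping case, where PATH_INFO equals SCRIPT_NAME, the loop consumes everything and the empty result still satisfies the assertion; that is the case the proposed g.isIIS check is meant to isolate.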
(test)

I have a similar problem -- I don't see anything at all in the browser since I have just imported a project and all files are at version 1.1. It seems that CVS import does not add the appropriate entries to the history file. Is there a way to kick-start CVSTrac such that it will find the base versions of the project files, e.g., by faking history file `A' entries? --Anselm Lingnau <anselm.lingnau@linupfront.de>

The problem doesn't appear when using konqueror. Perhaps this is related to the document encoding information sent by the built-in webserver? I found a workaround: in Mozilla, change the codepage from US-ASCII to ISO-8859-1 before submitting changes.

I only ever use cvs import to check in new source trees. That's why cvstrac didn't update the tables (if this is right). The problem is that using "cvs add" takes time, can only be used with a "find . -print -exec cvs add {} \;" to automate many file checkins, and "cvs add" is not even mentioned as the way to do initial checkins of complete source trees. There are two ideas for how to "fix" this problem:

1. Rename the "Browse - Browse the CVS repository tree." menu to "Browse - Browse changed files in the repository tree.", but this is not the way I want to work with cvstrac.
2. Find a way to "reconstruct" the filechng (and other affected tables), either as a setup menu entry or as "realtime-sync-filetree-on-browse", which seems to be useful...

Is there another way to fix this?

cvs rtag tagname modname doesn't work; did I use the rtag command wrong?

Bye, Bjoern

cvs rls -R -d /

Bye, Bjoern

My proposal is to make CVStrac layout and coloring fully CSS-based. In this fashion, a report table would have the following tagging:

<table class="reportTable" ...>
<tr class="reportHeader"><td class="reportHeader ticketNumber">...
<tr class="reportRow activeTicket">...<td class="activeTicket ticketSubsys">
...etc.

Note the use of multiple classes to enable full customization.
You can, for instance, say:

.reportTable { font-size: 8pt; }
.reportHeader { font-weight: bold; text-align: center; }
.reportRow { padding: 2px; }
.ticketNumber { font-family: Courier; }
.ticketSubsys { font-style: italic; }
.activeTicket { background-color: orange; }

This would make the entire table 8pt size, the report headers bold and centered, give each report row extra padding, and give distinct coloring and fonts to both active ticket rows and specific columns. I might find the time to make a patch, but I'd like to have some feedback first as to whether this would be incorporated. RuiCarmo

What happens when you specify all the different classes and just don't reference them in the stylesheet? In other words, can we output "good" CSS-compliant HTML and work around browser stupidity in the stylesheets (or in the future, when IE catches up with standards)?

Ideally, CVStrac should have a layout like so:

<body>
<div id="header">$title</div>
<div id="navigation">status, search, links as a UL (see "a list apart" for amazing ways to deal with navigation as bulleted lists)</div>
<div id="content">timeline and so on</div>
<div id="footer">$version</div>
</body>

and a default stylesheet somewhat like:

body { font-family: Arial, Helvetica, sans-serif; font-size: 10pt; ... }
#header { width: 100%; background-color: lightblue; }
#content, #navigation { float: left; }
#header H1 { font-family: Times; }
#navigation li a { color: blue; }

...that would take care of the fixed elements. Then you'd use classes for the ones that may or may not appear, or to tweak the visual aspect based on status:

.milestone { padding: 2px; border: 1px solid blue; }
.open { color: red; }
.closed { color: grey; }

...etc. I can try to think about it a bit more later, but this approach would remove practically all FONT tags and require a lot of changes to the way H1, H2, etc. are used... some things specified in HTML actually cause trouble when attempting to override them via CSS... Ah.
So you're really better off stripping down to minimal HTML rather than just tacking class="foo" or id="bar" onto already existing stuff?

> CVStrac should have a layout like so:

Needs an "action" section (see [352]) and maybe a "whoami". Otherwise it captures the page structure quite nicely.

> see "a list apart" for amazing ways to deal with navigation as bulleted lists

Sweet. I won't pretend to understand the CSS, but if they can get that sort of nav bar from just straight <li> tags, I'm up for it.

I guess the question to ask is: what's the best approach? Obviously there's a need for infrastructure (i.e. a way to set up the CSS and download it), and the overall page layout needs attention (<DIV>, etc). Is this something where we can stop there and have something "decent" (as in functional and not uglier than now) which we can incrementally clean up, or are we talking about "all or nothing"?

> are we talking about "all or nothing"?

It's not really all that useful if you can just change the header and menu with it, and not the timeline, for example. So I guess it would have to be done in one go. Between two releases of cvstrac, that is, and that doesn't have to be a short time at all : )

Browsers cache CSS, so it is resent to the browser only once in a while. For that reason I'm for a "global" CSS, that is, one CSS for the whole of cvstrac. And even that would probably be less than 20kB for the default CSS.

> Obviously there's a need for infrastructure

I'm not sure what exactly you mean by this, but as far as I can see there are no big problems here. Here's how I see that CSS infrastructure. Since cvstrac can't serve files from the local file system, this would obviously have to be kept in the db. We'd need a page to upload CSS to the db. We could also strip whitespace while we're at it, and that would reduce the CSS size by a good 10-20%. Sending this "file" from the db to the browser is already implemented for attachments, so no problem here.

I'm just wondering about BC. What is the goal of cvstrac in this regard?
How far back do we have to look? What about text browsers? Do we even care about them? I know I don't : )

Use of layers also depends on our BC strategy, I guess. Those things don't really work on older browsers and they were buggy for some time, and still are to some extent. What I'm trying to say is, it was easy to have good cross-browser compatibility with cvstrac's current use of HTML. But it would be pretty hard, if not impossible, to make cvstrac work with all the browsers that it does now once layers and CSS are introduced. Perhaps tables+CSS would be a good compromise, but it is a compromise.

Personally, I don't really care about BC since I use Firefox all the time. But if we know that there is some big user base with older browsers (from last century : ) out there, we might need to take them into consideration here. I'd really like to know where we stand in regard to this BC stuff.

"action" and "whoami" can be placed in content and nav, but I'll have to see where they are generated... I think a balance can be struck where a minimal CSS can be included (and edited via the current layout editing options) that looks just like the current markup regardless of browser - just about any browser can display background colors and bulleted lists on baseline CSS, and cvstrac doesn't need much more (except the reports table, for which I have no easy solution yet). Targeting Firefox (and keeping the fancy trickery down) would be a nice way to ensure it works properly on most browsers (including IE, Safari, etc.), and advanced customizations can simply attach JavaScript/extra CSS files to the Wiki and reference them from the HEAD tag. Worked for me :)

0. No arguments from me on the advantages of CSS.

1. "all or nothing" is a bad situation to get into, especially when talking about something that's going to touch as much code as a full CSS conversion. It's important that at any point in time, CVSTrac builds and runs as a useful product.
Hence any "all or nothing" conversion happens in a branch, and at some point there's a brutal merge back into HEAD. I'd prefer to take things incrementally, even if that may not be the most efficient approach.

2. That's basically what I mean by "infrastructure". A way to administer and serve up stylesheets. This needs to be planned out before anything else because I'm already seeing multiple paths. You're thinking "one stylesheet to rule them" (my preference too), RuiCarmo is talking about a minimalist CSS with optional add-on sheets in the header attached to Wiki pages (or maybe that's just how he hacks around it now?). I'd hope that a sane infrastructure wouldn't involve Wiki attachments, but my understanding of CSS is that supporting multiple stylesheets is a Good Thing.

3. My philosophy on browser compatibility (which may differ from drh's)... If someone is in a situation so insane that they can't run a CSS-compatible (for some definition of "compatible") browser, I'm guessing that being able to access CVSTrac isn't a high priority. People doing stuff with lynx and such are probably better off building tools directly around the SQLite database. Anyhow, if we're generating clean HTML I'd hope older browsers would degrade acceptably enough to be functional, although CVSTrac wouldn't look as good on them as it does now.

4. As to what kind of CSS... I'd rather see CVSTrac output the simplest HTML possible and move the complexity to the stylesheet. In other words, try to get rid of table magic. On the other hand, it'd be nice if the default stylesheet is simple enough that compatibility isn't really a big deal. It sounds like this is feasible, except for the reports.

5. No JavaScript in the defaults. Please.

6. Compact timeline view: Sweet, from a functional viewpoint. Have to watch the scalability, though. It seems to have included all the data for the drilldown in the one page.
Try that with, say, the browse view and those of us with 12000 files in our CVS repositories won't be too happy. For a timeline view, it would be okay where it doesn't bloat things, but my 30-day timeline is already cracking 100k...

I may have to write up a CvstracRoadMap at some point, ...

I was just about to propose a similar thing. I don't really have much time now, but it would be nice to have at least a few bigger things on that roadmap. A few quick remarks to your comments:

1. I meant that it would be best if we don't release any official versions with partial support for CSS. I agree that CVS code should always compile and run properly.

2. I don't really see any big advantage to having multiple stylesheets. RuiCarmo, would you care to elaborate as to why this is good?

3. Basically I agree. Though it would be nice to know where exactly our baseline is. IE4 + NS4?

4. Yes, getting rid of tables would be nice. But trying to emulate tables with <div>'s can be tricky, even with modern browsers. Though it is being done every day, so I guess we'll just have to try and see how good/bad it is. When I say try, I mean try it on a plain HTML file, of course. Integrating it into C code is the last thing to do here. Actually I'd like to have all pages as plain HTML first.

5. Agreed.

6. Since we agree that javascript is a no-no, I don't see how the above proposed compact view can be implemented? I'd expect the timeline to greatly benefit from CSS integration, so the timeline as it is now will decrease in size considerably.

> I don't really see any big advantage to having multiple stylesheets.

Supporting different display devices, mainly. For example, for a PDA you might want to represent the navigation bar as a space-saving popdown menu and not even display the title section (since the browser has a title bar). On the other hand, navigation isn't needed at all if you're printing a report.

> I don't see how the above proposed compact view can be implemented?

I wouldn't see it as a default.
The default CSS should be functionally and visually as close to the existing HTML as possible. But it's worth structuring the HTML such that this sort of thing might be possible. I could see a timeline where each day can be collapsed down as useful. Although... including all the data in a timeline and using toggles to turn elements on and off in the display would mean larger pages, but it may cut down on the number of hits on a server. It would also simplify the code for the timeline generation.

> Supporting different display devices, mainly.

I know next to nothing about PDAs so I can't comment here. But as far as screen vs. print goes, @media screen { and @media print { worked very well for me up till now. Perhaps there is a similar @media for PDAs? If we decide to go for multiple stylesheets, how do we detect which stylesheet needs to be loaded?

I agree about the timeline. As long as we keep the default as it is now, I'm happy : )

An example of the kind of change we made:

389c390,391
< @ <small><a href="logout" title="Logout %h(g.zUser)">Logged in</a> as
---
> @ <div id="bbc_is_logged_in">
> @ <a href="logout" title="Logout %h(g.zUser)">Logged in</a> as

We added a few other divs/spans of varying classes and ids and additionally referenced our own stylesheet.

<p id="identity"> <a href="logout" title="Logout cpb">Logged in</a> as <a class="user" href="wiki?p=cpb">cpb</a></p>

I can see where it'd be worth adding a class to the <p> to differentiate between logged in and out. Putting actual user ids into the CSS... Not sure about that. I guess you could do some neat things like show a mugshot for the user when they're logged in.
I've done some work with alternate stylesheets which demonstrate that it's possible to dramatically alter the look of CVSTrac. Details like the file browser, ticket layouts, etc, however... they're going to be a while. > The checkins make it look like the priority has fallen off. Very. To quote Her Majesty, it has turned out to be an Annus Horribilis. I've been able to manage bug fixes and small bits of functionality, but further battles with CSS have definitely been low priority. Or add function let user set them by himself. Because when I write more lines, the small textarea makes me feel as look through a small window to the outsider. thanks.
https://www.cvstrac.org/cvstrac/rptview?rn=63
I replaced my python3.3 installation with python2.7, but when I tried to create a django project in a virtual environment I encountered this error:

from django.core import management
ImportError: No module named django.core

I went to the Python shell in that virtual environment to check if django is correctly installed with:

from django.core import management

I googled and people say it's about PYTHONPATH and PATH in the system variables not being in sync, but they did not specify how to fix it. I already added D:\Python27\ and D:\Python27\Scripts\ to my PATH in the system variables but I don't know about PYTHONPATH. With python3.3, everything went smoothly. But I need to use python2.7. Thanks.
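A generic way to diagnose this kind of ImportError is to check which interpreter is actually running and whether the package is visible on its import path; django installed for one Python version is invisible to a virtualenv built from another. This is a Python 3 diagnostic sketch (the helper name is mine), not specific to the poster's setup:

```python
import importlib.util
import sys

def where_is(module_name):
    """Return the location a top-level module would be imported from,
    or None if the running interpreter cannot see it at all."""
    spec = importlib.util.find_spec(module_name)
    return spec.origin if spec else None

# Which interpreter is this really, and can it see django?
print("interpreter :", sys.executable)
print("django found:", where_is("django"))
```

On Python 2.7 the closest equivalent is imp.find_module('django'), which raises ImportError when the module is not on the path. If sys.executable does not point into the virtualenv, or the django lookup returns None, the usual fix is to recreate the virtualenv with the intended Python version and reinstall django inside it.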
http://www.python-forum.org/viewtopic.php?p=6942
P FOR PLUNDER
Morocco's exports of phosphates from occupied Western Sahara

Fertilizer companies from across the globe import controversial phosphate rock from Western Sahara, under illegal Moroccan occupation. This report shows which of them imported in 2016. From 2016, a joint-venture in India has from nowhere become the second biggest importer of phosphates from occupied Western Sahara.

Published 25 April 2017, Brussels

Western Sahara Resource Watch (WSRW) is an international organization based in Brussels. WSRW, a wholly independent non-governmental organization, works in solidarity with the people of Western Sahara, researching and campaigning against Morocco's resource plundering of the territory.

The report can be freely reused in print or online. For comments or questions on this report contact coordinator@wsrw.org

To strengthen our research and intensify our international campaigns WSRW needs your help. Learn how to make monetary donations at WSRW.org

Published with generous financial support from Emmaus Åkvarn

Photos: Berserk Productions (p. 1), Saharawi Campaign against the Plunder (p. 2), Mohamed Dchira (p. 9), Adam Gamble (p. 15), Rick Voice (p. 16), WSRW.org (p. 18, 19, 20, 21, 29, 30, 31), Rick Vince (p. 21), John Tordai (p. 32)

ISBN
Print: 978-82-93425-15-1
Digital: 978-82-93425-16-8

Front page: The world's largest conveyor belt transports phosphate rock from Bou Craa mines to the coast

Design: Lars Høie

Executive Summary

All life on the planet, and so all agricultural production, depends on phosphorus, P. The element is found in phosphate rock and turned into fertilizers. For the people of Western Sahara, their P does not grow into benefits. It's rather the opposite. For the fourth time, Western Sahara Resource Watch publishes a detailed overview of the companies involved in the purchase of phosphates from occupied Western Sahara.
The illegally exploited phosphate rock is the Moroccan government's main source of income from the territory it holds contrary to international law. Representatives of the Saharawi people have been consistently outspoken against the trade, both in the UN, generally, and to specific companies. The list we present in this report is complete for calendar year 2016, naming all shipments of phosphates from occupied Western Sahara.

This report attributes the purchases of Morocco's production in Western Sahara in 2016 to eight identified and one unknown importing companies in eight countries internationally. The report details a total exported volume from Western Sahara in 2016 at 1.86 million tonnes, with an estimated value of $213,7 million, shipped in 37 bulk vessels. That constitutes a slight increase in exports from the year before, after an unusually low export in 2015 due to infrastructure failures for the exporter. The largest importer in 2016 was Agrium Inc. from Canada. Several clients internationally have abstained from the controversial imports over the last year.

A remarkable development of 2016 was the entry into the game of a subsidiary of OCP in India. OCP exported to its own company in India a volume of 344,000 tonnes phosphate rock, at a value of $39,6 million, making OCP's Indian joint-venture the second biggest importer of OCP's own exports from Western Sahara.

Of the nine importing companies in 2016, three are registered on international stock exchanges or are majority owned by enterprises which are listed. All have been subject to blacklisting by ethically concerned investors because of this trade. Of the remaining six companies not registered on any stock exchange, two are farmer-owned cooperatives in New Zealand, two are fully or partially owned by the Government of Venezuela, one is partially owned by the Government of India, while one is privately owned.
WSRW calls on all companies involved in the trade to immediately halt all purchases of Western Sahara phosphates until a solution to the conflict has been found. Investors are requested to engage or divest unless action is taken.

List of Abbreviations
DWT: Deadweight tonnage
OCP: Office Chérifien des Phosphates SA
UN: United Nations
US $: United States Dollar

The Controversy

Morocco's claim to sovereignty over Western Sahara is not recognised by any state, nor by the UN. Its arguments to claim the territory were rejected by the International Court of Justice.2

The UN Legal Office has analysed the legality of petroleum exploration and exploitation in Western Sahara, a resource extraction activity – one now in exploration stages – that is of a similar nature. The UN concluded that "if further exploration and exploitation activities were to proceed in disregard of the interests and wishes of the people of Western Sahara, they would be in violation of the international law principles applicable to mineral resource activities in Non-Self-Governing Territories." Drawing on the subsequent judgement of the Court of Justice of the European Union and the Legal Opinion of the Office of Legal Counsel of the African Union, international law actually places the consent of the people of Western Sahara as the prerequisite for any activity in relation to the occupied territory, even without it being necessary to determine whether such activity is likely to harm or on the contrary benefit the people.3

Yet, only weeks after the 1975 invasion of the territory, the phosphorus of the Bou Craa mine in Western Sahara was being exported to fertilizer companies in North America, Latin America, Europe and Australasia. The Bou Craa mine is managed by the Office Chérifien des Phosphates SA (OCP), now known simply as OCP SA, Morocco's national phosphate company and today responsible for that country's biggest source of income from Western Sahara.

Phosphates de Boucraa S.A. (Phosboucraa) is a fully owned subsidiary of OCP. Its main activities are the extraction, beneficiation, transportation and marketing of phosphate ore of the Bou Craa mine, including operation of a loading dock and treatment plant located on the Atlantic coast at El Aaiun. OCP puts production capacity in Western Sahara at 2.6 million tonnes annually.4 Though OCP claims that Bou Craa mines represent only 1% of all phosphate reserves exploited by Morocco5, no less than a quarter of its exported phosphate rock departs from El Aaiun.6 The exceptionally high quality of Western Sahara's phosphate ore makes it a much coveted commodity for producers of fertilizers.

However, that tale could be coming to an end. The Bou Craa phosphate deposit consists of two layers. Until 2014, only the first, top layer had been mined. This particular layer contained phosphate rock of the highest quality across all reserves controlled by OCP. In 2014, Bou Craa phosphate mining moved on to the second layer, which is of lower quality.7 Morocco has sold all of the high quality phosphate that ought to have been available to the Saharawi people upon realizing their right to self-determination.

OCP claims that Phosboucraa is the largest private employer in the area, with around 2,100 employees8 – more than half of those are said to be locally recruited. It also alleges that Phosboucraa is a major provider of economic viability and well-being of the region's inhabitants. OCP equally boasts the social impact of Phosboucraa, in terms of providing pensions to retirees, medical and social advantages to employees, retirees and their families, etc.9 OCP presents the purported economic and social benefits as a justification for its exploitation of phosphate mines outside of Morocco's long-settled, internationally recognized borders.10

Morocco uses the Bou Craa phosphates for its political lobby-work to gain the support of other countries for its illegal occupation. An official Moroccan government document leaked in 2014 literally states that Western Sahara's resources, including phosphate, should be used "to implicate Russia in activities in the Sahara". The document goes on to say that "in return, Russia could guarantee a freeze on the Sahara file within the UN."11

Sidebar: Unemployed Saharawi graduates protested against OCP's employment policies from December 2015. For over a month, there were daily demonstrations in the streets of El Aaiun. Protests have continued ever since. In March 2017, a group of 60 unemployed Saharawis took control of a bus owned by Phosboucraa, and threatened to collectively self-immolate in protest of "the systematic marginalisation of Saharawis by the Moroccan occupying regime".1
The UN’s legal Based on an assessment that counsel stated in January 2002 further dialogue will not be that exploration of mineral productive, the Council has resources in Western Sahara recommended that the AP Funds without local consent would divest Agrium.” be in breach of the International Swedish Ethical Council, 9 April 2015, explaining why Covenant on Civil and Political all Swedish government funds have now divested Rights and the International from Agrium Inc.16 Covenant on Economic, Social and Cultural Rights.” Swedish government pension fund, AP-Fonden, upon exclusion of PotashCorp and Incitec Pivot from its portfolios.12 “Agrium’s purchase of phosphates “Companies buying phosphate from Western Sahara by means from Western Sahara are in of a long-term contract with reality supporting Morocco’s OCP constitutes an unacceptable presence in the territory, since risk of complicity in the violation the phosphate is sold by the of fundamental ethical norms, state-owned Moroccan company and thereby contravenes KLP’s OCP and it must be assumed guidelines for responsible that the revenues generated by investment.” the operation largely flow to the Norwegian insurance company KLP regarding Moroccan State. In its present its divestments from Agrium Inc.13 form, OCP’s extraction of phosphate resources in Western Sahara constitutes a serious violation of norms. 
This is due both to “Illegal exploitation of the fact that the wishes and natural resources” interests of the local population Fonds de Compensation commun au régime général are not being respected and to de pension, Luxembourg, 15 November 2014, upon the fact that the operation is blacklisting of all involved phosphates companies.14 contributing to the continuance of the unresolved international legal situation, and thus Morocco’s presence and resource exploitation “Human rights violations in in a territory over which it does Western Sahara” not have legitimate sovereignty.” PGB Pensioenfonds, the Netherlands, third quarter of Council of Ethics of the Norwegian government’s 2015, upon excluding OCP SA from its portfolios.15 pension fund, upon blacklisting Innophos Holdings Inc. in January 2015.17 5 The Shipments Vancouver Baton Rouge Barranquilla Puerto Cabello 6 In 2016, 1,86 million tonnes of phosphate rock was transported out of Western Sahara. WSRW traced the entire flow. Klaipeda Paradip Portland Geelong Tauranga Napier Invercargill Bluff 7 The Moroccan 1947: Western Sahara’s phosphate reserves are discovered 130 kilometres and Mauritania, while retaining a 35% share of the Bou Craa mine. No take-over of southeast of El Aaiun in a place called Bou Craa. The discovery of state in the world, the UN, nor the people of Western Sahara, recognises the Bou Craa phosphate reserves is the first potential source of mineral revenues for the transfer of authority from Spain to the two states. Mauritania withdraws mine colonial power, Spain.18 July 1962: The Empresa in 1979, admitting it had been wrong to claim and to occupy the territory. Nacional Minera del At the same time in 1975, Sahara is founded in recouping his authority order to operate the mine, after two failed coups which is owned by a d’état, Morocco’s King Spanish public industrial Hassan II orders the sector company. Moroccan army to invade Western Sahara. 
The King May 1968: The company may have hoped that this is renamed Fosfatos would give Morocco as de Bucraa, S.A., also much leverage to deter- known as Phosboucraa mine world phosphate or Fos Bucraa. prices as OPEC has over oil prices.19 1972: Spain starts to operate the mine. Many 1 January 1976: The Spaniards find employ- Madrid Accords come ment in the mines, as did into effect and after a the Saharawis; the native transition period of 16 population of the Spanish months OCP would take Sahara, as the territory is over the management known at the time. of the mines.20 1975: Mounting inter- 2002: Spain sells its 35% national pressure to ownership of Bou Craa. decolonise forces Spain to come up with a 2014: OCP files for public withdrawal strategy subscription on the Irish from Spanish Sahara. Stock Exchange an A UN mission that was inaugural bond issue of sent to Spanish Sahara US $1.55 billion.21 It files a in view of an expected similar debt financing pro- referendum predicts that spectus on the Exchange Western Sahara could a year later.22 very well become the world’s second largest 2017: Morocco continues exporter of phosphates, to operate the mine in after Morocco. Main- occupied Western Sahara. taining a claim to the phosphate deposits is a key consideration for the colonial power. Failing to decolonise Western Sahara properly, by allowing the people of the territory to exercise their right to self-deter- mination, Spain strikes a deal; through the Madrid Accords. It illegally trans- fers administration over the territory to Morocco 8 Large plans Peak P The world’s longest An investment and development program worth US $2.45 Phosphate is a vital component of the fertilizers upon conveyor belt (above) billion has been developed by OCP across all its operations which much of the global food production and food transports the rock from the period 2012-2030. In that timeframe, the program security depends. 
For some time, there has been concern the mine inland out to will aim to modernize the Bou Craa mine, develop deeper about the world population’s reliance on a finite supply the sea. Continental and phosphate layers, create higher added-value products of phosphorus, and the implications of this for agricul- Siemens are key partners for exports, increase the El Aaiun harbour capacity for tural productivity, food prices and nutrition, particularly for this belt. The Siemens phosphate activities and expand the social and sustaina- in developing countries. The term “peak phosphorus” windmills, built in 2013, ble development projects in the Bou Craa area.23 has joined the concept of “peak oil” in the lexicon of 21st provide all energy needed OCP states that, as part of its long-term investment century scarcity. There are no substitutes for phospho- for the belt system. program, industrial development investments are planned, rus in agriculture.26 such as mining investments (worth around US $250 Morocco, including Western Sahara or not, controls million) that will include the building of a flotation/washing the world’s biggest phosphate reserves and is the third unit and upgrading of extraction equipment, as well as largest producer of phosphates in the world.27 new infrastructure to extract lower phosphate layers.24 The increasing global need for phosphate rock and On 7 November 2015, exactly 40 years after fertilizers was a contributing factor in the oddly fluctu- Morocco’s invasion of Western Sahara, OCP announced ating market price of the commodity in 2008. As global it would invest $1.9 billion in Phosboucraa. The stated food demand and food prices have increased, there has main objective is to develop Phosboucraa’s industrial been an added demand for phosphate. In this report, the capacity, in particular by installing a fertilizer production average price of phosphate in 2016 is calculated at an plant. 
In addition the logistic capacity of Phosboucraa is average of US $112/tonne. The year was comparatively apparently to be reinforced.25 very stable in the price of the commodity. 9 The Exports 2016 2015 Exported amount of phosphate 1,858,000 tonnes 1,410,000 tonnes Value of exported phosphate $213.7 million $167.8 million Estimated cost of production $80 million $80 million Estimated revenue to OCP $130 million $90 million Value of largest single shipment from the territory $8.325 million $8.6 million Value of smallest single shipment from the territory $1.725 million $1.8 million Number of ships that departed with phosphate from the territory 37 30 Average amount of phosphate exported in each ship 50,000 tonnes 47,000 tonnes Average value of phosphate exported in each ship $5.6 million $5.55 million Average annual phosphate price of Bou Craa rock (per tonne) $112 $118 Methodology El Aaiun harbour for 2016. This could in turn have led to a Bou Craa in 2014 was confirmed This report is made from data However, WSRW cannot exclude difference in the production level in the volume of “processed” gathered through continuous a possibility that one or more and export level. phosphates as mentioned in vessel tracking. Phosphate vessels have gone undetected. Another possible explana- OCP’s Prospectus filed on the prices were obtained from tion is that Lifosa/EuroChem Irish Stock Exchange.31 After the the commercial commodities Fluctuating export levels could have underreported their unusually low level of exports pricing website “Index Mundi” and In general, WSRW’s calculations purchase of Bou Craa rock in 2015, the 2016 volume is more checked against other sources. over the last years are confirmed to WSRW, in only confirming in line with the levels we have The amounts of phosphate in OCP’s own reports.28 68,000 tonnes out of the 113,000 observed in the past. 
loaded into ships were ordinarily There is a slight discrepancy tonnes we traced to the port of OCP estimates the Bou Craa calculated to be 95% of the ship’s between WSRW’s and OCP’s Bou Klaipeda. A third possibility is that reserves at 500 million tonnes.32 overall cargo (and bunker fuel Craa figures for 2015. WSRW’s 2015 WSRW might not have spotted Bou Craa contributes around and stores) capacity expressed projection of 1.41 million exported all shipments in its continuous 7% of OCP’s total extracted in deadweight tonnes (DWT). In tonnes, as described in last year’s monitoring. volumes33, and around 25% of its cases where ships were less report, turns out to be more Until 2006 the export of total sales of phosphate rock.34 than 40,000 DWT the 95% factor conservative than OCP’s figures phosphate rock averaged 1.1 mil- was reduced to account for a for the extracted tonnage at the lion tonnes annually, considerably higher relative amount of fuel Bou Craa mine, which less than the production capacity and provisions and, occasionally, they put at 1.6 million tonnes.29 of 2.6 million tonnes.30 In the late heavy weather likely encountered Several reasons could explain the 1970s, production stopped for en route to destination ports. difference between our export three years during armed conflict Ships were tracked and confirmed estimates and OCP’s extraction in the territory, only gradually to have arrived at stated destina- figure. First, there is a difference achieving 2.0 million tonnes by tions. Where possible, estimated in definition between ‘extraction’ the late 1990s. WSRW started loaded amounts were checked and ‘export’. The port of El Aaiun the daily monitoring in 2011. Our against shipping documents, experienced significant loading first report put OCP’s exports of including bills of lading and port problems during the first three phosphate mined in Bou Craa at arrival receipts. 
months of 2015, while extraction 1.8 million tonnes in 2012 and 2.2 WSRW believes that is has at the mine went on, which may million tonnes in 2013. WSRW’s detected, tracked and accounted have resulted in growing piles projection of 2.1 million tonnes of for all vessels departing from of unshipped rock at the docks. exported phosphate rock from 10 Imports per importing country Figures in metric tonnes. Colombia 58,000 Venezuela Other (Australia, 68,000 Venezuela, Ukraine) Lithuania 182,000 68,250 Australia Canada Lithuania USA 105,000 579,000 113,000 474,000 New Zealand 204,000 2015 New Zealand 349,000 2016 USA India Canada 287,000 344,000 442,000 Clients per nationality of (parent) company Figures in metric tonnes. Other (Switzerland, Russia, Australia, Russia/Switzerland Unknown) Venezuela 68,250 201,000 126,000 Australia Venezuela 105,000 94,000 New Zealand 204,000 2015 Canada 916,000 India/Morocco 344,000 2016 Canada 866,000 New Zealand 349,000 Value per importing country Figures in $ US Colombia Other (Australia, 6.7 million Venezuela, Colombia, Venezuela Ukraine) 7.8 million 21,5 million Lithuania 7.8 million Canada Lithuania Australia 66,6 million 13 million 12.1 million USA 2015 2016 56 million USA 33 million New Zealand 25 million Canada India New Zealand 52 million 39.6 million 40.1 million 11 The Importers, 2016 Rank Corporation Home country of Import destination Number of Amount of phosphate Value of phosphate (parent) company shipments purchased (tonnes) purchased (US $) 1 Agrium Inc. Canada Vancouver, Canada 10 579,000 $66,6 million 2 Paradeep India/Morocco Paradip, India 6 344,000 $39.6 million Phosphates Ltd. 3 Potash Corporation Canada Geismar, USA 4 287,000 $33 million of Saskatchewan Inc. 4 Ravensdown New Zealand Napier, New Zealand 4 188,000 $21.6 million Fertiliser Co-op Ltd. 5 Ballance New Zealand Tauranga/Bluff Cove/Invercargill, 3 161,000 $18.5 million Agri-Nutrients Ltd. New Zealand 6 Incitec Pivot Ltd. 
Australia Portland, Australia 3 105,000 $12.1 million 7 Lifosa AB Switzerland/Russia Klaipeda, Lithuania 1 68,250 $7,8 million 8 Unknown Venezuela Puerto Cabello, Venezuela 3 68,000 $7.8 million (Venezuelan Government) 9 Monomeros SA Venezuela Barranquilla, Colombia 3 58,000 $6.7 million 12 OCP’s helpers at the mine SIEMENS German engineering company Siemens constructed the Foum el Oued wind park in occupied Western Sahara in 2013. The park was commissioned by Morocco’s national agency for electricity, ONEE. Siemens collaborated with the Moroccan wind energy company NAREVA – owned by the King of Morocco. Foum el Oued, consisting of 22 wind mills, today supplies 95% of Phosboucraa’s energy needs. In other words: practically all energy required for the exploitation and transport of the phosphate rock in Western Sahara, is generated by wind mills delivered by Siemens. The green energy production is thus making Morocco’s plunder of the territory even more lucrative.35 ATLAS COPCO Swedish industrial company Atlas Copco in 2008 sold important drill rigs to OCP for use in the Bou Craa mine. Through the sales, Atlas Copco also obliged itself to provide maintenance and spare parts to the same rigs. It is not known for how many years Atlas Copco is tied to that contract. WSRW first confronted Atlas Copco about its deliveries in May 2013.36 While the company appeared open to meet with WSRW at first, it later declined. WSRW sent Atlas Copco another letter on 27 March 2017, detailing our concerns and questions. The company replied that it did not wish to respond.37 CONTINENTAL A subsidiary of German company Conti- nental, ContiTech, plays a key role in the maintenance of OCP’s long belt carrying phosphate rock from the mine out to the sea. 
The company states having supplied systems allowing a throughput on the belt of “2000 metric tons per hour and a belt speed of over four meters per second”.38 In a letter to WSRW on 10 April 2017, Continental explained that it receives continuous orders for the Bou Craa conveyor belt. In April 2017, the company began constructing belt components in a dedicated factory in Morocco.39

[Photo: The German company Siemens is providing all the energy needed at Morocco’s illegal phosphate mining operation in Western Sahara. Hundreds of refugees protested against Siemens in 2016.]

Companies involved in the trade

Eight known companies and co-operatives involved in the imports of Western Sahara phosphates have been identified. One Venezuelan import remains somewhat unclear. The companies on the following pages are listed in the order of their involvement in 2016. The uncertainty concerning Venezuela is related to the impossibility of identifying which of the Venezuelan government-owned companies are importing.

1 OCP SA (Morocco/Ireland)

OCP SA is a Moroccan state-owned company, which since 1975 has operated the mine in Western Sahara. The work is carried out through its subsidiary Phosphates de Boucraa S.A. As a primarily state-owned company, it is not possible for foreign investors to buy shares in OCP. However, OCP bonds have been offered to investors through the Irish Stock Exchange since 2014. Several institutional investors have since blacklisted OCP from their portfolios for its involvement in Western Sahara. OCP’s affairs at the Irish Stock Exchange are managed by Barclays, Morgan Stanley and JP Morgan, multinational financial services corporations based in the UK and USA. The company has commissioned the firms DLA Piper, KPMG, Covington & Burling, Palacio y Asociados and Dechert LLP to advocate the supposed legality of OCP’s operations in Western Sahara.
Besides carrying out lobbying campaigns, the mentioned companies also write reports that allege the legal solidity of the Bou Craa exploitation on the grounds that it is supposedly beneficial to the Saharawi people. None of these reports have been made available to Saharawis or to WSRW.

“Further to the emails I sent you on 19 October 2013, 15 November 2013, 15 July 2014, 10 September 2014, 16 October 2014, 10 February 2015, 4 March 2015, 5 January 2016, 14 January 2016, 2 February 2016, 10 February 2016, 15 March 2016, 30 August 2016, 27 September 2016, 13 October 2016, 16 November 2016, 12 January 2017 and 13 February 2017, I am writing once again to follow up on some very important questions.”
Saharawi refugee Senia Bachir Abderahman, on 13 March 2017, for the nineteenth time asked OCP for copies of the reports commissioned by OCP which the importing companies have claimed document the legality of their operations. She has received no reply.

2 Agrium Inc (Canada)

Agrium Inc. is a global producer and marketer of nutrients for agricultural and industrial markets. Agrium is a publicly traded company, based in Calgary, Canada. The company is listed on the New York Stock Exchange and the Toronto Stock Exchange.

Agrium signed a contract with OCP in 2011, and announced it would start importing in the second half of 2013. The phosphates, imported in order to replace an exhausted source in Canada, were claimed to be originating from “Morocco”.40 However, they are not: the phosphates are from Western Sahara. A first shipment arrived in the Canadian west coast port of Vancouver in October 2013.41 Agrium then transports the landed phosphate from a dock in Vancouver, by rail, to a fertilizer manufacturing plant in Redwater, in the province of Alberta.

In 2016, Agrium commissioned an assessment of the firm’s impact on human rights in Western Sahara, carried out by Norton Rose Fulbright Canada LLP. The report contains several flaws in terms of content, analysis and methodology.
The analysis explicitly underlines that it “is beyond the scope of this Assessment” to conclude whether or not Morocco is the administering power of Western Sahara. Yet, the report’s assessment repeatedly takes for granted that Morocco is the administering power, and that it therefore has a right to manage the resources of the territory. This report is used today to convince investors that the company’s operations are acceptable. At the same time, Agrium commented to WSRW on 30 March 2017 that “any issues you may have with its content or the background work that they did, should be taken up with [Norton Rose Fulbright] directly and we would be happy to help facilitate that discussion.”42 The company systematically refuses to answer any question relating to what steps it has taken to seek the consent of the Saharawi people.

In 2016, Agrium received 10 shipments of phosphate rock sourced in Western Sahara, amounting to an estimated 579,000 tonnes with a total value of US $66.6 million. That is an increase compared to the 437,000 tonnes imported in 2015, though well below the 779,000 tonnes of its first full year of importing, 2014. Agrium’s 2016 import shipments were concentrated around the early to mid-summer of the northern hemisphere. The first shipment arrived on 11 May, and the remaining shipments at about four week intervals, with two in December completing the imports for the year.

Agrium stated to WSRW on 30 March 2017 that the company’s supply agreement with OCP “was always viewed as an interim arrangement while we looked at other alternate sources of rock and also longer term arrangements which may include internal ones.” This is the same kind of response it has given to investors in the past, and appears to have been the position of the company for several years.

Agrium announced in 2016 that it will merge with PotashCorp. The merger is expected to materialise in 2017. Agrium states that the merger might change the supply arrangements of phosphate rock.

[Photo: Canada’s Agrium started importing phosphates from Western Sahara for the first time during the autumn of 2013. The bulk vessel Ultra Rocanville is here seen in Vancouver harbour with phosphates from the occupied territory, on 3 June 2016. At least half of all transports to Vancouver are made by the Danish shipping company Ultrabulk A/S.]

3 Paradeep Phosphates Ltd (India/Morocco)

Paradeep Phosphates Limited (PPL) produces, markets and distributes phosphate-based fertilizers and by-products for agricultural use.43 The company was established in 1981 as a joint venture of the government of India and the Republic of Nauru. In 1993, the government of India took complete ownership of the company. Due to significant losses near the end of the nineties, the government of India decided to divest 74% in February 2002. That stake was bought by Zuari Maroc Phosphates Ltd, a 50-50 joint venture of Zuari Industries Ltd (a subsidiary of Adventz Group of India) and Maroc Phosphore SA, a wholly owned subsidiary of OCP.44 Today, PPL operates as a subsidiary of Zuari Global Limited, which holds an 80.45% stake, while the government of India holds the remaining 19.55%.45 In other words, PPL is owned by the Government of Morocco, an Indian private conglomerate (Adventz Group) and the Government of India.

PPL is headquartered in Bhubaneswar, India, and receives its phosphate rock at the port city of Paradip, approximately 120 kilometers to the east.46

According to WSRW’s research, PPL received six shipments of phosphate rock from occupied Western Sahara throughout 2016, totaling approximately 344,000 tonnes worth an estimated US $39.6 million. PPL received its first consignment of the reviewed year on 27 May. Thereafter, shipments arrived at about seven week intervals. The shipments averaged around 57,000 tonnes.

A possible additional shipment to PPL has been discounted, and thus does not feature in the above totals. This was the case of the m.v. Orient Lucky, confirmed to have arrived at El Aaiun on 17 December 2015 and to have departed on 31 December of that same year. The ship, however, did not appear to anchor at the loading dock at the port of El Aaiun, and was later observed to have loaded a commodity at the phosphate loading dock in Casablanca, Morocco, over the period of 3 to 7 January 2016 before proceeding to the port of Paradip.

PPL has imported from occupied Western Sahara before. WSRW traced a previous purchase from Phosboucraa during the financial year 2011-2012.47 WSRW contacted PPL and Zuari in February 2015, but received no reply. WSRW again contacted PPL on 7 March 2017, and received no reply.48

4 Potash Corporation of Saskatchewan Inc (US/Canada)

Potash Corporation of Saskatchewan Inc. (PotashCorp) is the company with the longest track record of importing from the occupied territory; upon acquiring Arcadian Corp in 1996, PotashCorp also inherited that firm’s 1980s import contract with OCP. PotashCorp has been purchasing Saharawi phosphate rock for two uninterrupted decades. PotashCorp is based in Saskatchewan, Canada, and is registered on the Toronto Stock Exchange (TSX – PCS). PotashCorp operates a phosphoric acid plant in Geismar, Louisiana, USA, where phosphate rock from Western Sahara is imported and processed. The company imports via long-term agreements with the Moroccan state-owned OCP, and prices and volumes are set at prescribed dates through negotiation.

In 2016, PotashCorp purchased around 287,000 tonnes of phosphate rock from occupied Western Sahara, worth approximately US $33 million. The imported volume presents a significant decline from the 474,000 tonnes the company took in during 2015 – a volume that made PotashCorp that year’s top importer. PotashCorp’s 2016 imports came in four shipments, at more or less quarterly intervals, ostensibly to meet a constant demand for phosphorus in the manufacturing of food products.
Through the years, PotashCorp has several times changed its position statement on Western Sahara, entitled “Phosphate Rock from Western Sahara”. The sixth and most recent revision was published in November 2016.49 In it, PotashCorp attempts to defend its imports from Western Sahara by repeating the Moroccan government mantra that it is permissible to exploit the Bou Craa mines as long as the “local population” stands to gain some benefits through the activity. The company has previously referred to EU agreements to defend this stance, but from 2016 it has stopped mentioning the EU altogether. PotashCorp also maintains that its involvement is non-political. The company claims it cannot cease importing because of contractual commitments and because doing so would involve a “political judgment” that could determine the “economic well-being of the region”. PotashCorp neglects to mention the cornerstone principle of self-determination in its position paper.

In March 2017, WSRW asked PotashCorp whether it has sought the consent of the Saharawi people, as dictated by the CJEU. In its reply, the company dodged the question altogether, instead reiterating its conviction that OCP’s operations “provide economic and social benefits to the Saharawi people”. PotashCorp sees itself as a positive influence on OCP’s behavior. “Any decision to cease doing business in the region on the basis of a political judgment could undermine the economic well-being of the region”, the firm asserts.50

PotashCorp announced in 2016 that it will merge with Agrium. The merger is expected to materialize in 2017.

[Photo: The vessel Double Rejoice loading phosphate at the pier in El Aaiun, occupied Western Sahara, 5 December 2012. The vessel then headed to PotashCorp, US. In the background, a queue of bulk vessels waiting to load.]
5 Ravensdown Ltd (New Zealand)

Ravensdown Fertiliser Co-operative Limited is a producer of agricultural fertilizers that operates as a farmer-owned co-operative and is not listed on any stock exchange. The company imports to its plants in Lyttelton, Napier and Otago, New Zealand.

WSRW tracked four shipments to Ravensdown during 2016, totaling an estimated 188,000 tonnes with a net value of around US $21.6 million. That means the company imported a significantly larger volume in 2016 than in the two preceding years, when it took in around 100,000 tonnes per year. Ravensdown has thus returned to its pre-2014 import level, which averaged around 180,000 tonnes annually. WSRW asked the company about the trade on 8 March 2017, and received no answer.51

[Photo: The bulk vessel Molly Manx of UK shipping company LT Ugland Shipping on 12 August 2016 arrived at the port of Napier, New Zealand, with phosphate rock from Western Sahara.]

6 Ballance Agri-Nutrients Ltd (New Zealand)

Ballance Agri-Nutrients Limited manufactures, markets and distributes fertilizers and related products in New Zealand. The company has manufacturing plants in Whangarei, Invercargill and Mount Maunganui, New Zealand. It is a farmer-owned cooperative, and not registered on any stock exchange.52

Ballance was previously known as BOP Fertiliser. The company changed its name to Ballance Agri-Nutrients Ltd in 2001. Before that, BOP Fertiliser had purchased plants and bought shares in other NZ-based fertilizer companies. For example, BOP bought the Whangarei-based plant from Fernz in 1998, while obtaining a 20% share in Fernz a year later.53 At that time Fernz was already a long-term client of Bou Craa phosphates.
The firm signed a long-term agreement with OCP in 1999, requiring OCP to supply phosphates to Ballance.54 Ballance executives have on at least one occasion visited the Bou Craa mine in the occupied territory.55

During the course of 2016, Ballance received three shipments of phosphate rock illegally excavated in occupied Western Sahara. The cargoes have a projected combined volume of 161,000 tonnes, worth around US $18.5 million. This is consistent with the firm’s imports of 2012 through 2014, with a decrease to 104,000 tonnes in 2015.

WSRW has contacted Ballance once a year from 2014 to 2017, but has, apart from one reply, never received an answer.56 In 2014, Ballance wrote to WSRW that “The United Nations does not prohibit trade in resources from Western Sahara. Nor does such trade contravene a United Nations legal opinion”.57

[Photo: Frederike Selmer at the port of Bluff, 12 March 2014, after discharging approx. 53,000 tonnes of phosphates from Western Sahara. The local importer is Ballance Agri-Nutrients.]

7 Incitec Pivot Ltd (Australia)

Incitec Pivot Ltd, also referred to as IPL, is an Australian multinational corporation that engages in the manufacturing, trading and distribution of fertilizers. The company’s fertilizer segment includes Incitec Pivot Fertilisers (IPF), Southern Cross International (SCI) and Fertilizers Elimination (Elim). Incitec Pivot has been importing from Western Sahara for the past 30 years. Since 2003, when Incitec Pivot arose out of a merger between Incitec Fertilizers and Pivot Limited, the company has been importing continuously. Incitec Pivot has its headquarters in Melbourne, Victoria, Australia, and is registered on the Australian Securities Exchange.
Today, Incitec Pivot is the largest supplier of fertilizer products in Australia, but it also markets its products abroad, such as in India, Pakistan and Latin America.58 IPL manufactures a range of fertilizer products, but uses the Saharawi phosphate for its so-called superphosphate products, produced at plants in Geelong and Portland.59

For the calendar year 2016, Incitec Pivot procured three shipments of phosphate rock from Western Sahara, totalling 105,000 tonnes, worth an estimated US $12.1 million. That is a substantial increase from its 2015 imports of 63,000 tonnes of Saharawi phosphate, as confirmed by the company. WSRW last wrote to IPL on 27 March 2017.60

[Photo: Ithaki spotted off Las Palmas harbour on 20 May 2015, shortly after departure from El Aaiun harbour.]

8 Lifosa AB (Lithuania/Switzerland/Russia)

In February 2016, and after several years of correspondence with the company, Lifosa’s parent company EuroChem wrote to WSRW that “… the Group does not intend to purchase phosphate rock from Western Sahara in 2016 or at any time over the foreseeable future.” However, things did not go as planned.

Lifosa AB is a producer of phosphate mineral fertilizer based in Kedainiai, Lithuania. The company was previously listed on the NASDAQ OMX Vilnius Exchange. Lifosa AB became a subsidiary of the privately Russian-owned, Swiss-based EuroChem Group in 2002. The company receives its Western Sahara phosphate rock at the harbor of Klaipeda, Lithuania.

On 8 October 2016, eight months after EuroChem’s promise, the bulk vessel SBI Flamenco arrived at the port of Klaipeda with rock from Western Sahara. That cargo was destined for Lifosa, as Lifosa’s managing director admitted to Lithuanian media.61 EuroChem confirmed to WSRW on 23 March 2017 that its subsidiary Lifosa had imported 68,250 tonnes on board the SBI Flamenco.

“EuroChem believes in vertical integration for economic and strategic reasons and this remains the case.
We aspire toward the goal of raw material self-sufficiency and our investments in Kazakhstan and Kovdorskiy were intended to help us become self-sufficient in the production of phosphate rock. The production of our own raw materials from these two investments has progressed at a slower pace than projected and so we continue to require third-party supplies of phosphate rock.”

WSRW has been in dialogue with both Lifosa and its owner EuroChem Group since 2010. But the company’s initial reluctance to thoroughly respond to WSRW’s questions resulted in its June 2011 delisting from the UN List of Socially Responsible Corporations. Ever since, Lifosa/EuroChem has actively sought ways to maintain its dialogue with WSRW and conduct further due diligence with regard to importing from Western Sahara while under occupation. The company stated in March 2014 that it was seeking to implement ways to diversify external purchases. In 2013-2014, the trade was around 400,000 tonnes annually.

[Photo: The vessel SBI Flamenco seen upon discharging Western Sahara phosphates at the port of Klaipeda, in October 2016. Lifosa took in this single shipment in 2016, after having promised it would not do so.]

9 The Venezuelan Government (Venezuela)

Most companies that import from Western Sahara are privately owned. There is one exception: in Venezuela, the government is behind the imports.

In 2016, three shipments of phosphate from the Bou Craa mine in Western Sahara were transported to the port of Puerto Cabello. The shipments totalled an estimated 68,000 tonnes, to the tune of US $7.8 million.

WSRW has always suspected Tripoliven C.A. to be the main importer in Venezuela, based on its track record of importing from Western Sahara in the nineties and noughties. Tripoliven C.A. is probably a joint venture between the Venezuelan state company Petroquímica de Venezuela S.A. (Pequiven), Valquímica S.A. and Bancaribe.
FMC Corp owned a part of the joint venture until 2016, when the FMC shares were sold to Bancaribe.62 Tripoliven’s fertilizer plant is located at the same location as its headquarters: in Morón, near the port of Puerto Cabello.

However, in 2014, the Venezuelan investigative website Armando.info revealed that the registers at the Chamber of Commerce showed that two companies had purchased all cargos of Saharawi phosphate delivered in Puerto Cabello between 2012 and July 2014. These companies were Pequiven S.A. and Bariven S.A.63

Pequiven S.A., short for Petroquimica de Venezuela S.A., is Venezuela’s state-owned petrochemical company that produces a wide range of chemical products, including phosphate-based fertilizers. Pequiven’s fertilizer production plant is also located in Morón. Bariven S.A. is a subsidiary of Venezuela’s state-owned oil company Petróleos de Venezuela S.A. (also known as Pdvsa). The company handles the procurement of materials and equipment for Pdvsa. Pdvsa inaugurated a petrochemical plant in Morón in 2014, aptly called Hugo Chávez, which will produce fertilizers.

It is unclear how the imports that are accredited to Bariven and Pequiven relate to Tripoliven. In August 2014, Tripoliven admitted to the Venezuelan investigative website Armando.info that it was importing from the Bou Craa mine in occupied Western Sahara. It is uncertain whether Tripoliven’s imports are managed through its owner Pequiven.

Over the years, WSRW has sent a number of letters and emails to Tripoliven. The only response WSRW ever received came in 2013, when the firm denied importing from Western Sahara. FMC Corp, part owner of the Tripoliven joint venture, also denied in 2013 to one of its investors in Europe that its subsidiaries import from Western Sahara.64 WSRW contacted Tripoliven again in February 2015 to inquire why the company chose to deny its imports from Western Sahara, and to ask for confirmation of its subsequent imports. No reply was received.
Copies of WSRW's letters have been sent to the Venezuelan government. Those, too, have gone unanswered. When approaching FMC Corp, WSRW was told that all requests had to be directed to Tripoliven. WSRW sent letters to the government of Venezuela in February 2016 and March 2017, asking for clarifications as to how the phosphate imports by Venezuelan state-owned enterprises align with the government's favourable position on Western Sahara. No answers have been received.

10 Monomeros Colombo Venezolanos S.A. (Colombia/Venezuela)

The Colombian company Monomeros Colombo Venezolanos S.A. is a petrochemical company that produces fertilizers, calcium phosphate and industrial chemicals. Since 2006, the company has been a fully owned subsidiary of the Venezuelan state-owned petrochemical company Pequiven (Petroquímica de Venezuela SA). The company has its corporate seat in Barranquilla, Colombia, near the city's port where it receives its Western Saharan phosphate cargoes. Monomeros operates as a non-listed public limited company. Monomeros received three shipments of phosphate from occupied Western Sahara in calendar year 2016, totalling approximately 58,000 tonnes, worth about US $6.7 million. WSRW has raised the matter with both Monomeros and its parent company Pequiven on several occasions. Our most recent letter was sent on 27 March 2017.65 So far, neither Monomeros nor Pequiven have replied to any of our letters.

11 Innophos Holdings (USA)

A Mexican subsidiary of the US-registered company Innophos Holdings has for many years been a key importer of Western Sahara phosphate rock. Since 2015, WSRW has not observed any shipment into Innophos's plant in Coatzacoalcos, Mexico. However, WSRW believes that Innophos's manufacturing plant in Geismar, Louisiana, is dependent on sourcing phosphate rock from Western Sahara, sold to them via a pipeline from the plant of PotashCorp.
WSRW contacted Innophos on 25 March 2017 regarding the sourcing of its raw materials for the Geismar plant, yet has not received an answer.66 From 2010 to 2016, WSRW sent Innophos five letters about the company's purchases from the occupied territory, without receiving a reply. The lack of response from the company management has also been observed by several of its former investors. Innophos has been the subject of multiple divestments. A lengthy analysis for the ethical exclusion of the firm was prepared by the Council on Ethics of the Norwegian Government Pension Fund in 2015.67 For the same reason, the company has also been kicked out of the portfolios of the Luxembourg Pension Fund and Danske Bank, among others.68 WSRW is not convinced that Innophos is out of the picture in relation to Western Sahara phosphate rock. Here is the vessel Coral Queen en route to transport a shipment in 2013.

Companies under observation

Some companies have in the past been identified and named as importers. The following companies have not been involved in the trade during recent years, but WSRW sees a risk that they would resume purchases.

Agropolychim AD (Bulgaria)

Bulgarian fertilizer producer Agropolychim AD is located near Varna port. The company has Bulgarian and Belgian owners.69 WSRW registered the last shipment of Western Sahara phosphate rock to Varna in 2011. WSRW has confirmed shipments specifically to Agropolychim from 2003 to 2008. WSRW contacted Agropolychim in October 2008, urging the company to terminate its phosphate imports.70 A reply was never received, but the company did defend its imports in Bulgarian media. "Agropolychim has a contract for the import of phosphate from North Africa since 1974 and never had problems with supply", the company stated.71

Indian importers

In March 2014, WSRW observed a single shipment to India, unloaded at Tuticorin harbor. This follows the trend from previous years of one annual shipment arriving at Tuticorin.
WSRW has not yet been able to identify the responsible company, but has identified two potential recipients. One is Greenstar Fertilizers Ltd, a fertilizer manufacturer and marketer, which produces its fertilizers in Tamil Nadu, taking in its material in Tuticorin. The other is Southern Petrochemical Industries Corporation Ltd (SPIC), a petrochemical company that has fertilizer production as its core competency. SPIC has its headquarters in Chennai and is registered on the Bombay Stock Exchange and on the National Stock Exchange of India. The firm's phosphate business is located in Tuticorin. WSRW contacted both; neither answered.

Companies no longer involved

Some companies have in the past been identified and named as importers. These have not been involved in the trade since 2012, and WSRW sees no risk that they would resume purchases.

Impact Fertilisers Pty Ltd (Australia/Switzerland)

Australian superphosphate manufacturer Impact Fertilisers imported phosphates from Western Sahara, at least from 2002 until 2012. Impact Fertilisers imported the rock to Hobart, Tasmania. In 2010 the company became part of Ameropa, a Swiss privately owned grain and fertilizer trading company. Western Sahara groups in both Australia and Switzerland had worked for many years to highlight the company's involvement. In 2013 Impact announced it had halted the imports from Western Sahara.72 WSRW has not observed shipments to Impact since August 2012. Impact Fertilisers in Tasmania has not imported since the arrival of Alycia in Hobart harbour on 7 August 2012.

Nidera Uruguaya S.A. (Uruguay/The Netherlands)

The Uruguayan company Nidera Uruguaya S.A., subsidiary of Dutch trading company Nidera NV, received one vessel containing phosphate rock from Western Sahara in 2009.
WSRW confronted Nidera Uruguaya with the information about the 2009 vessel in a letter on 21 June 2010.73 As no answer was received, new letters were sent to the parent company in The Netherlands in October 2011. The outcome of the subsequent correspondence with Nidera was a statement from the company underlining that "If our subsidiary in Uruguay again needs to import phosphate rock in the future, the matter which is now brought to our attention is something we shall definitively take into consideration". The company at the time also stated that its subsidiary in Uruguay had not received any phosphate rock from Western Sahara during the years 2007, 2008, 2010 and 2011.74

Yara's last imports took place in 2008, on this vessel. Here the vessel is on its way to dock in Herøya, Norway to offload.

Yara International ASA (Norway)

Yara is the world's leading supplier of mineral fertilizers. It used to be a large importer of phosphates from Western Sahara in the past, but has since decided not to import from the territory. The main motive for the decision to stop purchasing has been that the Norwegian government urges Norwegian companies not to trade with goods from Western Sahara, due to concerns over international law. The company today has a policy of only importing or trading phosphates from Morocco proper, not from the Bou Craa mines. "We hope the country will be liberated, then the population there will profit from us quickly receiving their phosphates", Chief Communication Officer Bente Slaatten said.75

Mosaic Co (USA)

Mosaic Company is headquartered in Minnesota, USA, and listed on the New York Stock Exchange. WSRW confirmed 15 shipments from occupied Western Sahara to Tampa, Florida, USA in the period from 2001 to 2009. Tampa is home to the headquarters of Mosaic's phosphate operations and many of the firm's phosphate production facilities. On 25 August 2010, Mosaic informed WSRW that it had received its last shipment of Western Sahara phosphate rock on 29 January 2009.76 In 2015, it confirmed to Bloomberg that its decision had been made "because of widespread international concerns regarding the rights of the Saharawi people".77

Wesfarmers Ltd (Australia)

Wesfarmers Limited is one of Australia's largest public companies, headquartered in Perth, Western Australia. The company is listed on the Australian Securities Exchange. Its fertiliser subsidiary, Wesfarmers CSBP, was a major importer from occupied Western Sahara for at least two decades. Earliest known imports of Saharawi phosphates by CSBP date back to 1990. In 2009, the firm announced it would "reduce the company's dependency on phosphate rock from Western Sahara". The company said it would invest in new technology that would make it possible to use other phosphate sources. CSBP did, however, leave open the possibility that the imports could continue, albeit to a limited degree, depending on price and availability of alternative sources.80 This decision followed a wave of European divestments over ethical concerns on trade in phosphate from occupied Western Sahara. Wesfarmers used to import between 60 and 70% of its phosphates from Western Sahara. Wesfarmers has on numerous occasions since shown a will to phase down imports from Western Sahara, but has not committed categorically to completely stop imports. As the de facto imports seem to have stopped, some investors have returned to the company. WSRW has not observed any shipments to Wesfarmers since it started daily monitoring of vessels in October 2011.

BASF SE (Germany/Belgium)

BASF was one of the leading importers through the 1990s. It received its last known shipment to Belgium in 2008.78 BASF's sustainability centre was confident such imports did not violate international law, but confirmed to WSRW that it would not expect more imports: "A part of BASF's phosphate demand is covered by Moroccan phosphate delivered by Office Chérifien des Phosphates (OCP). OCP has been a reliable supplier of phosphate from mines in the Kingdom of Morocco for over 20 years. In spring 2008, OCP contacted us because of a supply shortage at the Moroccan mine from which BASF usually receives the phosphate. OCP offered a temporary replacement order with phosphate in an alternative quality from a different mine operated by OCP in the Western Sahara region, which we accepted. For the time being, this was an isolated replacement delivery from this territory which we do not expect to be repeated in the future."79

BASF is not known to have imported since the arrival of the bulk vessel Novigrad on 7 October 2008, here seen discharging Saharawi phosphate in Ghent harbour, Belgium.

Other companies

Three companies that have previously been on WSRW's observation list are from 2016 moved over to the list of companies no longer involved. The reason for this is that such a long time has passed since a shipment took place that we expect them not to be engaged again. These are: Petrokemija PLC from Croatia, Tata Chemicals from India and Zen Noh from Japan. The last time we saw shipments to these companies was in 2006. None of these companies have responded to requests from WSRW.

Lobbying law firms

In defense of their phosphate imports from Western Sahara, several companies have referred to legal opinions by different law firms retained by OCP. These legal opinions are systematically used by the international phosphate importers to legitimize their imports vis-à-vis shareholders. The confidential analyses are said to establish that the local people benefit from the industry. However, the local people – the owners of the phosphates – are themselves not allowed to see the opinions, and are thus unable to assess their veracity. All aspects related to Terms of Reference, methodology or findings are thus impossible for the Saharawis to question. As the opinions allegedly have found Morocco's exploitation of the Saharawi people's resources lawful, WSRW believes that there is little reason to withhold them from the Saharawis.

Four international lobbying law firms are behind such undisclosed opinions.

Covington & Burling LLP is an international law firm with offices in Europe, USA and China, which advises multinational corporations. Among its clients is OCP. Both the Belgian importer BASF and the Spanish importer FMC Foret referred to Covington & Burling's legal opinion made for OCP, but neither wished to disclose the report. BASF at the time (November 2008) urged WSRW to contact Covington & Burling for further questions. WSRW had contacted the firm in February 2008, but received no reply. When phoning the company to ask for a meeting, Covington & Burling replied that they "would not engage with you at all regarding anything at all. You're not my client, and as far as I can see you have no interest or stake in our company."81 It should be noted that Covington & Burling will travel around the world to defend the unethical trade to shareholders looking into divesting from any of the companies that import phosphate from Western Sahara.82

More recently, the law firm DLA Piper teamed up with the firm Palacio y Asociados to provide OCP with another legal opinion to justify the trade. Based on statements from the importing companies, this second opinion seems to follow the analysis of the Covington & Burling opinion, citing potential benefits to the "local population" as a validation for the exploitation and subsequent trade to take place.

DLA Piper is an international law firm that has offices in around 30 countries throughout the Americas, Asia Pacific, Europe and the Middle East. Palacio y Asociados is headed by Spain's former Minister of Foreign Affairs and former MEP Ana Palacio, and has offices in Madrid, Brussels and Washington. WSRW contacted both firms with the request to share their legal opinion with the Saharawi people. DLA Piper replied that it could not share the opinion that "was written for the benefit of Phosphates de Boucraa S.A., and its holding company, Office Chérifien des Phosphates S.A." due to legal privilege.83 Ana Palacio, head of Palacio y Asociados, wrote back to express her disagreement with WSRW's analysis and also cited legal privilege.84

In November 2015, PotashCorp named the firm Dechert LLP and Palacio y Asociados as co-authors of a legal opinion. Dechert LLP is an international law firm, headquartered in Philadelphia, USA, with offices in 14 countries. Up until August 2014, PotashCorp had named DLA Piper as the partner of Palacio y Asociados. It is not clear whether the Dechert-Palacio opinion is different from the DLA Piper-Palacio opinion. The missing link between the two could be Myriam González Durántez, wife of Britain's former Deputy Prime Minister Nick Clegg, who represented OCP when working at DLA Piper, but who is said to have taken the OCP contract with her when she moved to Dechert. OCP has reportedly paid an estimated US $1.5 million for work carried out by both Dechert and DLA Piper.85

Dechert replied to WSRW's letter of 8 February 2016 that it could not disclose its legal opinion for OCP due to client confidentiality.86 WSRW has asked Dechert and Palacio y Asociados whether their client would consent to waiving privilege, as the confidentiality of the legal opinions has already been given up by making their existence public. WSRW never received a reply to that request. OCP has failed to answer requests from Saharawis to share copies of the reports.

Morocco lobbies for more toxics in EU farmlands

One of OCP's law firms, Dechert LLP, has been instructed to lobby the EU institutions against the European Commission's proposed cadmium regulation. Based on several risk assessments, the EU Commission wants to limit the EU population's exposure to this heavy metal due to its adverse health effects, particularly in terms of causing several types of cancer. In 2016, the Commission proposed a regulation for fertilizers made from phosphate rock, foreseeing a stepwise reduction of cadmium content to 20 mg/kg over a 12-year timeframe.87 Phosphate fertilizers are responsible for 60% of the current cadmium emissions to the Union's soil and crops, as documented in a February 2017 study by the European Parliament's Policy Department.88 The phosphate rock managed by OCP – thus including the Western Sahara rock – is said to contain on average between 29.5 and 72.7 mg/kg.89 The EP Policy Department paints an even bleaker picture, citing levels of 38-200 mg Cd/kg P2O5. OCP makes 32% of its sales in Europe. Since the proposed regulation would over time nullify those sales, OCP has unleashed an intense counter-lobby. OCP argues that there is not enough scientific proof to underpin the idea of limiting cadmium levels, and even suggests that the EU raise the cadmium limit to 80 mg/kg, far higher than the EU Commission's proposal. On 11 May 2016, OCP sent a letter to the Commission, stating it disagreed with the proposal. OCP also lamented that "major fertilizer producers […] had not been consulted".90 The irony is that OCP itself refuses to seek consent from the people of Western Sahara upon plundering the territory's phosphate rock.
Retained to work alongside Dechert is the PR firm Edelman.91 Edelman has worked for the Moroccan government in the past, as it is on the payroll of the Moroccan American Center for Policy (MACP), a registered agent of the Moroccan Kingdom.92

Bou Craa is a gigantic opencast mine, where the phosphate rock is scraped from the surface by excavation machines.

Recommendations

To the Government of Morocco:
— To respect international law and immediately terminate the production and exports of phosphates in occupied Western Sahara until a solution to the conflict has been found.
— To respect the right to self-determination of the people of Western Sahara, through cooperating with the UN for a referendum for the people of the territory.
— To compensate the Saharawi people for the benefits Morocco has accrued from the sales of phosphate rock from the illegally occupied territory.
— To respect the African Union Legal Opinion on Western Sahara, published in October 2015, which noted among other things that any exploration or exploitation of the territory's natural resources is illegal as it violates the Saharawi people's right to self-determination and to permanent sovereignty over their resources.

To purchasers of phosphates from the Bou Craa mine:
— To immediately end all purchasing of phosphates illegally exported from occupied Western Sahara.

To the governments of Venezuela and India:
— To abstain from further purchases of phosphate rock from Western Sahara.

To the governments of Australia, Canada, Colombia, Lithuania, New Zealand and the USA:
— To assess trade in phosphates originating in Western Sahara and engage with the companies concerned with a view to ending this trade.

To investors:
— To engage with the mentioned companies, and divest unless action is taken to halt the purchases.
— To refrain from buying bonds of the Office Chérifien des Phosphates (OCP).
To Covington & Burling, Dechert, DLA Piper, KPMG and Palacio y Asociados:
— To publish all reports written for OCP which aim to justify OCP's activities in occupied Western Sahara and the illegal export trade in Saharawi phosphate.
— To refrain from defending Morocco's plunder of the territory by stopping the undertaking of assignments to legitimise its continuation.

To the European Union and its Member States:
— To assess trade in products originating in Western Sahara and adopt policies that ensure that such trade is consistent with the Court of Justice of the EU judgment of 21 December 2016 and with States' duty under international law not to recognize Morocco's sovereignty over occupied Western Sahara.
— To develop business advisory guidelines warning of the legal and reputational risks of doing business with Moroccan interests in the territory.
— To ensure European companies adhere to the principles established in the Court of Justice of the EU judgment of 21 December 2016, assuring that EU companies do not purchase phosphates from Western Sahara.

To the United Nations:
— To create a UN administration to oversee or otherwise administer Western Sahara's natural resources and revenues from such resources pending the self-determination of the Saharawi people.

Notes

1 WSRW, 28.03.2017, Unemployed Saharawi youth hi-jacked OCP bus, 25 Phosboucraa,. See also Medias24, OCP investira 18,8 milliards de DH à 2 ICJ, Advisory Opinion, 16 Oct 1975, Western Sahara, dans sa région, 8 November 2015, Paragraph 162,. ECONOMIE/ENTREPRISES/159383-OCP-investira-188-milliards-de-DH-a- php?sum=323&p1=3&p2=4&case=61&p3=5 Phosboucraa-et-dans-sa-region.html 3 UN Legal Office, S/2002/161, Letter dated 29 January 2002 from 26 United States Geological Survey, 2013, Mineral Commodity Summary the Under-Secretary-General for Legal Affairs, the Legal Counsel, 2013, addressed to the President of the Security Council. 27 Ibid. 28 OCP SA, Prospectus, p. 108.
Court of Justice of the European Union, Case C-104/16 P, Council/ 29 OCP SA, Annual report 2015, p. 185. POLISARIO Front, paragraph 106 30 OCP SA, Prospectus, p. 90. docs/application/pdf/2016-12/cp160146en.pdf 31 OCP SA, Prospectus, p. 93. African Union, Legal Opinion, 2015, 32 OCP SA, Prospectus, p. 89. view_doc.asp?symbol=S/2015/786&referer=/english/&Lang=E 33 OCP SA, Prospectus, p. 90. 4 OCP SA, Prospectus – 20 April 2015, p.91 34 OCP SA Prospectus, p. 108. 5 Ibid, p. 89. 35 WSRW, 02.11.2016, report “Powering the Plunder”, 6 Ibid, p. 98. 7 Ibid, p. 91. 36 WSRW, 26.05.2013, WSRW protests Swedish supplier for BouCraa, 8 Ibid, p. 123. 9 OCP, Annual report 2015, pp. 154-159, 37 WSRW.org, 10.04.2017, Letter correspondence with Atlas Copco, 2017, default/files/alldocs/RA%20OCP%202015%20VUK.pdf 10 OCP SA, Prospectus – 20 April 2015, p. 33. 38 Continental, STAHLCORD® ST 2500, Phosboucraa (Morocco), 11 WSRW.org, 25.11.2014, Morocco admits to using Saharawi resources for political gain, website_10.04.2016.jpg 12 AP Funds, 30.09.2013, Swedish AP Funds exclude four companies 39 Continental to WSRW, 11.04.2017, accused of contravening international conventions,. dated/2017-04-11/2017.04.10_continental-wsrw.pdf ap2.se/en/news-reports/news/2013/swedish-ap-funds-exclude-four- See also letter WSRW to Continental, 29.03.2017,. 
companies-accused-of-contravening-international-conventions/ org/files/dated/2017-03-29/2017.03.29_wsrw-continental.pdf 13 WSRW.org, 01.12.2014, Investor blacklisted Agrium for imports from 40 Agrium Inc, 26.09.2011, Agrium executes long-term rock agreement occupied Western Sahara, with OCP S.A.,- 14 Fonds de Compensation commun au régime général de pension releases/2011/agrium-executes-long-term-rock-agreement-ocp-sa (FDC), FDC Exclusion List as of 15 November 2014, 41 The Tyee, 14.10.2013, Canadian Agri-Business linked to Moroccan fileadmin/file/fdc/Organisation/Liste_d_exclusion20141115.pdf conflict mineral,- 15 PGB Pensioenfonds, Exclusion List Q1 2016 (Fixed Income), http:// AgriBusiness-Morocco/ 42 Agrium to WSRW, 30.03.2017, Exclusion%20List%20Q1%202017%20(Period%20January%201st%20 dated/2017-04-02/2017.03.30_agrium-wsrw.pdf 2017%20-%20March%2031st%202017).pdf See also WSRW to Agrium, 21.03.2017, 16 Swedish National AP Funds, Ethical Council, Annual Report 2014, 9 dated/2017-04-02/2017.03.21_wsrw-agrium.pdf April 2015,- 43 Paradeep Phosphate Limited, About us,. ENG-ver2.pdf paradeepphosphates.com/index.php/page-news-choice-content_ 17 Council on Ethics, Norwegian Government Pension Fund, show-title-about_us Recommendation 26 September 2014 to exclude Innophos Holdings 44 Paradeep Phosphate Limited, History, Inc.,- Innophos_Sept-2014_ENGLISH.pdf content_show-title-history. 18 Shelley, T. (2004), Endgame in the Western Sahara. See also OCP Group, International involvement,. 19 Hodges, T. (1983), Western Sahara: The Roots of a Desert War. ma/groupe/joint-ventures/international-involvement 20 France Libertés, January 2003, Report: International Mission of Zuari Agro Chemicals, Investigation in Western Sahara. 
45 Paradeep Phosphate Limited, Shareholders, 21 OCP SA, OCP Inaugural bond issue to the amount of 1.85 billion US- dollars in two parts with a maturity of 10 years and 30 years, content_show-title-share 46 Business Maps of India, Paradeep Phosphates Limited (PPL), OCP_1_55_milliards_dollars_06052014_vFR_2_EN-GB.pdf 22 Business Wire, 15.04.2015, OCP successfully prices a US $1 billion 47 WSRW, 04.03.2015, Paradeep Phosphates with suspicious purchase in offering with a 10.5 year maturity at a 4.5% coupon,. 2011/2012, businesswire.com/news/home/20150415006850/en/OCP-Successfully- 48 WSRW to Paradeep, 07.03.2017, Prices-1-Billion-Offering-10.5 dated/2017-04-02/2017.03.07-wsrw-paradeep.pdf 23 OCP, Annual report 2015, p. 106 and pp. 154-159. 49 PotashCorp, Phosphate Rock from Western Sahara, November 2016, 24 OCP, Phosboucraa: Investing in the Future of Phosphates in the Sahara Region, January 2013, files/filiales/document/Phosboucraa-website-en.pdf 35 50 WSRW to PotashCorp, 15.03.2017, 74 WSRW.org, 08.04.2012, No Nidera imports since 2009 into Uruguay, dated/2017-04-02/2017.03.15_wsrw-potashcorp.pdf PotashCorp to WSRW, 17.03.2017, 75 Adresseavisen, 05.02.2009, -Yara-profitt på okkupasjon, dated/2017-04-02/2017.03.17_potashcorp-wsrw.pdf 51 WSRW to Ravensdown, 08.03.2017, 76 WSRW.org, 26.08.2010, No more Mosaic phosphate imports from dated/2017-04-02/2017.03.08_wsrw-ravensdown.pdf Western Sahara, 52 Mindfull, Ballance Agri-Nutrients case study, 77 Bloomberg, 13.03.2015, Agrium Was No. 1 Buyer of Phosphate From Western Sahara, 53 Ballance Agri-Nutrients, About Ballance; timeline, articles/2015-03-13/agrium-was-no-1-buyer-of-phosphate-from- western-sahara 54 Ballance Agri-Nutrients, Annual Report 2007, 78 WSRW.org, 09.10.2008, Belgium involved in illegal phosphate trade, 55 WSRW.org, 03.07.2008, Ballance Agri-Nutrients into politics, 79 WSRW.org, 31.10.2008, BASF will not repeat Western Sahara imports, 56 See latest letter from WSRW to Ballance, 29.03.2017,. 
80 Norwatch, 23.10.2009, Phasing out phosphate imports, wsrw.org/files/dated/2017-03-29/2017.03.21_wsrw-ballance.pdf 57 Ballance to WSRW, 06.05.2014, 81 WSRW.org, 24.11.2008, US law firm refuses Western Sahara dialogue, dated/2014-05-06/ballance-wsrw_06.05.2014.pdf 58 Incitec Pivot Limited, About Incitec Pivot,. 82 WSRW.org, 08.12.2011, US law firm continues pro-occupation lobby, com.au/about-us/about-incitec-pivot-limited/company-profile 59 Incitec Pivot Limited, IPL Sustainability Report 2014, Products 83 WSRW.org, 06.03.2015, WSRW correspondence with DLA Piper, and Services, Sustainability/Online%20Report/Report%20Sections/Products%20 84 WSRW.org, 06.03.2015, WSRW correspondence with Palacio y and%20Services/Raw%20Materials.pdf Asociados, February 2015, 60 WSRW to Incitec Pivot, 21.03.2017, 85 Daily Mail, 14.04.2012, Myriam Clegg paid £400 an hour by mining giant dated/2017-04-02/2017.03.27_wsrw-ipl.pdf accused of trampling on rights of Saharan tribesmen,. 61 Verslo žinios, 03.10.2016, Prašymai veltui: Lifosa“ vėl perka žaliavas iš dailymail.co.uk/news/article-2129900/Miriam-Clegg-paid-400-hour- okupuotos Vakarų Sacharos, mining-giant-accused-trampling-rights-Saharan-tribesmen.html prasymai-veltui-lifosa-vel-perka-zaliavas-is-okupuotos-vakaru- 86 WSRW to Dechert, 08.02.2016, sacharos dated/2017-03-29/2016.02.08_wsrw-dechert.pdf 62 Tripoliven C.A., The Company,. Dechert to WSRW, 11.02.2016, htm (retrieved on 31.03.2017) states that FMC Corp is a part owner dated/2017-03-29/2016.02.11_dechert-wsrw.jpg of the Joint-Venture Tripoliven together with Pequiven S.A. and 87 European Commission, 17.03.2016, Proposal for a Regulation of the Valquimica S.A. 
FMC Corp confirmed in an email to WSRW on European Parliament and of the Council laying down rules on the 30.03.2017 that its shares in Tripoliven were sold on 04.11.2016 to making available on the market of CE marked fertilizing products Banco del Caribe and amending Regulations (EC) No 1069/2009 and (EC) No 1107/2009, 63 Armando.info, 10.08.2014, Venezuela hace lo contrario de lo- que dice en el Sáhara Occidental, EN-F1-1.PDF historias/6071=venezuela-hace-lo-contrario-de-lo-que-dice-en-el- 88 European Parliament Policy Department, Economic and Scientific sahara-occidental Policy, February 2017, Scientific aspects underlying the regulatory 64 WSRW.org, 11.01.2013, FMC: “Neither we nor our subsidiaries import framework in the area of fertilizers – state of play and future from Western Sahara, reforms, 65 WSRW to Monomeros, 27.03.2017, IDAN/2016/595354/IPOL_IDA(2016)595354_EN.pdf dated/2017-04-02/2017.03.27_wsrw-monomeros.pdf 89 TelQuel, 10.10.2016, Union européenne: menace sur les phosphates 66 WSRW to Innophos Holdings 25.03.2017, marocains,- dated/2017-03-25/2017.25.03_wsrw-innophos.pdf menace-les-phosphates-marocains_1517842 67 WSRW.org, Norway ethical council recommends exclusion of 90 WSRW, 10.03.2017, Morocco lobbies for toxic metals in EU agriculture, Innophos, 04.02.2015, 68 Fonds de Compensation commun au régime général de pension, 91 Africa Intelligence, 23.03.2017, OCP prepares cadmium offensive FDC Exclusion list, in Brussels,- exclusion_20161130.pdf circles/2017/03/23/ocp-prepares-cadmium-offensive-in- Danske Bank, Excluded Companies, brussels,108227133-ART en-uk/CSR/business/SRI/Pages/exclusionlist.aspx 92 See press releases of MACP, which end with the disclaimer: “This 69 Agropolychim, Who we are, material is distributed by DJE, Inc. and the Moroccan-American company/who-we-are/ Center for Policy on behalf of the Government of Morocco. 
Additional information is available at the Department of Justice in Washington, DC." DJE stands for Daniel J. Edelman. See e.g. prnewswire.com/news-releases/morocco-pursues-a-4th-round-of-peaceful-negotiations-despite-polisario-stalling-56929402.html
70 WSRW to Agropolychim, 07.10.2008,
71 Narodno Delo, 10.01.2009, Африканци топят „Агрополихим" в подкрепа на окупационен режим, php?n=27023&c=4
72 WSRW.org, 21.10.2013, Impact Fertilisers halts phosphate imports from occupied Sahara,
73 WSRW.org, 05.07.2010, WSRW requests answers from Uruguayan importers,

Annex: Shipments in 2016

Vessel Name | Departure | Destination (Importer) | Arrival | Flag, Vessel Details

Filia Grace | 06/01/2016 | Puerto Cabello, Venezuela (Unknown/Venezuelan government) | 19/01/2016 | Panama, IMO 9125229, MMSI 351372000, 26,412 DWT
Maratha Promise | 10/01/2016 | Napier/Christchurch, New Zealand (Ravensdown Fertiliser Co-op Ltd) | 18/02/2016 | Marshall Islands, IMO 9422809, MMSI 538004641, 37,187 DWT
Zeus I | 23/02/2016 | Barranquilla, Colombia (Monomeros S.A.) | 06/03/2016 | Panama, IMO 9467885, MMSI 354962000, 27,000 DWT
Star of Abu Dhabi | 07/03/2016 | Geismar, United States (PotashCorp Inc) | 26/03/2016 | Panama, IMO 9375927, MMSI 351674000, 81,426 DWT
Doric Samurai | 24/03/2016 | Vancouver, Canada (Agrium Inc) | 11/05/2016 | Panama, IMO 9425899, MMSI 370534000, 58,091 DWT
Zagora | 18/04/2016 | Paradip, India (Paradeep Phosphates Ltd) | 27/05/2016 | Greece, IMO 9235878, MMSI 240236000, 73,435 DWT
Ultra Rocanville | 22/04/2016 | Vancouver, Canada (Agrium Inc) | 04/06/2016 | Panama, IMO 9476965, MMSI 373043000, 61,683 DWT
Vipha Naree | 24/04/2016 | Geelong, Australia (Incitec Pivot Ltd) | 03/06/2016 | Singapore, IMO 9722027, MMSI 566167000, 38,550 DWT
Hanton Trader 1 | 15/05/2016 | Vancouver, Canada (Agrium Inc) | 22/06/2016 | Philippines, IMO 9691412, MMSI 548883000, 63,518 DWT
Marto | 30/05/2016 | Geismar, United States (PotashCorp Inc) | 15/06/2016 | Marshall Islands, IMO 9216224, MMSI 538005195, 74,470 DWT
Summer Lady | 06/06/2016 | Paradip, India (Paradeep Phosphates Ltd) | 09/07/2016 | Malta, IMO 9184938, MMSI 229564000, 72,083 DWT
Arosa | 09/06/2016 | Barranquilla, Colombia (Monomeros S.A.) | 26/06/2016 | Switzerland, IMO 9229879, MMSI 269689000, 20,001 DWT
Amis Champion | 15/06/2016 | Vancouver, Canada (Agrium Inc) | 22/07/2016 | Panama, IMO 9636369, MMSI 357887000, 60,830 DWT
Molly Manx | 18/06/2016 | Napier, New Zealand (Ravensdown Co-op Ltd.) | 09/08/2016 | U.K., IMO 9425863, MMSI 235105197, 57,892 DWT
Federal Tweed | 20/06/2016 | Vancouver, Canada (Agrium Inc) | 04/08/2016 | Marshall Islands, IMO 9658898, MMSI 5380004749, 55,317 DWT
Navios Vega | 29/06/2016 | Tauranga, New Zealand (Ballance Agri-Nutrients Ltd) | 13/08/2016 | Malta, IMO 9403102, MMSI 249663000, 58,792 DWT
Symphony | 06/07/2016 | Puerto Cabello, Venezuela (Unknown/Venezuelan government) | 19/06/2016 | Liberia, IMO 9113381, MMSI 636016442, 24,483 DWT
Serendipity | 20/07/2016 | Paradip, India (Paradeep Phosphates Ltd) | 31/06/2016 | Marshall Islands, IMO 9438030, MMSI 538005500, 53,800 DWT
Ultra Saskatoon | 06/08/2016 | Vancouver, Canada (Agrium Inc) | 10/09/2016 | Panama, IMO 9448229, MMSI 373483000, 61,470 DWT
Leo | 09/08/2016 | Tauranga, New Zealand (Ballance Agri-Nutrients Ltd) | 22/09/2016 | Marshall Islands, IMO 9594638, MMSI 538004332, 56,581 DWT
Xing Rong Hai | 19/08/2016 | Portland/Geelong, Australia (Incitec Pivot Ltd) | Portland 20/09/2016, Geelong 25/09/2016 | Hong Kong, IMO 9725392, MMSI 477347300, 38,904 DWT
Megalon | 19/08/2016 | Barranquilla, Colombia (Monomeros S.A.) | 06/09/2016 | Panama, IMO 9413066, MMSI 372427000, 18,917 DWT
Ince Berlerbeyi | 22/08/2016 | Paradip, India (Paradeep Phosphates Ltd) | 04/10/2016 | Turkey, IMO 9599767, MMSI 271042993, 61,429 DWT
Ultra Daniela | 16/08/2016 | Vancouver, Canada (Agrium Inc) | 02/10/2016 | Liberia, IMO 9731705, MMSI 636092630, 61,288 DWT
Shandong Chong Wen | 07/09/2016 | Geismar, United States (PotashCorp Inc) | 25/09/2016 | Hong Kong, IMO 9592032, MMSI 477434600, 76,098 DWT
Ultramer | 16/09/2016 | Vancouver, Canada (Agrium Inc) | 21/10/2016 | Liberia, IMO 9705976, MMSI 636016489, 63,166 DWT
SBI Flamenco | 25/09/2016 | Klaipeda, Lithuania (Lifosa AB) | 07/10/2016 | Marshall Islands, IMO 9710579, MMSI 5380066022, 81,800 DWT
Jing Lu Hai | 01/10/2016 | Geismar, United States (PotashCorp Inc) | 18/10/2016 | Hong Kong, IMO 9747558, MMSI 477301100, 77,927 DWT
Albatross | 02/10/2016 | Puerto Cabello, Venezuela (Unknown/Venezuelan government) | 17/10/2016 | Panama, IMO 9427574, MMSI 352707000, 25,028 DWT
Topflight | 05/10/2016 | Napier, New Zealand (Ravensdown Co-op Ltd.) | 26/11/2016 | Panama, IMO 9278882, MMSI 371316000, 52,544 DWT
Tubarao | 03/11/2016 | Portland/Geelong, Australia (Incitec Pivot Ltd.) | Portland 12/12/2016, Geelong 16/12/2016 | Bahamas, IMO 9346160, MMSI 31102880, 53,350 DWT
Ultra Lanigan | 09/11/2016 | Vancouver, Canada (Agrium Inc) | 12/12/2016 | Panama, IMO 9520596, MMSI 373949000, 58,032 DWT
Ultra Integrity | 14/11/2016 | Vancouver, Canada (Agrium Inc) | 26/12/2016 | Marshall Islands, IMO 97408083, MMSI 538006751, 61,181 DWT
Kang Hing | 10/11/2016 | Paradip, India (Paradeep Phosphates Ltd) | 20/12/2016 | Hong Kong, IMO 9240823, MMSI 477022000, 52,828 DWT
Sophiana | 15/11/2016 | Tauranga, New Zealand (Ballance Agri-Nutrients) | 26/12/2016 | Marshall Islands, IMO 9738454, MMSI 538006303, 59,985 DWT
Mykali | 15/12/2016 | Napier, New Zealand (Ravensdown Co-op Ltd.) | 27/01/2017 | Bahamas, IMO 9503811, MMSI 311055700, 56,132 DWT
Tai Harvest | 23/12/2016 | Paradip, India (Paradeep Phosphates Ltd) | 22/01/2017 | Panama, IMO 9233428, MMSI 351143000, 51,008 DWT

." International Court of Justice, 16 Oct 1975

ISBN (print) 978-82-93425-15-1
ISBN (digital) 978-82-93425-16-8
https://fr.scribd.com/document/346281176/P-for-Plunder-2016
Hello, today I suddenly thought about creating an application that has limited use. I mean, after using an application for 30 days, it should not start any more. How can I do that? I can decide whether the day is the last day or not (I mean, using some date comparison functions I can decide if the day is the 30th day), but then how do I code the trial expiration?
Edited 2 Years Ago by Learner010

Is it a database application? If yes, then store the date in encrypted format and check it on load of your master form. You can also write the date value to the Windows registry; here is the link: Click Here. You can also save the date value in an .ini file after encryption.

I'd probably go with a simple licensing mechanism:

    using System;
    using System.Globalization;

    namespace BasicLicense
    {
        public class License
        {
            public string ProductCode { get; set; }
            public DateTime ExpiresOn { get; set; }
            public bool IsValid { get; set; }

            public License(string licenseCode, string productCode, string registeredTo)
            {
                ExtractLicense(licenseCode, productCode, registeredTo);
            }

            private void ExtractLicense(string licenseCode, string productCode, string registeredTo)
            {
                IsValid = false;
                try
                {
                    string licenseData;
                    using (var crypto = new Decryption(registeredTo, "ALLPURPOSELICENSE"))
                    {
                        licenseData = crypto.Decrypt(licenseCode);
                    }
                    var parts = licenseData.Split('|');
                    // License format: PRODUCT_CODE|EXPIRES_ON|"LIC"
                    if (parts.Length == 3 && parts[2] == "LIC")
                    {
                        ProductCode = parts[0];
                        // "yyMMdd": capital MM means months; lowercase mm would mean minutes
                        ExpiresOn = DateTime.ParseExact(parts[1], "yyMMdd", null, DateTimeStyles.None);
                        if (ProductCode == productCode && ExpiresOn >= DateTime.Today)
                        {
                            IsValid = true;
                        }
                    }
                }
                catch
                {
                    // Ignore the exception, the license is invalid
                }
            }
        }
    }

By default the application would be installed with a demo license where the expiration date is set 30 days after the time of installation. Once the trial is converted to a permanent license, the expiration date can be set to DateTime.MaxValue. Easy peasy.

Edit: Whoops, thought this was the C# forum.
I'm too lazy to convert the code, but it's relatively simple. ;)
Edited 2 Years Ago by deceptikon

    Public day As String
    Public month As String
    Public year As String
    day = My.Computer.Clock.LocalTime.Day
    month = My.Computer.Clock.LocalTime.Month
    year = My.Computer.Clock.LocalTime.Year
    If My.Settings.checked = False Then
        My.Settings.day = day
        My.Settings.month = month
        My.Settings.year = year
        My.Settings.checked = True
    Else
        If year = My.Settings.year Then
            If month = My.Settings.month Then
            Else
                If month = My.Settings.month + 1 Then
                    If day = My.Settings.day Then
                        MsgBox("Trial is over")
                    Else
                        If day > My.Settings.day Then
                            MsgBox("Trial is over")
                        End If
                    End If
                Else
                    If month <> My.Settings.month + 1 Then
                        MsgBox("Trial is over")
                    End If
                End If
            End If
        Else
            MsgBox("Trial is over")
        End If
    End If
    End Sub
    End Class

If you got the proper solution then please mark the thread as solved. Thank you - Deep Modi

The above code is not working. My IDE does not understand the statement at line 8.

OK, but I want to say that the code is correct; the reason is that the code was placed in a different place. Don't worry, I will give an example program (this may take time, as I have to create it). Can I know where you want to save the serial (activation)? I mean, using the registry, or do you want to save it in program settings? If you want to learn, then you should use program settings; if you want to publish the program, then use the registry like IDM does, e.g. reg/.../program name/activation. Reason: you can do this with the installer directly... What the program does: if you are using the trial program, the program will show "date expired", else it will continue as a trial.

I am not much familiar with the registry, and therefore I think that going with program settings will be good enough for me to understand the code. I want to create a trial of an application which asks for activating the application for up to 30 days. After 30 days the application should not start.
Or, if the user enters the correct serial number, then it stops asking for the serial number (it will be like an activated version). Hope I explained it in clear words.

OK, no problem dude, I will create a tutorial program on this for you.
Edited 2 Years Ago by Deep Modi

As you told me, you are having the error on line 8, but it's all right, nothing is wrong there; I did the same thing, copy and paste. Yes, I created the settings: day, month, year as String and checked as Boolean, with the value set to False — this may be where your problem arises, nothing else — and it works too.
Edited 2 Years Ago by Deep Modi

Hey, check the full project here: Remember I added the serial activation, and the serial is: 1234-5678-90. If the trial is over, the page with the buy option will be active; before starting the program, a buy-or-use-trial form will be shown, and more. (And please also give me feedback.)
Edited 2 Years Ago by Deep Modi

Project File: My trial form app in the previous zip shows you how to create a trial form. But as we discussed, that procedure has its drawbacks, so I attached a new procedure here; this procedure is used by IDM and many others. In this zip, I just explain how to use the registry in VB.NET (as you told me about in PM/shoutbox). The rest of the days-and-months procedure must be completed by you. For any help, msg me.
Edited 2 Years Ago by Deep Modi
https://www.daniweb.com/programming/software-development/threads/470849/how-to-create-trial
Stream file changes

How to get a continuously monitoring stream on a File? It seems that File.openRead's stream stops at the last byte of the file's current contents; adding more data to the file no longer triggers the stream's onData. How can I keep the file stream open and manually cancel the subscription?

See also questions close to this topic

- Automating File Moves using Power Shell

How do you automate file moves using a PS script (without the scheduler)? I would like to move a file from local storage to a network drive.

- File access denied with avast antivirus

I'm working on my system actualization and it works perfectly, but for some reason I have problems on PCs with Avast antivirus. I tried to uninstall the antivirus or just disable the protection, but it does not work. If someone has experience with this problem, I'd appreciate any help.

    procedure TMAINFORM.CopyFile(const AFrom, ATo: String);
    var
      FromF, ToF: file of byte;
      Buffer: array[0..4096] of char;
      NumRead: integer;
      FileLength: longint;
      t1, t2: tDatetime;
      maxi: integer;
    begin
      terminado := false;
      AssignFile(FromF, AFrom);
      reset(FromF);
      AssignFile(ToF, ATo);
      rewrite(ToF);
      FileLength := FileSize(FromF);
      updForm.show;
      with updform.ProgressBar1 do
      begin
        Min := 0;
        Max := FileLength;
        updform.TotalLabel.Text := formatfloat('0.00 MB', Max/1024/1024);
        maxi := trunc(Max / 4096);
        while FileLength > 0 do
        begin
          BlockRead(FromF, Buffer[0], SizeOf(Buffer), NumRead);
          FileLength := FileLength - NumRead;
          BlockWrite(ToF, Buffer[0], NumRead);
          Min := Min + 1;
          updform.downLabel.Text := formatfloat('0.00 MB', value/1024/1024);
          Application.ProcessMessages;
          value := value + NumRead;
        end;
        CloseFile(FromF);
        CloseFile(ToF);
      end;
      terminado := true;
    end;

- Creating a directory and storing a javascript file in drupal 7

I am facing confusion in creating a directory. I am trying to install a module in Drupal 7 with the name View Slideshow. It requires some prior installations.
Here is the website that gives the instructions -> Here it is written that -> Create a directory within sites/all/libraries named jquery.cycle and save the jQuery.Cycle JavaScript file there. So does it mean I need to create a folder with the name jquery.cycle and store the code of the jQuery.Cycle JavaScript file in the jquery.cycle folder, with the same name jquery.cycle? It would become sites/all/libraries/jquery.cycle/jquery.cycle? Please help. I don't understand what will be the name of the folder and what will be the name of the file inside that folder.

- Reading data on server from multiple clients in C#

I have successfully been able to establish a connection from 2 client computers to a third server computer. I want to read the string from multiple clients without having to duplicate the Listeners method for each client. How can I modify the below server code to read strings from multiple clients? Is the problem only having one string variable theString in the server code?

    static void Listeners(object state)
    {
        TcpListener listener = state as TcpListener;
        using (Socket socketForClient = listener.AcceptSocket())
        {
            if (socketForClient.Connected)
            {
                Console.WriteLine("Client:" + socketForClient.RemoteEndPoint + " now connected to server.");
                using (NetworkStream networkStream = new NetworkStream(socketForClient))
                using (System.IO.StreamWriter streamWriter = new System.IO.StreamWriter(networkStream))
                using (System.IO.StreamReader streamReader = new System.IO.StreamReader(networkStream))
                {
                    try
                    {
                        while (true)
                        {
                            string theString = streamReader.ReadLine();
                            if (string.IsNullOrEmpty(theString) == false)
                            {
                                Console.WriteLine("Kinect1:" + theString);
                            }
                        }
                    }
                    catch (Exception ex)
                    {
                        Console.WriteLine("Error: " + ex.Message);
                    }
                }
            }
        }
        Console.WriteLine("Press any key to exit from server program");
        Console.ReadKey();
    }

    public static void Main()
    {
        TcpListener[] listeners = { new TcpListener(15), new TcpListener(10) };
        Console.WriteLine("************This is Server program************");
        Console.WriteLine("How many clients are going to connect to this server?:");
        int numberOfClientsYouNeedToConnect = int.Parse(Console.ReadLine());
        foreach (TcpListener listener in listeners)
        {
            listener.Start();
            for (int i = 0; i < numberOfClientsYouNeedToConnect; i++)
            {
                Thread newThread = new Thread(new ParameterizedThreadStart(Listeners));
                newThread.Start(listener);
            }
        }
    }

- Deserialize Json Stream

I work with this message, for example, and I push it with a JSON POST message:

    [{ " }] }]

I have this code, which works properly for deserializing the JSON message when I use a string variable:

    string jsonString = "[{\\"}]}]";
    List<KMessage> kMsg = JsonConvert.DeserializeObject<List<KMessage>>(jsonString);
    foreach (Parklist item in kMsg[0].Parklist)
    {
        Console.WriteLine("Id: " + item.ParkId + " State: " + item.ParkState + " DateTime: " + item.ParkDateTime);
    }

I would like to do the same thing with my stream:

    [ServiceContract]
    public interface IKService
    {
        [DataContractFormat]
        [OperationContract]
        [WebInvoke(
            Method = "POST",
            BodyStyle = WebMessageBodyStyle.Bare,
            RequestFormat = WebMessageFormat.Json,
            ResponseFormat = WebMessageFormat.Json,
            UriTemplate = "/state")]
        void Service(Stream KStream);
    }

I tried many methods, but it doesn't work (ParkList is empty again).
For example:

    using (StreamReader reader = new StreamReader(KStream))
    using (JsonTextReader jsonReader = new JsonTextReader(reader))
    {
        JsonSerializer ser = new JsonSerializer();
        List<KMessage> kMsg = ser.Deserialize<List<KMessage>>(jsonReader);
        foreach (Parklist item in kMsg[0].Parklist)
        {
            Console.WriteLine("Id: " + item.ParkId + " State: " + item.ParkState + " DateTime: " + item.ParkDateTime);
        }
    }

For information, I can't use "DataContractJsonSerializer" because deserializing the list doesn't work for me (deserializing Json data which contain an array in c#). Thank you very much for your help.

- Transform list of Objects to the list of strings using streams in Java

I declared a class:

    public class Entity {
        String id;
        public Entity(String id) {
            this.id = id;
        }
    }

Now I have a list of Entity objects:

    List<Entity> entities = Arrays.asList(new Entity("sd1"), new Entity("sd2"));

Is it possible to transform entities to a list of strings with their ids using streams? The result list should contain the 2 string values sd1 and sd2.

- Flutter - Space Evenly in Column does not work as expected

Screenshot of the problem. I am trying to put elements in the red box with even spaces between each other (like giving weight 1 to each of them). To do so, I put "mainAxisAlignment: MainAxisAlignment.spaceEvenly" in the parent Column, but it is not working.

    Container(
        margin: EdgeInsets.only(bottom: 8.0),
        child: Row(
          crossAxisAlignment: CrossAxisAlignment.start,
          children: <Widget>[
            CachedNetworkImage(
                imageUrl: "{movieDetails.posterPath}",
                height: 120.0,
                fit: BoxFit.fill),
            Expanded(
                child: Container(
                    margin: EdgeInsets.only(left: 8.0),
                    child: Column(
                      mainAxisAlignment: MainAxisAlignment.spaceEvenly, // NOT WORKING!
                      children: <Widget>[
                        generateRow(Strings.released, movieDetails.releaseDate),
                        generateRow(Strings.runtime, movieDetails.runtime.toString()),
                        generateRow(Strings.budget, movieDetails.budget.toString())
                      ],
                    )))
          ],
        ))

This is the code for the movie image and the red box. I have created a Row which has 2 elements. The first one is the image, the second one is for the red box. How can I solve this problem?

- Openstreetmap in Flutter?

Can we use OpenStreetMap in Flutter, or can we only use Google Maps? I wanted another way to display a map, because when using a Google Maps API key they need to know a credit card and I don't have one.

- Dart: efficiently rebuilding tree structures

I have an architectural problem. I'm building a computer algebra system in Dart (though the language is largely irrelevant) and want immutable expression trees. BuiltValue seems like the perfect base to start from, but I'm pondering the best way of structuring the builder.

Use case: given an expression tree and some manipulation, construct the manipulated expression tree efficiently. Examples:

    // 2 + 3 -> 5
    Sum([Int(2), Int(3)]).simplify() == Int(5)

    // (x + y)^2 -> x^2 + 2*x*y + y^2
    Power(Sum([Symbol('x'), Symbol('y')]), Int(2)).expand() == Sum(...)

Most manipulations will be the result of multiple chained manipulations, and the more I can avoid rebuilding the expressions at each step the better. Sometimes this won't be possible - e.g. after duplications.

Naively I could create a separate builder for each expression type - IntBuilder, SumBuilder etc. - but during these manipulations the root type can change. Things I've considered:

- per-class builders - though I'm unsure how they would handle root-type changes. Simplifications like Sum([x]).simplify() == x (or the first example above) wouldn't be too hard to deal with after rebuild, but I'm not sure how they'd work with examples like the second above.
- a single ExpressionBuilder that tracks operands and some enum-like object identifying the resulting subclass.

Am I missing something really obvious?
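Returning to the original question at the top of this page: a single File.openRead pass is finite by design, so continuous monitoring is usually built either on an OS change-notification facility or by polling for newly appended bytes. The polling idea, sketched here in C rather than Dart — the function name and structure are illustrative assumptions, not any Dart API:

```c
/* Minimal poll-based "tail": remember how many bytes we have already
   consumed and, on each tick, read anything appended past that offset. */
#include <stdio.h>

/* Reads bytes appended beyond *offset, returns count consumed (-1 on error). */
long read_new_bytes(const char *path, long *offset) {
    FILE *f = fopen(path, "rb");
    long consumed = 0;
    int c;
    if (f == NULL)
        return -1;
    if (fseek(f, *offset, SEEK_SET) == 0) {
        while ((c = fgetc(f)) != EOF) {
            putchar(c);        /* the "onData" handler for each new byte */
            consumed++;
        }
        *offset += consumed;
    }
    fclose(f);
    return consumed;
}
```

Calling read_new_bytes in a loop with a sleep between calls gives the continuous stream; cancelling the "subscription" is simply breaking out of the loop.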
http://quabr.com/48758840/stream-file-changes
This chapter will introduce pointers. Pointers are very important when you undertake C programming. Pointers can be daemons and angels according to their usage: if you are expert in handling pointers you are unbeatable, but if you fail to use pointers the way they are meant to be used, they can be very dangerous.

The last chapter was dedicated to arrays. Computer memory is organized as contiguous locations, much like an array. Every location has an identification number, and this identification number is known as an Address. Pointers hold addresses. So a pointer can be rightly defined as a variable which contains the memory location of another variable rather than its content. A pointer is a derived data type. If the size of RAM is 64K, then a pointer can take values from 0 to 65535. Pointers point to variables, and those variables can be:

- int, float, char, double, void (basic data types)
- functions
- arrays
- structures
- unions

You may be wondering why you should study a concept which is apparently a daemon when not used correctly. The answer is its advantages, so let us have a glance at the advantages of pointers:

- Pointers can point to various data structures
- Since pointers handle addresses directly, manipulating data at various memory locations is easy
- When pointers are used properly, they improve the simplicity of a program
- Coding is more efficient and compact when we use pointers
- With the use of pointers we can return multiple results from the same function
- Pointers make dynamic memory allocation possible

14.1 Pointer Constants, Pointer Values and Pointer Variables

Pointers involve three major concepts: pointer constants, pointer values and pointer variables.

14.1.1 Pointer Constants

A computer stores all its information in memory. Locations in memory are limited, and memory is divided into smaller units called Storage Cells.
Each storage cell can hold one byte of information, and each storage cell/location has an associated address. Since 1K equals 1024 bytes, a 64K memory gives 64 x 1024 = 65536 locations. Physically, these memory locations are organized as an even bank and an odd bank, but logically they are numbered from 0 to 65535. As the names suggest, the odd bank contains all memory locations with odd addresses and the even bank contains those with even addresses.

Figure - Even and odd bank

Memory addresses cannot be changed. As a programmer, you can only use them to store data values, and these fixed memory addresses are known as Pointer Constants.

14.1.2 Pointer Values

Pointer values are assigned by the system. We have 65536 pointer constants in a 64K memory. The system assigns these memory addresses to variables, and the memory addresses assigned to variables by the system are called Pointer Values.

Example: int x=50, y=60;

Explanation: The above declaration instructs the system to reserve space for two integer values in memory, associate the variables x and y with these memory locations, and finally store 50 and 60 in locations x and y respectively. In the memory map it would look as follows:

Figure - C Pointer Values

The address of a variable can be accessed using the address operator (&). There is a rule of thumb for the address operator: it can be used only with those kinds of variables which are valid when placed on the left side of the assignment operator. In short, we cannot use the address operator with a constant, an expression, or the name of an array. Constants and expressions are easy to understand, but what about arrays? An array is a user-defined data type which holds a similar set of data items, and there is a way to point to an array.
For example, if we have an array named arr, the address of the first location of the array can be obtained using the following expression:

&arr[0] or arr

Let us assume that 65532 and 65534 are the memory locations assigned to variables x and y respectively. These addresses can be retrieved using the address operator (&); in this scenario, as &x and &y respectively. Pointer values are not constant values: they may be different after each execution of the program.

14.1.3 Pointer Variables

A pointer variable is defined as a variable which holds the address of another variable. In terms of pointer values, a pointer variable holds a pointer value, and as you know, a pointer value is the address of another variable.

Example: int x=50; and p=&x;

Explanation: Here, p is the pointer variable, and the address of variable x is stored in p. The address of variable x (65534) is assigned to p. The address of variable p itself is 60000, but the value stored in this variable is the address of x, which is 65534. Variable p is our pointer variable; because p points to x, it is called a pointer. A pointer variable can only hold the address of another variable.

Two operators are used with pointers:

1. Ampersand or Address operator (&)
2. Asterisk or Indirection operator (*)

14.2 Pointers and accessing variables

A series of steps is required to access variables via pointers:

1. A data variable has to be declared before using it
2. A pointer variable has to be declared
3. The pointer variable has to be initialized
4. Data can then be accessed via the pointer variable

14.2.1 Declaring and defining pointers

We need to declare a pointer variable before using it. The format is as follows:

data_type * identifier ;

Explanation: Here, data_type is the data type, which can be basic (int, float, char, double), derived, or user-defined; identifier represents the name of the pointer variable; and * indicates that identifier is a pointer variable.

The next step is to initialize the pointer variable. First we declare a data variable, then declare the pointer variable, and then initialize it. In other words, when we talk about initialization, we mean associating the address of a variable with the pointer variable.

Step 1: int y;   // declaring data variable
Step 2: int *p;  // declaring pointer variable
Step 3: p=&y;    // initializing pointer variable

One pointer can point to more than one memory location over time, which improves the flexibility of programming. Let us consider the following program snippet:

int i=12, j=14, k=16, l=18;
int *p;
p=&i;
// other statements
p=&j;
// other statements
p=&k;
// other statements
p=&l;
// other statements

Figure - C Logical Representation

Explanation: In the above scenario, we can see the same pointer variable p pointing, in turn, to four different memory locations. Hence the same pointer can be used to point to different data variables.
identifier represents the name of pointer variable * represents that identifier is a pointer variable Next, step is to initialize the pointer variable. Here first we will have to declare a data variable followed by declaring pointer variable and then we initialize the pointer variable. In other words, when we talk about initialization, we mean associating address of an variable to pointer variable. Step1: int y; //declaring data variable Step 2: int *p; //declaring pointer variable Step 3: p=&y; //initializing pointer variable One pointer can point to more than one memory location so it improves flexibility of programming. Let us consider the following program snippet. Example: int i=12, j=14, k=16, l=18; int *p; p=&i; // other statements p=&j; // other statements p=&k; // other statements p=&l; // other statements C Logical Representation Explanation: In the above scenario, we can see same pointer variable p is pointing to four different memory locations. Hence we can say same pointer can be used to point different data variables.: # include <stdio.h> int *q=NULL; Here, pointer q is initialized to NULL ('\0' or 0) and it can be used by programmer to access data variable if and only if it does not contain NULL. Suppose you declare a pointer variable and fail to initialize it or in other words which does not have a valid address then this type of pointer is called Dangling pointer. Memory location will be allocated but as it was not initialized then this type of pointer will result in garbage value in memory. Dangling pointers should be avoided as these errors are very difficult to debug when you deal with lot of pointers or pointers in large and/or complex programs. We can also access data of a variable using pointers. Asterisk (*) operator is used for this purpose. Consider the following snippet: Example: int y=50; int z; int *p; p=&y; z=*p;. 
Now, if you remember, we saw that one pointer can point to different variables; similarly, one variable can be pointed to by different pointers. Do not worry! Let us consider the following snippet:

int r=50;
int *x, *y, *z;
x=&r;
y=&r;
z=&r;

Explanation: In the above scenario, we have three pointer variables, namely x, y and z, which are of type integer. We have a data variable r, which is also of type integer, initialized to 50. Pointers x, y and z all point to the same address, that of variable r.

Let us consider a simple program illustrating the usage of pointers so that we can see their practical implementation:

/* Program to illustrate pointers */
# include <stdio.h>
# include <conio.h>
# include <stdlib.h>

int * largestnumber (int *x, int *y)
{
    if (*x > *y)
        return x;
    else
        return y;
}

void main()
{
    int i, j, *large;
    clrscr();
    printf("Please enter two values\n");
    scanf("%d%d", &i, &j);
    large = largestnumber(&i, &j);
    printf("Largest of two numbers (%d,%d)=%d\n", i, j, *large);
    getch();
}

Figure - C Program to illustrate pointers

Figure - After compiling C Program to illustrate pointers

Figure - Output of C Program to illustrate pointers

Folks!! We are concluding this chapter with this. The next chapter is dedicated to strings. Thank you.
http://wideskills.com/c-tutorial/14-c-pointers
Class::MOP

This documentation is admittedly sparse on details; as time permits I will try to improve it. For now: a meta object protocol is a set of abstractions of the components of an object system (typically things like classes, objects, methods, object attributes, etc.). These abstractions can then be used to inspect and manipulate the object system they describe.

Explicit MOPs, however, are less common, and depending on the language can vary from restrictive (Reflection in Java or C#) to wide open (CLOS is a perfect example).

This is not a class builder so much as it is a class builder builder. My intent is that an end user does not use this module directly, but instead this module is used by module authors to build extensions and features onto the Perl 5 object system. This module is specifically for anyone who has ever created or wanted to create a module for the Class:: namespace. The tools which this module provides will hopefully make it easier to do more complex things with Perl 5 classes by removing such barriers as the need to hack the symbol tables, or understand the fine details of method dispatch.

This module was designed to be as unintrusive as possible. Many of its features are accessible without any change to your existing code at all. It is meant to be a compliment to your existing code and not an intrusion on your code base. Unlike many other Class:: modules, this module does not require that you subclass it, or even that you use it within your module's package.

It is a common misconception that explicit MOPs are performance drains. But this is not a universal truth at all; it is a side-effect of specific implementations. For instance, using Java reflection is much slower because the JVM cannot take advantage of compile-time optimizations.

A class's metaclass must be compatible with the metaclasses of the class's ancestors. Downward metaclass compatibility means that the metaclasses of a given class's ancestors are all either the same as (or a subclass of) that metaclass. Here is a diagram showing a set of two classes (A and B) and two metaclasses (Meta::A and Meta::B) which have correct metaclass compatibility both upwards and downwards.
     +---------+     +---------+
     | Meta::A |<----| Meta::B |      <....... (instance of)
     +---------+     +---------+      <------- (inherits from)
          ^               ^
          :               :
     +---------+     +---------+
     |    A    |<----|    B    |
     +---------+     +---------+

accidentally create an incorrect type of metaclass for you. This is a very rare problem, and one which can only occur if you are doing deep metaclass programming. So in other words, don't worry about it.

The protocol is divided into 4 main sub-protocols:

The Class protocol

This provides a means of manipulating and introspecting a Perl 5 class. It handles all of the symbol table hacking for you, and provides a rich set of methods that go beyond simple package introspection. See Class::MOP::Class for more details.

The Attribute protocol

This provides a consistent representation for an attribute of a Perl 5 class. Since there are so many ways to create and handle attributes in Perl 5 OO, this attempts to provide as much of a unified approach as possible, while giving the freedom and flexibility to subclass for specialization.

The Instance protocol

This provides an abstraction over the instance structure, which can range from a simple HASH ref to other types of references. Several examples are provided in the examples/ directory included in this distribution. See Class::MOP::Instance for more details.

IS_RUNNING_ON_5_10

We set this constant depending on what version of perl we are on; this allows us to take advantage of new 5.10 features and stay backwards compatible.

Class::MOP::load_class ($class_name)

This will load a given $class_name and, if it does not have an already initialized metaclass, will initialize one for it. This function can be used in place of tricks like eval "use $module" or using require.

Class::MOP::is_class_loaded ($class_name)

This will return a boolean depending on whether the $class_name has been loaded. NOTE: This does a basic check of the symbol table to try and determine, as best it can, if the $class_name is loaded; it is probably correct about 99% of the time.

Class::MOP::check_package_cache_flag ($pkg)

This will return an integer that is managed by Class::MOP::Class to determine if a module's symbol table has been altered. In Perl 5.10 or greater, this flag is package specific.
However, in versions prior to 5.10, this will use the PL_sub_generation variable, which is not package specific.

Class::MOP::get_code_info ($code)

This function returns two values: the name of the package the $code is from, and the name of the $code itself. This is used by several elements of the MOP to determine where a given $code reference is from.

Class::MOP::subname ($name, $code)

NOTE: DO NOT USE THIS FUNCTION, IT IS FOR INTERNAL USE ONLY! If possible, we will load the Sub::Name module and this will function as Sub::Name::subname does; otherwise it will just return the $code argument.

Metaclass cache functions

Class::MOP holds a cache of metaclasses. The following are functions (not methods) which can be used to access that cache. It is not recommended that you mess with this; bad things could happen. But if you are brave and willing to risk it, go for it.

Class::MOP::get_all_metaclasses

This will return a hash of all the metaclass instances that have been cached by Class::MOP::Class, keyed by the package name.

Class::MOP::get_all_metaclass_instances

This will return an array of all the metaclass instances that have been cached by Class::MOP::Class.

Class::MOP::get_all_metaclass_names

This will return an array of all the metaclass names that have been cached by Class::MOP::Class.

Class::MOP::get_metaclass_by_name ($name)

This will return a cached Class::MOP::Class instance, or nothing if no metaclass exists by that $name.

Class::MOP::store_metaclass_by_name ($name, $meta)

This will store a metaclass in the cache at the supplied $name key.

Class::MOP::weaken_metaclass ($name)

In rare cases it is desirable to store a weakened reference in the metaclass cache. This function will weaken the reference to the metaclass stored in $name.

Class::MOP::does_metaclass_exist ($name)

This will return true if there exists a metaclass stored in the $name key, and false otherwise.

Class::MOP::remove_metaclass_by_name ($name)

This will remove the metaclass stored in the $name key.
Stevan Little <stevan@iinteractive.com> with contributions from: Brandon (blblack) Black Guillermo (groditi) Roditi Matt (mst) Trout Rob (robkinyon) Kinyon Yuval (nothingmuch) Kogman Scott (konobi) McWhirter This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
http://search.cpan.org/~stevan/Class-MOP/lib/Class/MOP.pm
User talk:Sycamore/Archive6 From Uncyclopedia, the content-free encyclopedia Thanks for the great welcome, this site is pretty cool. I just wanted to know if the page i have created is acceptable and maybe hopefully it won't be deleted? it's located here: "Wikipedia Chase!". its my first time writing a page so hopefully its ok. when its finished i will add pictures etc, but i just wanna know before i attempt to finish it, will the page be worthy to stay up? Oh and maybe you could maybe tell me how i could fix it up abit? Thanks for ya help --The Italian Stallion 08:05, 23 October 2008 (UTC) - Yeah, but it givea an idea on things like formatting etc:)--Sycamore (Talk) 09:20, 23 October 2008 (UTC) Rape! HeeHee, this never get's boring! =P SK Sir Orian57Talk RotM 12:14 1 October 2008 - Groovy, still that was few hours... — Sir Sycamore (talk) 12:34, 1 October 2008 (UTC) - <rape><rape><Rape>You don't fool me!</rape></rape></rape> SK Sir Orian57Talk RotM 14:08 7 October 2008 - You can't rape it twice, you'll have to wait till I archive the lot... — Sir Sycamore (talk) 14:11, 7 October 2008 (UTC) - Oh and thanks for all the fickses to my article. did you like:10 7 October 2008 - Yeah its alright - its difficult to work with the Why? namespace. I saw it come up in recent changes and tided it up a little:) — Sir Sycamore (talk) 14:24, 7 October 2008 (UTC) - I actually like the Why namespace, two out of my three features have been from there. I guess it's a personal thing, cos I find the parody articles really hard to do. In why? you can just make up a story. Yeah like I said thanks for the tweaks! =):28 7 October 2008 thanks thanks for the great review, syc. this was a total rewrite that i read over so many times that the words lost all meaning, so all of your insight was extremely helpful. 
we will continue to differ on minor issues like TOC (this might be one where i'll add it if i can get it to look right by rearranging sections like you suggested) and image placement (you're also causing me to second guess myself on this one, as you bring up valid points to back up your argument). anyway, thanks again, and keep up the good work, you'll be on the top 5 in no time! also, great work on France! 15:12, 7 October 2008 (UTC) Ouch! Style mine writing have terrible? Ha! Anyway, thanks for your review. I don't really agree with what you said, but I respect your opinion. Thanks for reviewing. • • • Necropaxx (T) {~} 16:29, Oct 9 - Humour is subjective, not personal, sometimes the impact of the jokes is going to work differently with some than others:) — Sir Sycamore (talk) 16:43, 9 October 2008 (UTC) - Thanks for not reviewing anything of mine. ;) -- just had Jaffa cakes for the first time Yummy! ~ 11:47, 10 October 2008 (UTC) - The first time, you poor thing? Come have some of my stash:) — Sir Sycamore (talk) 11:50, 10 October 2008 (UTC) - Wait until you discover that they make other flavours..... -- hear the pinapple kind are the best... the best technique is to skim the top off with your bottom teeth in a digusting fashion. — Sir Sycamore (talk) 12:02, 10 October 2008 (UTC) UnSignpost: October 10th 2008 Now with 20% more ninjas! October 9th, 2008 • Twenty-First Issue • Bursting with Crunchy Goodness! why? Why did you undo my edit of moby? explain yourself. TELECOMEWAN 15:07, 11 October 2008 (UTC) - I had suspected vandalism, there was not much left of the article, as per your revision of it. Should you be keen to redo the article it would be far better to create User:TELECOMEWAN/Moby - You can copy the content of the old one over and re-work it, when it finished you can then paste over the current Moby article. If you would like to keep it as it is I can undo my revert and leave you to it:) — Sir Sycamore (talk) 15:13, 11 October 2008 (UTC) re:Hello. 
hey man thanks for the welcome. i have only been here for a bit but i can tell i like this wiki. it is very funny. - corn 17:02, 11 October 2008 (UTC)

Removed How to Shoop Da Woop ICU Tag

Ok, so, I pretty much went through and fixed all the spelling/grammar/funny/incoherence problems that the article had. I also added pictures that make sense and are pretty funny. Paragraphing is better now, and overall, the article pretty much makes sense now. So, in short, I feel that it was ok to remove the ICU tag. If you have a problem with this, you are welcome to read the article over again and decide whether or not it should still be in the ICU. --Kowaru 02:34, 12 October 2008 (UTC) - Preferably let an admin deal with ICUs in future. Yeah it looks alright, I moved it to: HowTo:Shoop Da Woop. Sorted:)? — Sir Sycamore (talk) 08:29, 12 October 2008 (UTC) - Ya, thanks for the move, I was gonna do it myself, but umm, I was too lazy/afraid the article would asplode. Oh, and it says in the ICU thing that you can remove the tag yourself if you see fit? Is that still allowed, or is it understood that you should just let the admin deal with it? --Kowaru 18:16, 12 October 2008 (UTC)

What you said on my talk page

After 3 hours of trying to talk to you, and failing, I have no idea whether the problem with one of the sigs is the height or the length. As for the categories, I gave up clearing them when TKF blocked me. JudgeZarbi TALK 11:01, 12 October 2008 (UTC) - Err...Zarbi, I'd remove that block me part of my sig. Might entice other trigger happy admins to click it. ~ 11:09, 12 October 2008 (UTC) - The other sig is a bit massive:) - The code on mine for size is the easiest: Sycasig. With regards to the other stuff (you said you wanted to have a go with some more of the quality control stuff?), check out these, UN:WYCD and UN:PEE - the last one is really needed at the moment, check out the guidelines, a keen chap such as yourself will take to it like water.
Have fun:) — Sir Sycamore (talk) 11:11, 12 October 2008 (UTC) - Mordillo: Right...if I have to... - Sycamore, I'll do my best at UN:PEE, but don't complain when I get banned for doing that too. I'll resize my sig too. JudgeZarbi TALK 11:19, 12 October 2008 (UTC) - I'd drop TKF a line about it. Not sure what the reason was, but must have been good. After all, the cabal does not make mistakes. Neither does it exist. ~ 11:21, 12 October 2008 (UTC) - I got banned for "using pretty bad judgement on VFD" apparently. So when I try to help, all I get is a ban. JudgeZarbi TALK 11:23, 12 October 2008 (UTC) Take it easy, remember you're just starting out, you will make mistakes. I still hash things up all the time;) I think TKF may have been "Excessive" on many of his bans lately — Sir Sycamore (talk) 11:28, 12 October 2008 (UTC) - I'll try to keep that in mind. JudgeZarbi TALK 11:35, 12 October 2008 (UTC) I've done my first pee review! What do you think? JudgeZarbi TALK 11:51, 12 October 2008 (UTC) - Top notch for a first one:), it could probably be more "in depth" i.e. you go through the sections picking out any weaker points - don't be afraid to be a bit critical (but always fair), they like anything take a few to get into the swing of it if you know what I mean:) — Sir Sycamore (talk) 15:51, 12 October 2008 (UTC) - Yeah, I think I'll keep to doing one a day for now while I get used to it. JudgeZarbi TALK 20:24, 12 October 2008 (UTC) - Lookin' good:) — Sir Sycamore (talk) 17:24, 13 October 2008 (UTC) Help me, i am a n00b!!! i don't even know what this means!!:(Relgap 17:20, 13 October 2008 (UTC)) How can i get started? - Calm down there! - Noob just means that you're a new user on the site, to get started have a look at this:D — Sir Sycamore (talk) 17:21, 13 October 2008 (UTC) Three things 1. Are you a sysop? 2.
Is there a faster way to make subpages than by making a link and then creating them? 3. how do I make my signature look like TealwispBitch me out? [[User:Tealwisp|Tealwisp]]<small>[[User Talk:Tealwisp|Bitch me out]]</small> 01:09, 14 October 2008 (UTC)

1. No 2. Not that I know of - you can add a /article subpage title in your browser as well (especially from your userpage) 3. You need to create a subpage and set it as your sig: Firstly create User:Tealwisp/sig and add the code of your sig to the page (remember to save). From there go to My Preferences (at the top of your screen); about a quarter of the way down there is an empty box with a title of "signatures" - paste this code into the empty space: {{SUBST:nosubst|user:Tealwisp/sig}}. Then click below that on Raw Signature - and click Save Preferences at the bottom. From then on, every time you type the four ~~~~ your sig will appear with a time stamp:) — Sir Sycamore (talk) 07:10, 14 October 2008 (UTC)

plz help me srry im just another helpless n008. Can you help me with hyperlinks? Alexander the Great 13:24, 14 October 2008 (UTC) user:Alexander the Great

Hi there, There are two main kinds of linking you can do. The first one is what I think you asked for: - *[ Tony Visconti] = Or just for a normal link within the site you do: - [[Tony Visconti]] = Is that OK? — Sir Sycamore (talk) 16:48, 14 October 2008 (UTC)

Well, I mean how to change the title of the hyperlink while maintaining the same link? I'm still stuck...... Alexander the Great 09:49, 16 October 2008 (UTC)

- mmm... Do you mean moving the article to a different title, or do you mean changing the title at the top so it looks like something else, or do you mean having links that are different to their titles, as in [[User:Sycamore|Cool]] = Cool, or do you mean something else entirely?--Sycamore (Talk) 17:08, 16 October 2008 (UTC)

BobaCartman Thanks for your Comment, Sycamore.
BobaCartman 16:12, 15 October 2008 (UTC) Little Sister I'm working on the little sister article, and I think I've got something. It's incomplete, but that's because I can't decide between two possible outcomes.--WhySoSerious 21:19, 17 October 2008 (UTC) - Its quite funny already - I think it could do with an image, and you use really short sections, its an idea to use subsectiosn or a good paing of paragraphs than to have one/two sentence sections - You have not been specific about the outcomes you are thinking of, so I'll let you get back to me on that one:)--Sycamore (Talk) 08:59, 18 October 2008 (UTC) Thanks for the review, man! I went along with what you said and added an alternate ending. What do you think? --Kglee 02:06, 19 October 2008 (UTC) - Looks much better, its got the right pace thoroughout now - good stuff here:)--Sycamore (Talk) 08:02, 19 October 2008 (UTC) For your moderately fast reaction to a vandal. Sir Modusoperandi Boinc! 10:30, 19 October 2008 (UTC) - Moderate?--Sycamore (Talk) 10:31, 19 October 2008 (UTC) - Yes. You only started reverting his edits in the same minute he made them. Once you level up, you'll get psi powers that improve your speed to before he made them. By level forty, you'll be telling guards that those aren't, in fact, the droids that they are looking for...and they'll believe you. Sir Modusoperandi Boinc! 10:36, 19 October 2008 (UTC) - I have much to learn about the true nature of the force, still this black life support suit is like sooo Cyber....--Sycamore (Talk) 11:26, 19 October 2008 (UTC) - When you level up, pick Force Boogie. It's the best Force power. Force Afro is a cool power, too. Sir Modusoperandi Boinc! 11:47, 19 October 2008 (UTC) - I think you'll find Force Wedgie is the most popular Force Power....also the main reason Yoda walks with a stick) - Only the jock Jedi get Force Wedgie. Nobody here is a jock. 
Also, no jocks become Jedi (as they're too busy getting drunk and laid), making my first sentence a zen koan of some kind. To a real jock, comedy is a wedgie, rather than the serious business that it so clearly is. Sir Modusoperandi Boinc! 12:52, 19 October 2008 (UTC) PeeReview Yo Sycamore, thanks for not ignoring my article and reviewing it. That was a very fair review. Thanks also for nominating the Argument article for featurement - last time I looked it was going O.K. --Knucmo2 14:56, 20 October 2008 (UTC) signpost hey sycamore, any chance you can deliver the latest unsignpost? 20:27, 20 October 2008 (UTC) UnSignpost: 21 October 2008 Now with 20% more ninjas! October 16th, 2008 • Twenty-Second Issue • Now with 40% more Batman! Ok so I'm very new to this and I'm not sure I've done anything properly... I find this wiki fascinating and would like to be a part of it. I've read most of the beginner's guide stuff and I understand most. I'm afraid I'm not really computer savvy but I love to write and I like funny. The main things I'm looking for are how not to get kicked out or banned or voted off, etc. I added some things to the unstory and I hope they are funny. Sorry, but I feel like I've wasted your time here, but it would waste mine too if I deleted this so I'll just say thanks and good day.Lord Calvert 21:19, 21 October 2008 (UTC)Lord Calvert I am an ass. Lord Calvert 21:22, 21 October 2008 (UTC) - Well... I'm sure you're not as big an ass as me, so you have work to do there. As far as other stuff goes I doubt you will be banned for the reasons you are suggesting - the reasons for banning tend to be causing harm to pages or users, which you have not done. I understand that it is hard staring up here, and it took me some time to get the hang of the site. 
Give it it time and patience and you'll be doing alright - If theres anything else, I'm often about to help:)--Sycamore (Talk) 11:53, 22 October 2008 (UTC) - Trust me, Sycamore was a much bigger ass than you, and he used to show it all the time, what with his kilt and all...~ 14:27, 22 October 2008 (UTC) - I wax it these days to show it off...--Sycamore (Talk) 14:33, 22 October 2008 (UTC) look at this shit May I ask what the fuck you see wrong with the Lazytown article? It OWNS YOU.--68.217.167.219 18:26, 26 October 2008 (UTC) We aim to please:)--Sycamore (Talk) 19:22, 26 October 2008 (UTC)
In this brief tutorial, we'll look at five different ASP.NET objects we can use to store data. Two of these, the Application and Session objects, should be pretty familiar to anyone coming from ASP. The other three, Context, Cache and ViewState, are brand new to ASP.NET. Each object is ideal under certain conditions, varying only in scope (that is, the length of time data in them exists, and the visibility of the data), and it's a complete understanding of this variation that we are after. Also, most of these objects have alternatives that we'll briefly look at.

All five objects behave much like a HashTable. In other words, getting or setting information into or out of any of them is very similar. All can hold any object as a value, and while some can have an object as a key, some can only have a string - which is what you'll be using 98% of the time. For example:

//C#
//setting
Application.Add("smtpServer", "127.0.0.1");
Context.Items.Add("culture", new CultureInfo("en-CA"));
Session.Add("userId", 3);

//getting
string smtpServer = (string)Application["smtpServer"];
CultureInfo c = (CultureInfo)Context.Items["culture"];
int userId = (int)Session["userId"];

'VB
'setting
Application.Add("smtpServer", "127.0.0.1")
Context.Items.Add("culture", New CultureInfo("en-CA"))
Session.Add("userId", 3)

'getting
Dim smtpServer As String = CStr(Application("smtpServer"))
Dim c As CultureInfo = CType(Context.Items("culture"), CultureInfo)
Dim userId As Integer = CInt(Session("userId"))

First up is the Application object. While it's correct to say that data stored in the Application exists as long as the website is up, it would be incorrect to simply leave it at that. Technically, the data in the Application exists as long as the worker process (the actual aspnet.exe, if you will) exists. This can have severe repercussions and isn't a mere technicality.
There are a number of reasons why the ASP.NET worker process recycles itself, from touching the web.config, to being idle, to consuming too much RAM. If you use the Application object to hold read/write data, you risk losing it all whenever the process recycles.

It is my opinion that the usefulness of the HttpApplication class, from a data storage point of view, is greatly diminished in ASP.NET. Powerful custom configuration sections in the web.config are a far more elegant and flexible solution for read-only values. Using XML files or a database is ideal for read/write values and web farms, and when combined with the HttpCache object leaves poor ol' Application in the dust.

The HttpCache (cache) class is the first new storage class we'll look at. It's also the most unique. There are a couple of reasons for this uniqueness. First, HttpCache isn't really a storage mechanism, it's mostly used as a proxy to a database or file to improve performance. Secondly, while you read values from the cache by specifying a key, you have a lot more control when inserting, such as how long to store the data, triggers to fire when the data is removed, and more.
When accessing values from the cache, there's no guarantee that the data will be there (there are a number of reasons why ASP.NET would remove the data); as such, the typical way to use the cache is as follows:

private DataTable GetStates() {
   string cacheKey = "GetStates"; //the key to get/set our cache'd data
   DataTable dt = Cache[cacheKey] as DataTable;
   if(dt == null){
      dt = DataBaseProvider.GetStates();
      Cache.Insert(cacheKey, dt, null, DateTime.Now.AddHours(6), TimeSpan.Zero);
   }
   return dt;
}

Private Function GetStates() As DataTable
   Dim cacheKey As String = "GetStates" 'the key to get/set our cache'd data
   Dim dt As DataTable = CType(Cache(cacheKey), DataTable)
   If dt Is Nothing Then
      dt = DataBaseProvider.GetStates()
      Cache.Insert(cacheKey, dt, Nothing, DateTime.Now.AddHours(6), TimeSpan.Zero)
   End If
   Return dt
End Function

First thing we do is declare a cacheKey [line: 2] which we'll use when retrieving and storing information from and into the cache. Next, we use the key and try to get the value from the cache [line: 3]. If this is the first time we called this method, or if the data has been dropped for whatever reason, we'll get null/Nothing [line: 4]. If we do get null/Nothing, we hit the database via the fictional DataBaseProvider.GetStates() call [line: 5] and insert the value into the Cache object with our cacheKey [line: 6]. When inserting, we specify no file dependencies and we want the cache to expire six hours from now.

The important thing to note about the code above is that the processing-heavy data-access code DataBaseProvider.GetStates() is skipped when we find the data in the cache. In this example, the real storage mechanism is a fictional database; HttpCache simply acts as a proxy.
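The pattern above - check the cache, fall back to the real source, then populate the cache - isn't ASP.NET specific. As a language-neutral illustration (the data and the call counter below are made up), here is the same idea in a few lines of Python:

```python
# Cache-aside sketch: a dictionary stands in for HttpCache and a
# counter stands in for the expensive database call.
cache = {}
db_calls = 0

def get_states():
    global db_calls
    cache_key = "GetStates"
    if cache_key not in cache:      # cache miss: hit the "database"
        db_calls += 1
        cache[cache_key] = ["state-a", "state-b"]
    return cache[cache_key]

get_states()
get_states()     # second call is served from the cache
print(db_calls)  # 1
```

Unlike HttpCache, a plain dictionary never expires its entries; the Insert call above is what adds the six-hour lifetime.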
It's more likely that you'll want to retrieve information based on parameters, say all the states/provinces for a specific country; this is easily achieved via the VB.NET code below:

Private Function GetStates(ByVal countryId As Integer) As DataTable
   Dim cacheKey As String = "GetStates:" & countryId.ToString() 'the key to get/set our cache'd data
   Dim dt As DataTable = CType(Cache(cacheKey), DataTable)
   If dt Is Nothing Then
      dt = DataBaseProvider.GetStates(countryId)
      Cache.Insert(cacheKey, dt, Nothing, _
                   DateTime.Now.AddHours(6), TimeSpan.Zero)
   End If
   Return dt
End Function

The only change is that the cacheKey now incorporates the countryId, so the states for country 3 are cached under "GetStates:3", those for country 2 under "GetStates:2", and so on.

Next up is the Session object. Sessions are configured via the sessionState element in the web.config:

<system.web>
   <!-- can use a mode of "Off", "InProc", "StateServer" or "SQLServer". These are CaSe-SeNsItIvE -->
   <sessionState mode="InProc" />
   ...
</system.web>

InProc means that sessions are stored inside the ASP.NET worker process - this is pretty much how sessions in classic ASP work. Storing data this way can lead to performance issues (since it's using your web server's RAM), and also has all the issues associated with worker process recycling that plague read/write usage of the Application object. However, wisely used for the right website, such as keeping track of a user ID in a small/medium sized site, sessions are extremely performant and an ideal solution. You don't need to do anything special, other than setting the sessionState's mode to "InProc".

StateServer stores sessions in a separate process, which means anything placed in the session must be serializable. Simple types such as String and int already are; your own classes must be marked with the System.SerializableAttribute:

[Serializable()]
public class User
{
   private int userId;
   private string userName;
   private UserStatus status;

   public int UserId {get { return userId; } set { userId = value; }}
   public string UserName {get { return userName; } set { userName = value; }}
   public UserStatus Status {get { return status; } set { status = value; }}

   public enum UserStatus
   {
      Invalid = 0,
      Valid = 1
   }
}

To use StateServer, you must specify both the SessionState mode, as well as the address of the StateServer via stateConnectionString.
StateServer runs on port 42424 by default, so the example below connects to the state server on the local machine:

<sessionState mode="StateServer"
              stateConnectionString="tcpip=127.0.0.1:42424" />

The SQLServer mode works much the same way, except that sessions are stored in a SQL Server database and the connection is specified via the sqlConnectionString attribute.

The ViewState is used to persist values across postbacks on a single page:

'VB
Dim pollId As Integer
If Not Page.IsPostBack Then
   pollId = getPollId()
   viewstate.Add("pollId", pollId)
Else
   pollId = CInt(viewstate("pollId"))
End If

//C#
int pollId;
if(!Page.IsPostBack){
   pollId = getPollId();
   ViewState.Add("pollId", pollId);
}else{
   pollId = (int)ViewState["pollId"];
}

In the above example, we call an expensive function called getPollId() [line: 4] and store the result in the ViewState. When the page does a postback, we can avoid that same call and simply retrieve the value from the ViewState.

Personally, I think I've kept the best for last. I think the HttpContext is probably the least well known of the ASP.NET data storage objects, and while it won't solve all your problems, it's definitely useful in a number of instances. Of all the objects, it has the most limited scope - it exists for the lifetime of a single request (remember, the ViewState exists for the lifetime of two requests: the original one and the postback). The most common use of HttpContext is to store data into it on Application_BeginRequest, and access it as need be throughout your page, user controls, server controls and business logic. For example, say I built a portal infrastructure which, at the start of each request, created a portal object which identified which portal the request was in and which section within the portal. We'll skip all the portal code; the key point is that Application_BeginRequest creates the portal object once and stores it in the HttpContext for the rest of the request to use.
Another example of using the HttpContext is for a simple performance monitor:

'VB
'Fires when the request is first made
Sub Application_BeginRequest(ByVal sender As Object, ByVal e As EventArgs)
   Context.Items.Add("StartTime", DateTime.Now)
End Sub

'fires at the end of the request
Sub Application_EndRequest(ByVal sender As Object, ByVal e As EventArgs)
   Dim startTime As DateTime = CDate(Context.Items("StartTime"))
   If DateTime.Now.Subtract(startTime).TotalSeconds > 4 Then
      'Log this slow request
   End If
End Sub

One annoyance remains: every time we want access to a value such as the session's CurrentUser, we need to specify the key "currentUser", and we need to add error handling. The problem is we need to do this every time we access the value. We might do it once in our Page, once in each of our UserControls, and once in some business logic. That's not very encapsulated, is it? Even changing the key from "currentUser" to "User" would require a potentially dangerous search and replace.

The solution is to wrap this code into a Shared/static property:

Public Shared ReadOnly Property CurrentUser() As User
   Get
      Dim _currentUser As User = _
         CType(HttpContext.Current.Session("CurrentUser"), User)
      If _currentUser Is Nothing Then
         _currentUser = New User
         HttpContext.Current.Session.Add("CurrentUser", _currentUser)
      End If
      Return _currentUser
   End Get
End Property
Does it have a long life-time (Application, HttpCache, Session) or a short one (ViewState, Context)?.
http://www.codeproject.com/Articles/8631/ASP-NET-s-Data-Storage-Objects?msg=1416303
CC-MAIN-2016-44
refinedweb
1,698
55.44
Pandas Complete Tutorial for Data Science in 2022 Author(s): Noro Chalise. Pandas Beginner to Advanced Guide Pandas is one of the most popular python frameworks among data scientists, data analytics to machine learning engineers. This framework is an essential tool for data loading, preprocessing, and analysis. Before learning Pandas, you must understand what is data frame? Data Frame is a two-dimensional data structure, like a 2d array, or similar to the table with rows and columns. For this article, I am using my dummy online store data set, which is located in my Kaggle account and GitHub. You can download it from both. Also, I will provide you with all this exercise notebook on my GitHub account, so feel free to use it. Before starting the article, here are the topics we covered. Table of content - Setup - Data Preprocessing - Memory Management - Data Analysis - Data Visualization - Final Thought - Reference Feel free to check out GitHub repo for this tutorial. 1. Setup Import Before moving on to learn pandas first we need to install them and import them. If you install Anaconda distributions on your local machine or using Google Colab then pandas will already be available there, otherwise, you follow this installation process from pandas official’s website. # Importing libraries import numpy as np import pandas as pd Setting Display Option Default setting of pandas display option there is a limitation of columns and rows displays. When we need to show more rows or columns then we can use set_option() the function to display a large number of rows or columns. For this function, we can set any number of rows and columns values. # we can set numbers for how many rows and columns will be displayed pd.set_option('display.min_rows', 10) #default will be 10 pd.set_option('display.max_columns', 20) 2. Loading Different Data Formats Into a Pandas Data Frame Pandas is an easy tool for reading and writing different types of files format. 
Using these tools we can load CSV, Excel, Pdf, JSON, HTML, HDF5, SQL, Google BigQuery, etc file easily. Here are some methods, I will show you how we can read and write most frequently using file format. Reading CSV file CSV (comma separated file) is the most popular file format. Reading this file we used the simply read.csv() function. # read csv file df = pd.read_csv('dataset/online_store_customer_data.csv') df.head(3) We can add some common parameters to tweak this function. If we need to skip some first rows in the data frame then we can use skiprows a keyword argument. For example, If we want to skip the first rows then we use skiprows=2. Similarly, if we don’t want to last 2 rows then we can simply use skipfooter=2 . If we don’t want to load the column header then we can use header=None . # Loading csv file with skip first 2 rows without header df_csv = pd.read_csv('dataset/online_store_customer_data.csv', skiprows=2, header=None) df_csv.head(3) Read CSV file from URL For reading the CSV file form URL, you can directly pass the link. # Read csv file from url url=" df_url = pd.read_csv(url) df_url.head(3) Write CSV file When you want to save a data frame on a CSV file you can simply use to.csv() the function. You also need to pass the file name and it will save that file. # saving df_url dataframe to csv file df_url.to_csv('dataset/csv_from_url.csv') df_url.to_csv('dataset/demo_text.txt') Read text file Reading a plain text file, we can use read_csv() the function. In this function, you need to pass the .txt file name. # read plain text file df_txt = pd.read_csv("dataset/demo_text.txt") Read Excel file To read an Excel file, we should use read_excel() the function of the pandas package. If we have had multiple sheet names then we can pass the sheet name argument with this function. # read excel file df_excel = pd.read_excel('dataset/excel_file.xlsx', sheet_name='Sheet1') df_excel Write Excel file We can save our data frame to an excel file same as a CSV file. 
You can use to_excel() function with file name and location. # save dataframe to the excel file df_url.to_csv('demo.xlsx') 3. Data preprocessing Data preprocessing is the process of making raw data to clean data. This is the most crucial part of data science. In this section, we will explore data first then we remove unwanted columns, remove duplicates, handle missing data, etc. After this step, we get clean data from raw data. 3.1 Data Exploring Retrieving rows from a data frame. After the loading data, the first thing we did to look at our data. For this purpose we use head() and tail() function. The head function will display the first rows and the tail will be the last rows. By default, it shows 5 rows. Suppose we want to display the first 3 rows and the last 6 rows. We can do it this way. # display first 3 rows df.head(3) # display last 6 rows df.tail(6) Retrieving sample rows from a data frame. If we want to display sample data then we can use sample() a function with the desired number of rows. It will show the desired number of random rows. If we want to take 7 samples we need to pass 7 in the sample(7) function. # Display random 7 sample rows df.sample(7) Retrieving information about the data frame To display data frames information we can use info() the method. It will display columns data types, counting each column’s total non-null values and its memory space. 
df.info() <class 'pandas.core.frame.DataFrame'> RangeIndex: 2512 entries, 0 to 2511 Data columns (total 11 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 Transaction_date 2512 non-null object 1 Transaction_ID 2512 non-null int64 2 Gender 2484 non-null object 3 Age 2470 non-null float64 4 Marital_status 2512 non-null object 5 State_names 2512 non-null object 6 Segment 2512 non-null object 7 Employees_status 2486 non-null object 8 Payment_method 2512 non-null object 9 Referal 2357 non-null float64 10 Amount_spent 2270 non-null float64 dtypes: float64(3), int64(1), object(7) memory usage: 216.0+ KB Display data types of each column we can use the dtypes attribute. We can add value_counts() methods in dtypes for showing all data types values counting. # display datatypes df.dtypes Transaction_date object Transaction_ID int64 Gender object Age float64 Marital_status object State_names object Segment object Employees_status object Payment_method object Referal float64 Amount_spent float64 dtype: object df.dtypes.value_counts() object 7 float64 3 int64 1 dtype: int64 Display the number of rows and columns. To display the number of rows and columns we use the shape attribute. The first number and last number show the number of rows and columns respectively. df.shape (2512, 11) Display columns name and data To display the columns name of our data frame we use the columns attribute. df.columns Index(['Transaction_date', 'Transaction_ID', 'Gender', 'Age', 'Marital_status', 'State_names', 'Segment', 'Employees_status', 'Payment_method', 'Referal', 'Amount_spent'], dtype='object') If we want to display single or multiple columns data, simply we need to pass column names with a data frame. To display multiple columns of data information, we need to pass the list of columns’ names. 
# display Age columns first 3 rows data df['Age'].head(3) 0 19.0 1 49.0 2 63.0 Name: Age, dtype: float64 # display first 4 rows of Age, Transaction_date and Gender columns df[['Age', 'Transaction_date', 'Gender']].head(4) Retrieving a Range of Rows If we want to display a particular range of rows we can use slicing. For example, if we want to get 2nd to 6th rows we can simply use df[2:7]. # for display 2nd to 6th rows df[2:7] # for display starting to 10th df[:11] # for display last two rows df[-2:] 3.2 Data Cleaning After the explore our datasets may need to clean them for better analysis. Data coming in from multiple sources so It’s possible to have an error in some values. This is where data cleaning becomes extremely important. In this section, we will delete unwanted columns, rename columns, correct appropriate data types, etc. Delete Columns name We can use the drop function to delete unwanted columns from the data frame. Don’t forget to add inplace = True and axis=1. It will change the value in the data frame. # Drop unwanted columns df.drop(['Transaction_ID'], axis=1, inplace=True) Change Columns name For changing columns name we can use rename() function with passing columns dictionary. In a dictionary, we will pass key like an old column name and value as a new desired column name. For example, now we are going to change Transaction_date and Gender to Date and Sex. # create new df_col dataframe from df.copy() method. df_col = df.copy() # rename columns name df_col.rename(columns={"Transaction_date": "Date", "Gender": "Sex"}, inplace=True) df_col.head(3) Adding a new column to a Data Frame You may add a new column to an existing pandas data frame just by assigning values to a new column name. 
For example, the following code creates a third column named new_col in df_col data frame: # Add a new_col column which value will be amount_spent * 100 df_col['new_col'] = df_col['Amount_spent'] * 100 df_col.head(3) String value change or replace We can replace the new value with the old, with .loc() the method with help of the condition. For Example, now we are changing Female to Woman and Male to Man in Sex column. df_col.head(3) # changing Female to Woman and Male to Man in Sex column. #first argument in loc function is condition and second one is columns name. df_col.loc[df_col.Sex == "Female", 'Sex'] = 'Woman' df_col.loc[df_col.Sex == "Male", 'Sex'] = 'Man' df_col.head(3) Now Sex columns values are changed Female to Woman and Male to Man. Datatype change When we deal with different types of data types sometimes it’s a tedious task. If we want to work on a date we must need to change this with the exact date format. Otherwise, we get the problem. This task is easy on pandas. We can use astype() function to convert one data type to another. df_col.info() <class 'pandas.core.frame.DataFrame'> RangeIndex: 2512 entries, 0 to 2511 Data columns (total 11 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 Date 2512 non-null object64 9 Amount_spent 2270 non-null float64 10 new_col 2270 non-null float64 dtypes: float64(4), object(7) memory usage: 216.0+ KB In our Date columns, it’s object type so now we will convert this to date types, and also we will convert Referal columns float64 to float32. 
# change the object type to datetime64 format
df_col['Date'] = df_col['Date'].astype('datetime64[ns]')
# change the Referal column from float64 to float32
df_col['Referal'] = df_col['Referal'].astype('float32')
df_col.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 2512 entries, 0 to 2511
Data columns (total 11 columns):
 #   Column        Non-Null Count  Dtype
---  ------        --------------  -----
 0   Date          2512 non-null   datetime64[ns]
 ...
 9   Amount_spent  2270 non-null   float64
 10  new_col       2270 non-null   float64
dtypes: datetime64[ns](1), float32(1), float64(3), object(6)
memory usage: 206.2+ KB
3.3 Removing duplicates
In the data preprocessing stage we also need to remove duplicate entries. For various reasons, our data frames sometimes contain multiple duplicate entries, and removing them is easy with the pandas functions. First we use the duplicated() function to identify duplicate entries, then we use drop_duplicates() to remove them.
# count the duplicated entries
df.duplicated().sum()
12
# display the duplicate rows; the keep argument can be 'first', 'last' or False
duplicate_value = df.duplicated(keep='first')
df.loc[duplicate_value, :]
# drop the duplicate rows, keeping the first occurrence
df.drop_duplicates(keep='first', inplace=True)
3.4 Handling missing values
Handling missing values is a common task in the data preprocessing stage. For many reasons we will encounter missing values most of the time, and without dealing with them we cannot build a proper model. In this section we will first find the missing values and then decide how to handle them: by removing the affected columns or rows, or by replacing them with appropriate values.
Displaying missing-value information
To display missing values we can use the isna() function. To count the total missing values in each column in descending order, we chain the .sum() and sort_values(ascending=False) functions.
df.isna().sum().sort_values(ascending=False)
Amount_spent        241
Referal             154
Age                  42
Gender               28
Employees_status     26
Transaction_date      0
Marital_status        0
State_names           0
Segment               0
Payment_method        0
dtype: int64
Deleting rows with NaN values
If a column has only a few NaN values, we can delete the affected rows with the dropna() function, passing the column name to its subset parameter.
# copy df to df_new
df_new = df.copy()
# delete the rows where Employees_status is NaN
df_new.dropna(subset=["Employees_status"], inplace=True)
Deleting entire columns
If a particular column has a large number of NaN values, dropping that column might be a better decision than imputing.
df_new.drop(columns=['Amount_spent'], inplace=True)
df_new.isna().sum().sort_values(ascending=False)
Referal             153
Age                  42
Gender               27
Transaction_date      0
Marital_status        0
State_names           0
Segment               0
Employees_status      0
Payment_method        0
dtype: int64
Imputing missing values
Sometimes deleting entire columns is not the appropriate approach: it can hurt our model building, because we may lose important features. There are many approaches to imputation; here are some of the most popular techniques.
Method 1 — Impute a fixed value such as 0, 'Unknown' or 'Missing'. Here we impute Unknown in the Gender column:
df['Gender'].fillna('Unknown', inplace=True)
Method 2 — Impute the mean, median or mode:
# impute the mean in the Amount_spent column
mean_amount_spent = df['Amount_spent'].mean()
df['Amount_spent'].fillna(mean_amount_spent, inplace=True)
# impute the median in the Age column
median_age = df['Age'].median()
df['Age'].fillna(median_age, inplace=True)
# impute the mode in the Employees_status column
mode_emp = df['Employees_status'].mode().iloc[0]
df['Employees_status'].fillna(mode_emp, inplace=True)
Method 3 — Impute by forward fill or back fill with ffill and bfill. With ffill a missing value is filled from the row above; with bfill it is taken from the row below.
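The imputation methods described above can be exercised end-to-end on a tiny synthetic frame. This is a hedged sketch: the column names below are made up for illustration and are not the tutorial's dataset.

```python
import pandas as pd

# Hypothetical toy frame with missing values in every column
toy = pd.DataFrame({
    "age":   [25.0, None, 31.0, None],
    "city":  ["Oslo", None, "Oslo", "Pune"],
    "score": [None, 10.0, 30.0, None],
})

# Method 1: impute a fixed value
toy["city"] = toy["city"].fillna("Unknown")

# Method 2: impute the column mean (mean of 25 and 31 is 28)
toy["age"] = toy["age"].fillna(toy["age"].mean())

# Method 3: forward fill, then back fill to catch a leading NaN
toy["score"] = toy["score"].ffill().bfill()

print(toy)
```

Note that `.ffill()`/`.bfill()` are used here instead of `fillna(method=...)`, since the `method` argument is deprecated in recent pandas versions.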
df['Referal'].fillna(method='ffill', inplace=True)
df.isna().sum().sum()
0
Now we have dealt with all the missing values using different methods, so there are no null values left.
4. Memory management
When we work on large datasets, one big issue we run into is memory: dealing with such data requires large resources. Fortunately, pandas offers some methods to handle this problem. Here are some strategies.
Changing data types
By changing one data type to another we can save a lot of memory. One popular trick is to convert object columns to category, which can reduce a data frame's memory usage drastically.
First, we copy our previous df data frame to df_memory and calculate its total memory usage with the memory_usage(deep=True) method.
df_memory = df.copy()
memory_usage = df_memory.memory_usage(deep=True)
memory_usage_in_mbs = round(np.sum(memory_usage / 1024 ** 2), 3)
print(f" Total memory taken by the df_memory dataframe is : {memory_usage_in_mbs:.2f} MB ")
Total memory taken by the df_memory dataframe is : 1.15 MB
Converting object columns to the category data type
Our data frame is small: only 1.15 MB. Now we will convert our object columns to category.
# convert object columns to category
df_memory[df_memory.select_dtypes(['object']).columns] = df_memory.select_dtypes(['object']).apply(lambda x: x.astype('category'))
df_memory.info() now shows (abridged):
 ...
 9   Amount_spent  2500 non-null   float64
dtypes: category(7), float64(3)
memory usage: 189.1 KB
This reduces the memory usage from 1.15 MB to 216.6 KB, almost 5.5 times smaller.
Changing int64 or float64 to 32-, 16- or 8-bit types
By default, pandas stores numeric values as int64 or float64, which take more memory. If we only have small numbers to store, we can downcast the 64-bit types to 32-bit, 16-bit and so on. For example, our Referal column only holds the values 0 and 1, so we don't need to store it as float64; we will now change it to float32.
# change the data type of the Referal column
df_memory['Referal'] = df_memory['Referal'].astype('float32')
df_memory.info() now shows (abridged):
 ...
 9   Amount_spent  2500 non-null   float64
dtypes: category(7), float32(1), float64(2)
memory usage: 179.3 KB
After changing the data type of just one column, we reduce the memory usage from 216 KB to 179 KB.
Note: before changing a data type, please make sure you understand the consequences.
5. Data Analysis
5.1 Calculating basic statistical measurements
In the data analysis stage we need to calculate some statistical measurements, and pandas has multiple useful functions for this. The first useful function is describe(), which displays most of the basic statistical measurements. You can append .T to transpose the output; this makes it easier to read when there are many columns.
df.describe().T
The function above only shows information for the numerical columns. count shows how many values there are. mean shows the average value of each column. std shows the standard deviation, which measures the amount of variation or dispersion of a set of values. min is the minimum value of each column. 25%, 50% and 75% are the quartiles, which show how the values are distributed, and finally max shows the maximum value.
As noted, the code above displays only numeric columns. For object or category columns we can use describe(include=object).
df.describe(include=object).T
In the information above, count shows how many values there are, unique shows how many distinct values the column contains, top is the most frequent value, and freq shows how often that top value occurs.
We can calculate the mean, median, mode, standard deviation, maximum and minimum of individual columns simply by using the following functions:
# calculate the mean
mean = df['Age'].mean()
# calculate the median
median = df['Age'].median()
# calculate the mode
mode = df['Age'].mode().iloc[0]
# calculate the standard deviation
std = df['Age'].std()
# calculate the minimum value
minimum = df['Age'].min()
# calculate the maximum value
maximum = df.Age.max()
print(f" Mean of Age : {mean}")
print(f" Median of Age : {median}")
print(f" Mode of Age : {mode}")
print(f" Standard deviation of Age : {std:.2f}")
print(f" Maximum of Age : {maximum}")
print(f" Minimum of Age : {minimum}")
Mean of Age : 46.636
Median of Age : 47.0
Mode of Age : 47.0
Standard deviation of Age : 18.02
Maximum of Age : 78.0
Minimum of Age : 15.0
In pandas we can also display the correlation between the numeric columns, using the .corr() function.
# calculate correlations
df.corr()
5.2 Basic built-in functions for data analysis
Pandas has many useful built-in functions for data analysis. In this section we explore some of the most frequently used ones.
Number of unique values in a categorical column
To display the number of unique values, we call nunique() on the desired column. For example, to display the total number of unique values in the State_names column we use:
# display how many unique values there are in the State_names column
df['State_names'].nunique()
50
Showing all unique values
To display all the unique values themselves, we use the unique() function on the desired column.
# display the unique values of the State_names column
df['State_names'].unique()
array(['Kansas', 'Illinois', 'New Mexico', 'Virginia', 'Connecticut',
       'Hawaii', 'Florida', 'Vermont', 'California', 'Colorado', 'Iowa',
       'South Carolina', 'New York', 'Maine', 'Maryland', 'Missouri',
       'North Dakota', 'Ohio', 'Nebraska', 'Montana', 'Indiana',
       'Wisconsin', 'Alabama', 'Arkansas', 'Pennsylvania',
       'New Hampshire', 'Washington', 'Texas', 'Kentucky',
       'Massachusetts', 'Wyoming', 'Louisiana', 'North Carolina',
       'Rhode Island', 'West Virginia', 'Tennessee', 'Oregon', 'Alaska',
       'Oklahoma', 'Nevada', 'New Jersey', 'Michigan', 'Utah', 'Arizona',
       'South Dakota', 'Georgia', 'Idaho', 'Mississippi', 'Minnesota',
       'Delaware'], dtype=object)
Counting unique values
To show the count of each unique value, we use the value_counts() method, which displays the unique values together with the number of times each one occurs. For example, to see the unique values of the Gender column with their frequencies:
df['Gender'].value_counts()
Female     1351
Male       1121
Unknown      28
Name: Gender, dtype: int64
If we want to show the percentage of occurrences rather than the count, we use the normalize=True argument of value_counts():
# calculate the percentage of each category
df['Gender'].value_counts(normalize=True)
Female     0.5404
Male       0.4484
Unknown    0.0112
Name: Gender, dtype: float64
df['State_names'].value_counts().sort_values(ascending=False).head(20)
Illinois         67
Georgia          64
Massachusetts    63
Maine            62
Kentucky         59
Minnesota        59
Delaware         56
Missouri         56
New York         55
New Mexico       55
Arkansas         55
California       55
Arizona          55
Nevada           55
Vermont          54
New Jersey       53
Oregon           53
Florida          53
West Virginia    53
Washington       52
Name: State_names, dtype: int64
Sorting values
If we want to sort a data frame by particular columns, we use the sort_values() method. We can sort in ascending or descending order; by default, it sorts in ascending order.
To sort in descending order, we simply pass the ascending=False argument to the sort_values() function.
# sort values by State_names
df.sort_values(by=['State_names']).head(3)
To sort our data frame by Amount_spent in ascending order:
# sort values by Amount_spent in ascending order
df.sort_values(by=['Amount_spent']).head(3)
To sort our data frame by Amount_spent in descending order:
# sort values by Amount_spent in descending order
df.sort_values(by=['Amount_spent'], ascending=False).head(3)
Alternatively, we can use the nlargest() and nsmallest() functions to display the desired number of largest or smallest values. For example, to display the rows with the 4 largest Amount_spent values:
# nlargest: the first argument is how many rows to display, the second is the column name
df.nlargest(4, 'Amount_spent').head(10)
And the rows with the 3 smallest Age values:
# nsmallest
df.nsmallest(3, 'Age').head(10)
Conditional queries on data
To apply a single condition, we first define the condition and then pass it to the data frame. For example, to display all the rows where Payment_method is PayPal:
# filtering - only show PayPal users
condition = df['Payment_method'] == 'PayPal'
df[condition].head(4)
We can also apply multiple conditions at once. For example, to display all married female customers who live in New York:
# first create three conditions
female_person = df['Gender'] == 'Female'
married_person = df['Marital_status'] == 'Married'
loc_newyork = df['State_names'] == 'New York'
# then pass the conditions to our dataframe
df[female_person & married_person & loc_newyork].head(4)
5.3 Summarizing or grouping data
Group by
The group by function is one of the most popular tools in data analysis with pandas. It allows us to split data into groups, apply a function to each group, and combine the results.
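The split-apply-combine idea behind group by can be sketched on a tiny made-up frame. This is an illustration under assumed data; the column names below are invented and are not the tutorial's dataset.

```python
import pandas as pd

# Made-up sales frame: two groups, five rows
sales = pd.DataFrame({
    "gender": ["Female", "Male", "Female", "Male", "Female"],
    "spent":  [10.0, 20.0, 30.0, 40.0, 50.0],
})

# split by gender, apply several aggregations, combine into one result frame
summary = sales.groupby("gender")["spent"].agg(["count", "mean", "max"])
print(summary)
```

Each row of `summary` corresponds to one group, and each column to one aggregation, which is exactly the split-apply-combine pattern the dataset-specific examples below use on a larger scale.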
We can understand this function through the examples below.
Grouping by one column: for example, to find the maximum values of Age and Amount_spent by Gender we can use:
df[['Age', 'Amount_spent']].groupby(df['Gender']).max()
To find the count, mean and maximum of Age and Amount_spent by Gender, we can use the agg() function together with groupby():
# group by one column
state_gender_res = df[['Age','Gender','Amount_spent']].groupby(['Gender']).agg(['count', 'mean', 'max'])
state_gender_res
Grouping by multiple columns: to find the count, minimum and maximum of Amount_spent by State_names, Gender and Payment_method, we pass those column names to the groupby() function and chain .agg() with the count, min and max arguments.
# group by multiple columns
state_gender_res = df[['State_names','Gender','Payment_method','Amount_spent']].groupby(['State_names','Gender', 'Payment_method']).agg(['count', 'min', 'max'])
state_gender_res.head(12)
Cross tabulation (crosstab)
Cross tabulation (also referred to as crosstab) is a method for quantitatively analyzing the relationship between multiple variables. Crosstabs are also known as contingency tables, and they help us understand the relationships between different variables. Pandas has a built-in function for creating these tables: crosstab().
To create a simple crosstab between the Marital_status and Payment_method columns, we just call crosstab() with both column names:
pd.crosstab(df.Marital_status, df.Payment_method)
We can include subtotals with the margins parameter:
pd.crosstab(df.Marital_status, df.Payment_method, margins=True, margins_name="Total")
If we want percentages instead of counts, the normalize=True parameter helps:
pd.crosstab(df.Marital_status, df.Payment_method, normalize=True, margins=True, margins_name="Total")
With crosstab we can also pass multiple column names for grouping and analyzing data.
For instance, if we want to see how Payment_method and Employees_status are distributed by Marital_status, we pass those column names to the crosstab() function:
pd.crosstab(df.Marital_status, [df.Payment_method, df.Employees_status])
6. Data Visualization
Visualization is the key to data analysis. The most popular Python packages for visualization are matplotlib and seaborn, but sometimes pandas itself is handy: it also provides some plots directly, which is convenient for basic analysis. In this section we explore the different types of plots pandas offers.
6.1 Line plot
A line plot is the simplest of all graphical plots. It is used to follow changes over continuous time, showing information as a series. Line charts are ideal for comparing multiple variables and for visualizing trends, for single and multiple variables alike.
To create a line plot in pandas, we call .plot() with two column names as arguments. For example, we create a line plot from a dummy dataset:
dict_line = {
    'year': [2016, 2017, 2018, 2019, 2020, 2021],
    'price': [200, 250, 260, 220, 280, 300]
}
df_line = pd.DataFrame(dict_line)
# use the plot() method on the dataframe
df_line.plot('year', 'price');
The line chart above shows prices over time; essentially, a price trend.
6.2 Bar plot
A bar plot, also known as a bar chart, shows quantitative or qualitative values for different category items. In a bar plot, data are represented as bars, and the length or height of each bar represents the quantitative value of its item. Bar plots can be drawn vertically or horizontally.
For a vertical bar plot:
df['Employees_status'].value_counts().plot(kind='bar');
For a horizontal bar plot:
df['Employees_status'].value_counts().plot(kind='barh');
6.3 Pie plot
A pie plot is also known as a pie chart.
A pie plot is a circular graph that represents a total value together with its components: the area of the circle represents the total value, and the different sectors of the circle represent its parts. In this plot the data are expressed as percentages, each component being a percentage of the total value.
To create a pie plot in pandas, we pass kind='pie' to the plot() function of a data frame column or series:
df['Segment'].value_counts().plot(kind='pie');
6.4 Box plot
A box plot, also known as a box-and-whisker plot, is used to show the distribution of a variable based on its quartiles. It displays the five-number summary of a set of data: the minimum, first quartile, median, third quartile and maximum. It is also popular for identifying outliers. We can plot one column or several; for multiple columns, we pass the column names to the y parameter as a list.
df.plot(y=['Amount_spent'], kind='box');
In a box plot we can also plot the distribution of a numerical variable against a categorical variable and compare the groups. Let's do that with the Employees_status and Amount_spent columns, using the pandas boxplot() method:
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings('ignore', category=np.VisibleDeprecationWarning)
fig, ax = plt.subplots(figsize=(6,6))
df.boxplot(by='Employees_status', column=['Amount_spent'], ax=ax, grid=False);
6.5 Histogram
A histogram shows the frequency and distribution of a quantitative measurement across grouped values. It is commonly used in statistics to show how many values of a variable occur within a specific range or bucket. Below we plot a histogram to look at the Age distribution:
df.plot(y='Age', kind='hist', bins=10);
6.6 KDE plot
A kernel density estimate (KDE) plot is a method for visualizing the distribution of observations in a data set, analogous to a histogram.
A KDE represents the data using a continuous probability density curve in one or more dimensions.
df.plot(y='Age', xlim=(0, 100), kind='kde');
6.7 Scatter plot
A scatter plot is used to observe and show the relationship between two quantitative variables for different category items. Each member of the data set is plotted as a point whose x-y coordinates are its values for the two variables. Below we plot a scatter plot to display the relationship between the Age and Amount_spent columns:
df.plot(x='Age', y='Amount_spent', kind='scatter');
7. Final Thoughts
In this article we saw how pandas can be used to read, preprocess, analyze and visualize data, and how it can help with memory management for fast computing with fewer resources. The main aim of this article is to help people who are curious to learn pandas for data analysis.
Do you have any queries or need help related to this article? Please feel free to contact me on LinkedIn. If you find this article helpful, then please follow me for further learning. Your suggestions and feedback are always welcome. Thank you for reading my article, and happy learning.
Feel free to check out the GitHub repo for this tutorial.
8. References
- Pandas user guide
- Pandas 1.x Cookbook
- The Data Wrangling Workshop
- Python for Data Analysis
- Data Analysis with Python: Zero to Pandas — Jovian YouTube channel
- Best practices with pandas — Data School YouTube channel
- Pandas tutorials — Corey Schafer YouTube channel
- Pandas Crosstab Explained
Pandas Complete tutorial for data science in 2022
https://towardsai.net/p/data-science/pandas-complete-tutorial-for-data-science-in-2022
On 11 February 2020 09:23:20 GMT-05:00, "Ludovic Courtès" <address@hidden> wrote:
>Hi, Julien!
>
>Julien Lepiller <address@hidden> skribis:
>
>> So if we agree to follow that plan, we need a new repository and a
>> script running somewhere. I can take care of adding the script, but I
>> don't know how to create a repo that could be accessible to weblate
>> (it needs commit access, so it can't be the same namespace as guix on
>> savannah). Ludo?
>
>Sure! Could you file a "support request" on Savannah to ask for a new
>repo in the guix name space? You can put address@hidden in copy so
>we receive notifications.

Yeah, but as I understand it, it means weblate would have access to
every other repo in the guix namespace, which is probably not what we
want. Do we want to give a private ssh key to weblate or fedora people
that could allow them to push arbitrary commits to guix? :p

>
>> My proposal was to use fedora's weblate, since I already know
>> Jean-Baptiste who is responsible for it. Also, it should be faster to
>> create a project with them. But I don't really care, maybe hosted has
>> a less confusing domain name ^^'
>
>If it works for you, go for it!
>
>Earlier I also wrote:
>
>> BTW, in the meantime, would you feel like uploading a tarball to the TP
>> so we can give translators some time to start updating everything?
>
>WDYT?

I'll do that tonight!

>
>Thanks,
>Ludo'.
https://lists.gnu.org/archive/html/guix-devel/2020-02/msg00167.html
Writing Image Processing Code in Python from Scratch
1.1 What am I using?
- Numpy for array operations
- imageio for reading images
- warnings to show warnings
- matplotlib for visualizing
1.2 What does this blog include?
- Converting an image from RGB to grayscale.
- Convolution of an image using different kernels.
2 Steps
- Initializing an ImageProcessing class.
- Adding a read method
- Adding a show method
- Adding a color conversion method
- Adding a convolution method
Initializing an ImageProcessing class
class ImageProcessing:
    def __init__(self):
        # this dictionary holds the available read modes
        self.readmode = {1 : "RGB", 0 : "Grayscale"}
Adding a read method
    def read_image(self, location = "", mode = 1):
        """
        Uses imageio under the hood.
        location: Directory of the image file.
        mode: Image read mode {1 : RGB, 0 : Grayscale}.
        """
        img = imageio.imread(location)
        if mode == 1:
            img = img
        elif mode == 0:
            img = 0.21 * img[:,:,0] + 0.72 * img[:,:,1] + 0.07 * img[:,:,2]
        elif mode == 2:
            pass
        else:
            raise ValueError(f"Readmode not understood. Choose from {self.readmode}.")
        return img
- This method only wraps imageio, but it applies the concept of RGB-to-grayscale conversion.
- By default, imageio reads in RGB format.
- A typical RGB-to-grayscale conversion can be done with one of the concepts below (taken from an external reference):
  - Average method: all three channels are given an equal 33% contribution.
  - Weighted or luminosity method: the red channel gets a 30% contribution, green 59% and blue 11%.
  But I am using a slightly different version of the weights (taken from another reference).
- If the user enters an unknown mode, an error is raised.
Adding a show method
    def show(self, image, figsize=(5, 5)):
        """
        Uses matplotlib.pyplot.
        image: The image to be shown.
        figsize: How big to show the image; passed to plt.figure().
        """
        fig = plt.figure(figsize=figsize)
        im = image
        plt.imshow(im, cmap='gray')
        plt.show()
Nothing more to say here; the docstring is enough.
Color conversion
    def convert_color(self, img, to=0):
        if to == 0:
            return 0.21 * img[:,:,0] + 0.72 * img[:,:,1] + 0.07 * img[:,:,2]
        else:
            raise ValueError("Color conversion not understood.")
I have still not thought about grayscale-to-RGB conversion. Even using OpenCV's cv2.cvtColor(img, cv2.COLOR_GRAY2BGR), we cannot get a complete BGR image back.
Adding a convolution method
    def convolve(self, image, kernel = None, padding = "zero", stride=(1, 1), show=False, bias = 0):
        """
        image: The image to be convolved.
        kernel: A filter/window of odd shape for convolution. Sobel(3, 3) is used by default.
        padding: Border operation. Available: zero, same, None.
        stride: How far the window moves on each step.
        show: Whether to print a summary of the operation.
        bias: A bias term (used in Convolutional NNs).
        """
        if len(image.shape) > 3:
            raise ValueError("Only 2 and 3 channel images are supported.")
        if type(kernel) == type(None):
            warnings.warn("No kernel provided, trying to apply Sobel(3, 3).")
            kernel = np.array([[1, 0, -1], [1, 0, -1], [1, 0, -1]])
            kernel += kernel.T
        kshape = kernel.shape
        if kshape[0] % 2 != 1 or kshape[1] % 2 != 1:
            raise ValueError("Please provide an odd-shaped 2d kernel.")
        if type(stride) == int:
            stride = (stride, stride)
        shape = image.shape
        if padding == "zero":
            zeros_h = np.zeros(shape[1]).reshape(-1, shape[1])
            zeros_v = np.zeros(shape[0]+2).reshape(shape[0]+2, -1)
            padded_img = np.vstack((zeros_h, image, zeros_h)) # add rows
            padded_img = np.hstack((zeros_v, padded_img, zeros_v)) # add cols
            image = padded_img
            shape = image.shape
        elif padding == "same":
            h1 = image[0].reshape(-1, shape[1])
            h2 = image[-1].reshape(-1, shape[1])
            padded_img = np.vstack((h1, image, h2)) # add rows
            v1 = padded_img[:, 0].reshape(padded_img.shape[0], -1)
            v2 = padded_img[:, -1].reshape(padded_img.shape[0], -1)
            padded_img = np.hstack((v1, padded_img, v2)) # add cols
            image = padded_img
            shape = image.shape
        elif padding is None:
            pass
        rv = 0
        cimg = []
        for r in range(kshape[0], shape[0]+1, stride[0]):
            cv = 0
            for c in range(kshape[1], shape[1]+1, stride[1]):
                chunk = image[rv:r, cv:c]
                soma = np.multiply(chunk, kernel).sum() + bias
                try:
                    chunk = int(soma)
                except:
                    chunk = int(0)
                if chunk < 0:
                    chunk = 0
                if chunk > 255:
                    chunk = 255
                cimg.append(chunk)
                cv += stride[1]
            rv += stride[0]
        cimg = np.array(cimg, dtype=np.uint8).reshape(int(rv/stride[0]), int(cv/stride[1]))
        if show:
            print(f"Image convolved with \nKernel:{kernel}, \nPadding: {padding}, \nStride: {stride}")
        return cimg
What is happening above?
- First the kernel is checked; if none is given, a default 3-by-3 Sobel kernel is used.
- If the given kernel's shape is not odd, an error is raised.
- For padding, numpy stack methods are used.
- An empty list is initialized to store the convolved values.
- For the convolution itself:
  - the row boundary moves from the kernel height up to the image height, in steps of the vertical stride
  - the column boundary moves from the kernel width up to the image width, in steps of the horizontal stride
  - the current chunk of the image is multiplied elementwise with the kernel and summed
  - if the current sum is greater than 255, it is set to 255 (and negative sums are set to 0)
  - the sum is appended to the list
- Finally the list is converted into an array and reshaped to the right shape.
(Note: the original code added the bias inside the elementwise product, which adds it once per kernel element; it is applied once after the sum here.)
Recall the mathematics of the convolution operation:
g(x, y) = sum over i, j of f(x + i, y + j) * h(i, j)
where f is an image function and h is a kernel (mask or filter). As implemented here, without flipping the kernel, this is strictly cross-correlation.
What happens in a convolution is clear from the matrix form of the operation. Let's take a 5x5 image and a 3x3 Sobel-y kernel. (Using KaTeX for matrices was hard, so I am posting an image instead.)
We have to move the kernel over each and every pixel of the image, from top left to bottom right, placing the kernel over a chunk of the image and taking the elementwise matrix multiplication of the kernel and the chunk of the same shape. In most cases we use an odd-shaped kernel, so that we can place the center of the kernel on the center of the image chunk. Now we try to start from the top-left pixel, but since our kernel is 3 by 3, there are no pixels facing the first row of the kernel. So we have to work with the concept of padding, or we will lose those border pixels.
For the sake of simplicity, let's take zero padding. Now the first chunk of the image will be:
(3x3 matrix figure from the original post)
Now the convolution operation:
(figure from the original post)
Similarly, the final image will look like the one below after sliding through the rows and then the columns:
(figure from the original post)
But we will set to 255 every value that exceeds 255.
A better visualisation of a convolution operation can be seen in the gif below (I don't own this gif):
(animated figure from the original post)
Finally, visualizing our convolved image:
ip = ImageProcessing()
img = np.array([1, 10, 11, 200, 30,
                12, 200, 152, 223, 60,
                100, 190, 11, 20, 10,
                102, 207, 102, 223, 50,
                18, 109, 117, 200, 30]).reshape(5, 5)
cv = ip.convolve(img)
ip.show(cv)
If we print the output of this code, i.e. cv, we will see an array just like the one above. I have also written code for a Convolutional Neural Network from scratch in Python; please read it here.
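As a quick sanity check of the sliding-window operation described above, here is a minimal re-implementation (a sketch, not the article's exact convolve method): with a 3x3 identity kernel and zero padding, the output equals the input image, and large sums saturate at 255.

```python
import numpy as np

def convolve2d_zero_pad(image, kernel):
    """Stride-1, zero-padded sliding-window 'convolution' (really
    cross-correlation, as in the article) with clipping to [0, 255]."""
    padded = np.pad(image, 1)            # zero padding on all sides
    out = np.zeros_like(image)
    kh, kw = kernel.shape
    for r in range(image.shape[0]):
        for c in range(image.shape[1]):
            window = padded[r:r + kh, c:c + kw]
            out[r, c] = np.clip((window * kernel).sum(), 0, 255)
    return out

img = np.arange(25).reshape(5, 5)
identity = np.array([[0, 0, 0], [0, 1, 0], [0, 0, 0]])
same = convolve2d_zero_pad(img, identity)
# with the identity kernel the image comes back unchanged
print((same == img).all())
```

The identity-kernel check works because the window centered on pixel (r, c) of the padded image has exactly that pixel in its middle, which is what the kernel picks out.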
https://practicaldev-herokuapp-com.global.ssl.fastly.net/qviper/writing-a-image-processing-codes-from-python-on-scratch-4kd6
Mozilla/Firefox Bug Allows Arbitrary Program Execution 940 treefort writes "An article at eWeek has the lowdown. The article also has a link to the bug report which addressed this issue some time ago. Still, I feel safer using Firefox since malicious persons are much more unlikely to target any vulnerabilites. Note that this only affects users of Mozilla and Firefox on Windows XP or Windows 2000." New releases are already available on mozilla.org that fix this. Update: 07/09 00:41 GMT by CN : I removed the bum link to Bugzilla, since I guess they don't like us. Also I discovered that OSDN's own NewsForge has more on the situation. A clear advantage (Score:5, Informative) FYI, in case you didn't read the article, you can download the fix here [mozilla.org]. Re:A clear advantage (Score:5, Interesting) Come on we blast M$ for putting vbscripting and whatnot in IE, but this is just as dumb. It's not "in" the browser (Score:5, Informative) Bad way (Score:5, Interesting) IE bad because it is integrated into the OS Moz bad because it calls the OS because it's not integrated Both are bad. In fact, this is quite bad for Moz, as one of the touted improvements is that not being OS-integrated avoids such issues. Basically, you're passing on data from the windows URI handler... so it's almost like importing a windows IE/Web insecurity into Moz. Perhaps if Moz just imported the windows URI handlers as a datafile, and stripped out known baddies? Re:Bad way (Score:5, Interesting) Relying on stripping out "known baddies" means that what you're really relying on is your list of known baddies. Any new baddie is, by definition, not on that list. Stripping them out is a start (web pages don't need access to shell://), but it's not a complete solution. Re:Bad way (Score:5, Interesting) Re:Bad way (Score:5, Informative) From the article: So in other words, this fix only changes a pref which is easy to do without a huge download, etc. 
and is easy for the clueless, since it requires one click. Future versions will have a fix for the problem in general, rather than just this specific case. Re:Bad way (Score:5, Insightful) 0.9.1 was the same. The release notes were unchanged since 0.9 and there was just a note saying "minor bugfixes" in one place, and another note saying "critical update" somewhere else. Firefox is a great product, but they really need to do something about keeping users informed about their releases. We can't all be expected to browse through Bugzilla to see what has changed between releases. Re:Bad way (Score:4, Interesting) It is in fact an IE insecurity too as i just tested it with internet explorer and windows 2000 at this link: [mccanless.us] so it is infact an OS vunerability and not browser specific. Infact, we have a patch and IE doesnt. That makes me feel good Re:Bad way (Score:5, Insightful) Not at all. Mozilla falls down by trusting the multiple OSs it supports to securely handle something it doesn't understand. You did notice the part of the story that specifies this as a Mozilla/XP/2K exploit, right? No problem in Linux or *Bsd, etc., so I don't know how this OS intregration angle is relevant at all. Re:Bad way (Score:4, Funny) IMHO, they should worry more about security with the Linux version than the Windows one, as anybody using Windows has pretty clearly shown that they don't care much about security anyway. Re:Serendipity! Vindication in under one day! (Score:5, Insightful) And you also realize that, if Gecko had only been put in Free Computing systems, it would have essentially rotted away to nothingness years ago. Of course, you're also completely ignoring the amazing PR spin Mozilla is for Open Source. Sure, it has a bugs and holes--but those bugs are publicly filed, honestly reported, and fixed in a VERY timely fashion. (Then again, you're comparing Free Computing and pregnancy.) Re:Serendipity! Vindication in under one day! 
(Score:4, Insightful)
I really hope that if the mainstream media does stories on this they will make it clear that:
1. This is not a problem with the browser, it is a problem with the OS.
2. The problem with the OS was allegedly fixed by a previous MS patch... except it wasn't - MS obviously don't test their patches.
3. Even though it was not Mozilla's own problem they still jumped and fixed it within a day of the report.
4. Microsoft knew about the latest IE hole 10 months before it was exploited and still did nothing about it.

Re:Serendipity! Vindication in under one day! (Score:4, Insightful)
I totally disagree with you. As a user that is stuck on an XP platform because where I work I have no say (and I am far from alone here!), I am absolutely overjoyed that the coding community "wastes" its time and resources to allow me to use my home browser at work. Last time I checked, the community was not out to "make windows 'secure'," but was instead out to make good software for people to use freely. Granted, I am probably starting another flamewar here (which free, blablabla), but I think you need to leave it to the people doing the coding to decide how to spend their time and energy and not foist alternate agendas upon them.

Re:It's not "in" the browser (Score:5, Informative)
Agreed. It's not really a bug in the browser, it's a flaw in Windows. Windows has a bunch of protocol handlers registered. Mozilla knows how to handle a few (e.g. http, ftp, etc.). Whenever it encounters a protocol it doesn't know what to do with, it sees if Windows knows how to handle it. Windows either handles it in some way or it doesn't. If it doesn't, Mozilla puts up a message saying "xyz is not a registered protocol." Mozilla has no way of knowing that anything is bad or dangerous. The real bug is in Windows. The only real options the Mozilla developers have are to black/white list known dangerous protocols or simply not allow protocols Mozilla itself doesn't handle. Neither are optimal.

If you can't trust the OS you're on, you really limit yourself, bugs or not. So we banish the "shell" protocol today. Who's to say Windows won't have another flaw in another protocol tomorrow? This really isn't any different than plugins, which are, in a sense, external protocol handlers, i.e. they know how to handle certain content... just like a protocol handler. What if there is an exploit in a plugin? Mozilla just starts the plugin with the listed parameters and lets it go. Are you going to blame Mozilla for allowing the plugin to run, or are you going to require that Mozilla not allow "known, dangerous plugins" to run?

Re:This is a Mozilla problem (Score:5, Insightful)
Your analogy isn't quite right... let's think about this another way... you have a plugin you've installed that has a security flaw in it. Is Mozilla (or IE or any other browser) responsible for the security flaw? The registration of external protocol handlers is common practice across different platforms and browsers. I use OS X primarily at work and at home. I also run Linux here and have a Windows laptop at work. All three platforms use external protocol handlers to register helper applications. The part that I think is significant is that the OS registered a protocol handler that isn't safe in an internet context. So, you either blame the browser for doing what the OS manufacturer recommends you do... or you blame the fool who wrote the insecure protocol handler (and why the hell would you want a "run any program" protocol handler????)
Sujal

Re:This is a Mozilla problem (Score:4, Insightful)
Look through my comment history and see what I think of plugins. (Hint: they suck.) Yes, this is a Mozilla problem. Here is the deal. When you develop an application where anyone in the world has input to that program, you check the input for valid data and reject anything that is not valid. Period. A URI handler called shell:// is stupid. That's as if you're leaving an open rsh or ssh port with no password.

Again, this is the first time I've heard of such a handler, and I don't know exactly what it does or is supposed to do, but the fact that it's called shell tells me that it's not something that belongs on an internet application. Name me one more network application that would accept arbitrary commands without a password to be run on a computer. Just one.

Re:This is a Mozilla problem (Score:4, Insightful)
We agree about the stupidity of a shell:// handler... but Mozilla didn't provide it. I'm not sure what "valid data" they should be checking for here... the only thing I see at this point is that they need to start maintaining a blacklist of protocol schemes... Of course, if a particular bit of spyware/adware becomes popular, for example, they'll just be chasing down changing schemes.
Sujal

Re:It's not "in" the browser (Score:4, Insightful)
Yes, blame Microsoft. If you RTFA, you'd notice that Microsoft themselves fixed this bug in the next XP service pack (which won't be released for several more months...) Mozilla's quickfix was to just turn the protocol off. The Mozilla developers shouldn't be babysitting the Windows OS. It's an operating system protocol handler, just like any other registered helper app. What do you recommend happen if Flash has an exploit? Have Mozilla not load the Flash plugin? No, it's a bug in Flash and we expect Macromedia to fix it. This is not any different. But in the meantime, since this shell handler is not really used, the quick fix is to simply ignore the shell protocol (i.e. don't hand it off to the OS). The other fix is to dig into the registry and turn off the shell handler yourself.

Re:It's not "in" the browser (Score:5, Insightful)
Linux and Mac do not have such a thing to handle the "shell" protocol, thus it's not possible for them to have this flaw. Windows (in fact just 2000 and XP) are the only OSes that are vulnerable. Why? Because Microsoft wrote a dangerous handler that's not secure.
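For readers wondering how Windows ends up with handlers like shell: at all, and where you would "dig into the registry" to turn one off: any application can claim a URI scheme through a plain registry convention. A hedged sketch using a made-up "myproto" scheme (the key layout follows the standard Windows URL-protocol convention; the scheme name and program path are invented for illustration):

```
Windows Registry Editor Version 5.00

[HKEY_CLASSES_ROOT\myproto]
@="URL:MyProto Protocol"
"URL Protocol"=""

[HKEY_CLASSES_ROOT\myproto\shell\open\command]
@="\"C:\\Program Files\\MyProto\\handler.exe\" \"%1\""
```

The browser's part ends at looking up the scheme and launching whatever command is registered there, which is why a dangerous handler registered by the OS becomes every browser's problem.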
If it was secure, no one would be talking about this right now. The fact that Microsoft themselves have fixed this bug in the next XP service pack doesn't tell you it's an MS bug? I certainly understand it. It appears, however, that you do not. Mozilla is not arbitrarily launching a shell process merely because someone had a "shell:..." URI. It's asking the OS if it has an application that handles this protocol. Windows says yes and tells it how to launch the program. It passes the parameters to the application (just like any other helper app or plugin) and it's this application's responsibility to check parameters. How is this any different than, say, registering my XYZ program to handle the "xyz" protocol and the XYZ application has a flaw that is exploitable? Mozilla itself doesn't know one handler from another, and it shouldn't care. The system says "this application handles this protocol/content", so Mozilla hands it off.

Re:It's not "in" the browser (Score:5, Insightful)
The question remains: why does Mozilla "hand off" stuff from the Internet to the operating system? It obviously can't determine that doing so is safe, so it shouldn't do it. If you were able to run Windows with real restricted user accounts, this wouldn't really be such a problem. Oh, nonsense. Mozilla doesn't run with "real restricted user accounts" on UNIX/Linux either. The responsibility of deciding what is trusted and what is safe to "hand off" to the OS rests firmly with applications on most modern operating systems; every application programmer should know that, and it is not hard to program accordingly.

Re:A clear advantage (Score:5, Insightful)

Re:A clear advantage (Score:5, Informative)
Yeah, it was years before it was addressed. If you read the Bugzilla report, it was first opened in 2002. This is not a good example of "open software fixes things faster".

Re:A clear advantage (Score:5, Informative)

Re:A clear advantage (Score:5, Insightful)
However much developer attention it received (and actually it wasn't much - see my comments below), it doesn't change the fact that this exploit was present for almost two years. The speed with which a fix was issued after the general public was made aware of the problem was good. The specific comments you cite are indicative of this lack of concern - Comment #2 basically claims that it's not worth fixing security issues that are initiated without any form of user intervention whatsoever. And why? Because it's easy enough to get a luser to click on a malicious link, so why should we worry about sites that just bypass the malicious click?? I don't know about everyone else here, but that sort of logic concerns me! Just looking at the amount of interest in this bug after 2002 (only two brief comments in 2003 and another two in 2004; no patches submitted or even thought about) seems to suggest that if this had not been reported by the internet media this would never have been fixed. Or at least, not until exploits of it became commonplace. And with the recent internet-banking trojans using a similar exploit (i.e. download and run malicious code without any user prompting) in IE, the issue seems serious enough to me to have warranted a quicker fix.

Re:A clear advantage (Score:5, Informative)
This isn't really a fix for a security problem in Mozilla, it's a workaround for a security problem in Windows... which is why this only affects Mozilla on Windows.

Re:A clear advantage (Score:5, Insightful)
Well, regardless of the cause of the problem, if there's an exploitable hole it's still a security issue. Yes, it wasn't caused by some bad coding in Mozilla, but from reading the bug description and comments the exploit comes through HTML that has little or no valid use in legitimate, friendly web pages.
(Hence it was possible for Mozilla to quickly release an all-blocking fix once it became publicised - disabling this functionality is not going to inconvenience anyone.) In that situation, it still seems negligent to me when you're failing to fix an exploitable hole once it's come to your attention and when there's no disadvantage to doing so. As a very small-scale open-source developer myself, I feel that despite the GPL clauses about no warranty there's still something of a moral duty of care and trust in situations like this. Two years of being aware of this issue and doing little or nothing about it seems a bit worrying, IMO.

Re:A clear advantage (Score:5, Insightful)

Re:A clear advantage (Score:5, Informative)
Will probably end up happening soon. Open online bug tracking has already started for some of their products.

Re:A clear advantage (Score:5, Interesting)

Heretic, YOU MUST BURN! (Score:4, Funny)

Re:A clear advantage (Score:4, Informative)

Re:A clear advantage (Score:4, Insightful)
When you're writing cross-platform code, and it works perfectly fine on other platforms, and Microsoft keeps saying it's going to fix the bug, but stumbles around like a drunken barfly instead of releasing a fix... this is Mozilla's fault? Microsoft says "Yeah, we're aware of that, we're going to fix it in SP2, it should be out Real Soon Now." and Mozilla takes them at their word, since it's their OS, and all applications on their OS are vulnerable to the bug, so it's in their best interest to get a fix out - and quick. Yet here's an OS bug that's been around since 2002 that Microsoft has made 0 public progress on. And this is Mozilla's fault. For not making a hack to close an OS bug that the OS manufacturer should patch in a reasonably timely fashion. Yet doesn't. Yes, I agree, Mozilla is horrible, and Bill Gates is a saint. Yes. BTW, could I have some of the pills you're taking? They sound wonderful.
Re:A clear advantage (Score:5, Informative)
Go to the source for better info!!!

Re:A clear advantage (Score:4, Insightful)
The longer known bugs are out there (and hell, even documented) the more time there is for someone to go out and actually write the exploit. Of course there won't be any exploits available when the bug is first found - unless the person who found the bug is the one who wrote the exploit (a rare case). I doubt in 2002 there was enough attention directed at Mozilla to warrant a speedy bugfix, but since so many people are using it now it's under a lot more scrutiny. Now that Mozilla is on the "radar" of crackers and other ne'er-do-wells out there, the exploits of known-but-not-fixed critical bugs are likely to start showing up more often.

Re:A clear advantage (Score:5, Informative)
Actually 0 is the first mention of the shell: bug. Bug 167475 is a catch-all deciding whether or not Mozilla/Firefox should hand off unknown protocols. If it used a whitelist of known protocols as some people suggest, then it would break a lot of things relied upon over various platforms. The specific shell: bug was reported only Wednesday morning, which gives us a total time of less than 48 hours.

Re:A clear advantage (Score:5, Informative)
One of the comments explains why this "bug" is so long in being "fixed" - it was suggested that a dialog should be popped up before launching any external app (which Internet Explorer only started to do sometime this year), but this is inconsistent - external plugins, like Flash, don't get similar dialog boxes in any browser, even though such plugins have been exploited in the past. Also, some programs launch their own dialog warning the user of executing from untrusted environments, and having Mozilla also display a warning is redundant. Essentially, any program that registers itself as a plugin or web protocol is saying "I will take care of the security issues involved with my execution."

Therefore, while known dangerous protocols like vbscript were blacklisted (that's why this particular bug is FIXED, even though the comments suggest awareness of the current problem), they didn't implement a whitelist (which I guess is the plan for 1.0) or a dialog box (which Internet Explorer now relies upon, foolishly) because it was not consistent with the behavior towards external plugins. Presumably, with the bad press this has received, Mozilla has realized that Microsoft is going to put whatever-the-hell it wants to in as an external protocol, so unknown protocols should not be trusted. (Something that, apparently, Microsoft themselves have only realized in the last year or so.) The shell: protocol is disabled in 0.9.2, and only whitelisted plugins will be trusted in 1.0. I think.

Re:A clear advantage (Score:5, Informative)
The difference in large part in my opinion boils down to:
#1 WHO finds the bug. Is it the developers and community that discovers it in good faith, or is it a hacker, and the rest of us find out after a billion dollars has been lost worldwide to the latest worm, virus, etc.
#2 As you said, how quickly is the problem fixed. Certainly, private companies aren't necessarily horrible at doing this, despite what people say. I work for a small software company and assure you that any security issues with our product would be corrected promptly. By the same token, some open source projects w/o a steady lead or direction could have exploits that go unfixed for some time.
However, based on my observations and considering those two points, I'd say I certainly feel better using Firefox than IE.

Re:A clear advantage (Score:5, Insightful)

Re:A clear advantage (Score:4, Interesting)
One good thing, though. I've noticed a lot of larger companies are managing their desktops more tightly than they were a few years ago. Also shops running Citrix and Citrix-type environments have an advantage here... rather easy to make sure your users get the latest and greatest.
Home users are largely a lost cause however. Your average Joe isn't going to go out downloading update patches. The Windows Update or Software Update (Mac) type things work pretty well, but I'm just not sure how many users use them, and they don't cover 3rd-party apps.

Re:A clear advantage (Score:5, Funny)
#include int main() { printf("Hello World\n"); return 0; }

Re:A clear advantage (Score:4, Funny)
#include <stdio.h> int main(int argc, char **argv) { printf("Hello World\n"); return 0; }

Re:A clear advantage (Score:5, Funny)

Re:A clear advantage (Score:5, Informative)
Except for the semicolon, as the other poster pointed out, this does have some portability problems. Not sure if you'd call them bugs or not. You could argue that a preprocessor should allow this; some will indeed choke because there's no space before the <. The 0 is returned to the operating system, but operating systems have different rules for what return values mean. For example, in VMS, even numbers are errors, and will generate a nasty error message upon completion. Some people argue that the compiler should return "success" when the code says to return a 0. I haven't read anything official that supports that. And if so, how would you return a 0 if that's indeed the error you need to return to the operating system? For maximum portability with ANSI C, you probably want to do something like this: [Slashcode says to use <ECODE> instead of <PRE or <CODE, but how do I inline code or do indentation with <ECODE>?]

Even his sig has a typo!

Re:A clear advantage (Score:5, Interesting)
Opened: 2002-09-09 04:41 PDT.

Re:A clear advantage (Score:5, Insightful)
But some people [technewsworld.com] seem to be of the opinion that too many patches would be confusing. If this other vendor is right that people want no more than monthly patches, such a fix may have to wait weeks.

Re:A clear advantage (Score:5, Informative)

Re:A clear advantage (Score:3, Funny)

Re:A clear advantage (Score:5, Informative)
Valid point.
Inspect the XPI before installing it. It's a ZIP file which contains two js files. "install.js" copies "bug250180.js" into the default-prefs folder. "bug250180.js" creates the preference string "network.protocol-handler.external.shell" with the value "false", which disables this particular handler. The complete content of these files: bug250180.js: install.js: ...or something similar to that, which I can't show here because Slashcode fucks it up.

Re:A clear advantage (Score:5, Funny)

Re:A clear advantage (Score:5, Informative)
Forgive me. I'm an idiot when I'm flamebait.

Re:A clear advantage (Score:5, Informative)

RTFBR (Score:5, Interesting)
Reading the bugzilla entries for this and related bugs (an earlier post has the bugzilla url for this bug) is interesting in itself. It shows that the developers well understood the security implications of the bug - but they were also trying to fit the browser into the MS scheme of things, in which programs seem (I'm not a Windows expert at that level) to be able to register protocols (shell:, vbscript:, irc:) that they get to handle. Disabling this in Windows would then lead to Mozilla/Firefox behaving differently than users have come to expect. It was further pointed out that Mozilla could require a "yes" click in a dialog window, but that that would lead to other security issues. Interesting reading.

And now for some helpful links: (Score:4, Informative)
Note: If you click on download links for Firefox on the main page of mozilla.org [mozilla.org], you get 0.9.2 [mozilla.org]. The link on the Firefox page @ [mozilla.org] still gets you 0.9.1 [mozilla.org]. The link on the main page for the Linux version of Firefox still points to version 0.9.1. It seems that if you want 0.9.2 for Linux you'll have to compile it yourself [mozilla.org].
0.8 [mozilla.org]
0.9rc [mozilla.org]
0.9 [mozilla.org]
0.9.1 [mozilla.org]
0.9.2 [mozilla.org]
And a direct link to the newest release for the really lazy: Windows 0.9.2 [mozilla.org]
The question is, what is the shellblock.xpi [newaol.com] for? Does Bugzilla know? Sorry, links to Bugzilla from Slashdot are disabled. Ook!

Re:And now for some helpful links: (Score:4, Informative)
Note that this only affects users of Mozilla and Firefox on Windows XP or Windows 2000.

Re:And now for some helpful links: (Score:5, Informative)

Blast! (Score:4, Funny)

Re:Blast! (Score:5, Funny)

Re:Blast! (Score:4, Funny)

Only recent Mozilla bug. (Score:3, Interesting)

Re:Only recent Mozilla bug. (Score:3, Informative)

Here we go again... (Score:5, Insightful)
Just how does Mozilla/FireFox think it's going to keep malware from tricking the users into granting permission when the clueless masses come over from IE?

And this line says all I need to know (Score:5, Funny)
Sounds like a Windows problem, not a Mozilla problem. Oh, wait a minute... Current versions of Mozilla and Firefox pass unknown protocol handlers to the operating system shell to handle. Ding! Next. However: The attacker would have to know the location in the file system of the program. So just in case, I'm renaming my

Re:And this line says all I need to know (Score:5, Funny)
Well now you've blown it! Hint: Security through obscurity requires obscurity.

Huh? (Score:5, Funny)
I disagree... if anything, malicious people are MUCH more likely to target vulnerabilities.

Open Source Collaboration (Score:3, Insightful)
Nobody in their right mind can expect a product to be perfect, but what makes Mozilla different is that bugs are fixed instantly. And that's because of the open source community, which is far more reliable than the competition. People might disagree with me, but I still think these bugs (and their immediate fixes) only show how great open source really is.
Re:Open Source Collaboration (Score:5, Informative)

Re:Open Source Collaboration (Score:5, Informative)
The proposed change wouldn't even have prevented this vulnerability. It would have increased the requirement to exploit it from "Get the victim to visit your site" to "Get the victim to visit your site and click a link".

Re:Open Source Collaboration (Score:5, Informative)
As other posters have been mistaken, so are you.

The bug linked to in the Microsoft bug which affects Firefox (Score:5, Informative)

This proves once and for all (Score:5, Funny)

Strange coincidence? (Score:3, Interesting)

Update system (Score:5, Insightful)
Even better, take a leaf out of Norton's LiveUpdate program.

Re:Update system (Score:5, Informative)
By default it will periodically check for updates for the main program and extensions. You can even set it up to automatically download and install these updates.

Incorrect bug link (Score:5, Informative)
The correct bug number for this hole is bug 250180.

Concern has been around since 2002 (Score:5, Informative)
I have to agree that this is a Mozilla issue. To use a slightly contrived comparison: I read my mail using UW Pine. If someone sends me a script via attachment in email, I do not want Pine to test and see if the interpreter in the she-bang line is available on the host OS. My OS is not my mail reader; I do not want my mail reader allowing everything my OS can do. Ditto my web browser. There appear to be at least three Mozilla Bugzilla bugs related to this (likely a lot more):
#1 = Mozilla Bug 163767 (20 Aug 2002) "Pref to disable external protocol handlers"
#2 = Mozilla Bug 167475 (9 Sep 2002) "Disable external protocol handlers in all cases, excluding <A HREF"
#3 = Mozilla Bug 250180 (7 Jul 2004) "Shell: protocol allows access to local files"
It appears that Mozilla developers have been worried about this kind of problem going back to at least Aug 2002 (see #1 above).
#1 talks about an option to disable external protocol handlers (URI schemes) by default. I have to say that would be the right thing to do. "Secure by default" is the correct approach. #2 talks about an approach that uses context to determine if an external handler should be invoked. Basically, it assumes that if a user clicked a link, they wanted to invoke the handler; anything that happened implicitly (such as image loading) should not invoke an external handler. I do agree with those who commented (in that bug) that this is not the right approach. It adds complexity, and it still fails to address the fact that clicking a link is not something that should just up and run anything the web page wants. If I wanted that, I'd use MSIE. #3 is a reference to the "shell:" URI scheme in particular being abused this way. It blocks the "shell:" scheme to prevent that abuse. It does nothing to prevent abuses of other possible schemes, though. I suspect we may see this "feature" of Mozilla rear its ugly head again in the future. This is not a failure of Open Source in particular. Nor does it prove Mozilla is crap or Microsoft is okay after all. It means that people make mistakes. This should not surprise anyone. Stop pointing fingers and fix the problem.

Intentional (Score:5, Funny)
Oh yes, that's right! I went there.

Blacklisting vs. Whitelisting (Score:5, Insightful)
Duh. I have been saying this for some time now: Never use blacklists. Always use whitelists. If you forget to put an insecure operation on a security blacklist, you have a security hole. If you forget something on a whitelist, you just have an inconvenience. I am disappointed that the Mozilla developers did not have enough common sense to use whitelists in the first place. But then, it seems like most computer security schemes are blacklist-based, which explains why computers are so insecure.

Re:Blacklisting vs. Whitelisting (Score:5, Insightful)
One of the big disadvantages to the whole blacklist/whitelist thing is, indeed, inconvenience. But you seem to be thinking it's just a minor inconvenience where, to a lot of people, it's major. Example: A while ago (I don't know if they still do, but it wouldn't surprise me) Unreal registered unreal:// to open games. You didn't have to do anything, it just worked. A lot of sites relied on this (click hyperlink, open Unreal, badabing badaboom). Now, if the web browser used a whitelist, there's a few options. First off, it could be utterly impossible for Unreal to register even with user assistance - bzzt, this is bad. Remember, users want things to be easy. Second, it could require the user to go through the steps to add unreal:// to their settings. Also bad, because the Unreal coders don't want to have to change their installer every time the interface changes. Plus it's irritating for users. Bzzt. Third, it could ask the browser/OS to register itself, and the browser/OS could pop up a confirmation box. But we already know users can be duped into clicking just about anything ("You MUST click Yes for real 100% hardcore xxx porn!") and so this wouldn't exactly be a rock-hard barrier. Bzzt. Fourth, it can do what it does now, which is also flawed. Bzzt. I personally think solution 3 is the best one - but if Windows doesn't already have hooks for things like this, it might not be practical for Mozilla to add a happy little dialog. There might be a way to query the system about what it *would* do if we happened to pass it an unreal:// url, then prompt the user to see if that's what they really want to happen, but I bet that's exploitable also ("What's this rundll thing? Oh, the line says 'free porn'! I'll click yes"). I'd agree that more security = better (and more convenience = better too - the trick lies in balancing the two), but just saying "we should use a whitelist" leaves so much undecided that it's almost useless.
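The whitelist position argued in the comments above can be made concrete. A hedged sketch (not Mozilla's actual code) of what whitelist-based dispatch of external protocols looks like:

```javascript
// Only schemes on the approved list are ever handed to an external
// handler; anything unknown (e.g. "shell:") is refused by default.
const SAFE_SCHEMES = new Set(["http", "https", "ftp", "mailto"]);

function dispatchUrl(url, openExternal) {
  const scheme = url.split(":", 1)[0].toLowerCase();
  if (!SAFE_SCHEMES.has(scheme)) {
    throw new Error("refusing unregistered scheme: " + scheme);
  }
  return openExternal(url);
}
```

The trade-off is exactly the one described above: a missing whitelist entry is an inconvenience for schemes like unreal://, while a missing blacklist entry is a security hole.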
Webpage should highlight the patch more (Score:5, Insightful)

No problem for that other alternative browser... (Score:4, Insightful)
Lesson of today: there is always a danger in presenting yourself as 'the safe alternative'. Proper engineering can reduce risks, but there are never guarantees. Not that this example was especially worrying imho: you'd still have to be tricked into visiting a specific website that plans to harm you. Not that likely unless you tend to visit the bowels of the web...

Damn straight it's a bug in Windows! (Score:5, Insightful)
Mozilla and Firefox should NOT be using this functionality; they should be doing ALL their own URL parsing and handling on Windows, Linux, Mac OS X, and so on, because they can *not* depend on the native OS to do security right. Even Apple doesn't do it right (see how they 'fixed' the help: problem), and Microsoft has refused to fix it on their side even under threat of judicial dismemberment. From the article: Is this really a security hole? When Mozilla receives a shell: request, it passes it on to an external handler in Windows. The "fix" for this is to disable this functionality which, as far as I can tell, is totally unnecessary to begin with. External handlers - programs outside Mozilla - have no specific security model, so the only way to deal with them is to make individual exceptions like this one. Messy? Yes. But that's Windows. The only way to deal with this is to ONLY use external handlers you know are safe, rather than using all but the handlers you know have holes in them. Anything else is just following Microsoft's lead into a decade of virus-mania.

This IS 100% Mozilla's fault (Score:5, Insightful)
I am shocked that everyone here is taking Mozilla's side. I love Mozilla, and have used it since the beta versions. I install it on mom & pop computers all the time for security. But this is definitely Mozilla's fault. Mozilla should not pass unknown protocols to Explorer.

IMHO, that defeats the purpose of Mozilla. That would be like coding Mozilla to pass ActiveX controls to Internet Explorer since it doesn't support them. I treat Mozilla as a standalone app, and I consider that an advantage. I'm not vulnerable to scripting exploits, MS Office exploits, etc. But now I am told it passes some work to Explorer. I consider that a bug. I don't want it to pass everything except shell: to IE. I want it to pass nothing to IE.

How can I disable all external protocols (Score:4, Insightful)
Is there some way I can disable them all?

Re:Just to be fair... (Score:3, Insightful)

Re:Just to be fair... (Score:5, Interesting)
Last weekend, I converted three people from IE6 to Moz FF 0.9.1, based on the fact that it's more secure than IE. And now I'm reading that it has a critical issue (whether it is a bug or not, it is an issue). How do I get their machines patched without my intervention? Where is that big red bouncing icon that appears when starting FF, which says that "you need to install this/these updates immediately to keep your machine secure"? Hello, FF developers! Critical FF updates are not found on windowsupdate.microsoft.com! [microsoft.com] Where is your own auto-update feature?

Re:Just to be fair... (Score:5, Informative)
Tools -> Options -> Advanced -> Software Update. To check manually: Tools -> Extensions -> Update. It's not perfect yet, but remember, it's still 0.9.x, not 1.0. (Wait, you did want an answer, right?)

Re:Just to be fair... (Score:5, Insightful)

Re:bias (Score:4, Insightful)
Um, seriously, if you think that's not true, you need to get your head examined - of course people are much less likely to target these vulnerabilities, because a much larger percentage of people currently use IE than Firefox, not to mention that those who do use Firefox are more likely to be at least slightly more savvy web users than their IE-using counterparts. Hence there is less incentive for those with malicious intentions to target Firefox (for now at least). So, how is the truth bias?

Re:bias (Score:5, Insightful)
It's just frustrating to hear people whine about security via lower market share, but then excuse serious flaws using that logic when it's convenient. I don't, however, refute the point. I'm just of the camp that would prefer stories to at least feign objectivity, and leave the opinion for the comments.

Re:Next! (Score:5, Funny)
NCSA Mosaic?

No, it doesn't. (Score:3, Informative)
Also note that this is a problem with the Windows URI handler rather than Mozilla. Mozilla passes any protocol it doesn't understand to Windows, and Windows uses it to execute a local file. That's why this problem doesn't exist in anything but Windows. This just goes to show that Microsoft makes insecure software, and that insecurity often bleeds into ot

Re:Two beefs... (Score:4, Informative)
I don't like that either. Nor do the Mozilla devs. So they posted a patch via an extension [mozilla.org] to be applied to ff, tb and seamonkey. Cheers...

Re:Congratulations (Score:3, Insightful)
I don't hear the OSS community pretending their software has no bugs or holes.

Re:Firefox pass unknown protocol handlers to the O (Score:5, Insightful)
Yup. Because Mozilla, as a local application, has a much higher set of privs than a remote website does. This is basically taking code (high-level instructions, but code) from a known insecure zone and telling the OS to run it without any built-in safeguards. And what do you know: we have an exploit. Here's a fun example of how IE gets it right. Take the URI from another example on this discussion. Type that into Start/Run on a Windows box - it works. Type it into the Address bar of IE - it works. Toss it into a webpage on the local machine and click on it - it works. Toss that webpage onto a remote server and click on it - it doesn't work any more. Different behaviors for different levels of trust.
Mozilla defeats this by passing things to the shell with the same level of trust as the user has given it, the local program, which includes the (necessary) ability to mess with the filesystem.

Re:What moron put in "shell:"? (Score:5, Insightful)

Re:Shellblock XPI... (Score:4, Informative)

Try this page: test page [mccanless.us]

After I installed the patch (without restarting Mozilla), all four example links were available to click on. Clicking on the fourth link, marked "Clicking this could crash your system!!!", did cause Mozilla to go crazy. It kept opening new windows stupidly fast until it crashed. After it died, I restarted it and went back to the page - now three of the links are completely disabled (I can't even highlight them), and the link that does work (the one with the example iframe exploit) has no malicious effect - the iframe no longer shows the Windows tip but is empty instead. So my version of Moz clearly wasn't fixed until it had been restarted.

Re:I knew it!!! (Score:4, Informative)

Re:Where's the patch for 2000? (Score:4, Informative)

1. Type "about:config" in your URL bar.
2. Find "network.protocol-handler.external.shell".
3. Change the value to false.

That's all that you need to do to fix it.

Re:Mozilla VS IE (Score:4, Informative)

Also, if you RTFA, you'd realise this was supposed to have been fixed in a Windows service pack, but isn't. So yes, I blame Microsoft :) Problem doesn't exist on any other OS running Firefox... smash.
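The same about:config change described in the steps above can also be applied from a prefs file, so it survives profile resets. This is a sketch, assuming a standard Mozilla/Firefox profile; the first pref is the one named in the thread, while the second (refusing all protocols with no built-in handler by default) is my addition, not something the thread mentions:

```javascript
// user.js — place in your Mozilla/Firefox profile directory.
// Stops the shell: protocol from being handed to the Windows shell.
user_pref("network.protocol-handler.external.shell", false);
// Optional extra: refuse *all* externally-handled protocols by default.
user_pref("network.protocol-handler.external-default", false);
```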
http://slashdot.org/story/04/07/08/2159244/mozillafirefox-bug-allows-arbitrary-program-execution
Next generation GPU API for Python

Project description

wgpu-py

A Python implementation of WebGPU - the next generation GPU API.

Introduction

In short, this is a Python lib wrapping wgpu-native and exposing it with a Pythonic API similar to the WebGPU spec. To get an idea of what this API looks like, have a look at triangle.py and the other examples.

Status

The wgpu-API has not settled yet, use with care!

- Coverage of the WebGPU spec is complete enough to build e.g. pygfx.
- Test coverage of the API is 100%.
- Support for Windows, Linux, and MacOS (Intel and M1).
- Until WebGPU settles as a standard, its specification may change, and with that our API probably will too. Check the changelog when you upgrade!

Installation

pip install wgpu

The wheels include the prebuilt binaries. If you want to use a custom build instead, you can set the environment variable WGPU_LIB_PATH. You probably also want to install glfw (for desktop) and/or jupyter_rfb (for Jupyter).

Platform requirements

Under the hood, wgpu runs on Vulkan, Metal, or DX12. The wgpu-backend is selected automatically, but can be overridden by setting the WGPU_BACKEND_TYPE environment variable to "Vulkan", "Metal", "D3D12", "D3D11", or "OpenGL". On Windows 10+, things should just work. On older Windows versions you may need to install the Vulkan drivers. You may want to force "Vulkan" while "D3D12" is less mature. On Linux, it's advisable to install the proprietary drivers of your GPU (if you have a dedicated GPU). You may need to apt install mesa-vulkan-drivers. Wayland support is currently broken (we could use a hand to fix this). On MacOS you need at least 10.13 (High Sierra) to have Vulkan support.

Usage

Also see the online documentation. The full API is accessible via the main namespace:

import wgpu

But to use it, you need to select a backend first. You do this by importing it.
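As an aside, the WGPU_BACKEND_TYPE override described under Platform requirements can be pictured with a toy selector. This is purely illustrative — the function name and the per-platform defaults below are my assumptions, not wgpu-py's actual logic:

```python
import os

def select_backend_type(platform_name, environ=None):
    # An explicit WGPU_BACKEND_TYPE always wins; otherwise fall back to a
    # plausible per-platform default (hypothetical values for illustration).
    environ = os.environ if environ is None else environ
    override = environ.get("WGPU_BACKEND_TYPE")
    if override:
        return override
    defaults = {"win32": "D3D12", "darwin": "Metal", "linux": "Vulkan"}
    return defaults.get(platform_name, "Vulkan")

print(select_backend_type("darwin", {}))                              # Metal
print(select_backend_type("linux", {"WGPU_BACKEND_TYPE": "OpenGL"}))  # OpenGL
```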
There is currently only one backend:

import wgpu.backends.rs

To render to the screen you can use a variety of GUI toolkits:

# The auto backend selects either the glfw, qt or jupyter backend
from wgpu.gui.auto import WgpuCanvas, run, call_later

# Visualizations can be embedded as a widget in a Qt application.
# Import PySide6, PyQt6, PySide2 or PyQt5 before running the line below.
# The code will detect and use the library that is imported.
from wgpu.gui.qt import WgpuCanvas

# Visualizations can be embedded as a widget in a wx application.
from wgpu.gui.wx import WgpuCanvas

Some functions in the original wgpu-native API are async. In the Python API, the default functions are all sync (blocking), making things easy for general use. Async versions of these functions are available, so wgpu can also work well with Asyncio or Trio.

- Use pip install -e .; this will also install runtime dependencies as needed.
- Use pip wheel --no-deps . to build a wheel.

Testing

The test suite is divided into multiple parts:

- pytest -v tests runs the core unit tests.
- pytest -v examples tests the examples.
- pytest -v wgpu/__pyinstaller tests if wgpu is properly supported by pyinstaller.
- pytest -v codegen lints the generated binding code.

There are two types of tests for examples included:

Type 1: Checking if examples can run

When running the test suite, pytest will run every example in a subprocess, to see if it can run and exit cleanly. You can opt out of this mechanism by including the comment # run_example = false in the module.

Type 2: Checking if examples output an image

You can also (independently) opt in to output testing for examples, by including the comment # test_example = true in the module. Output testing means the test suite will attempt to import the canvas instance global from your example, and call it to see if an image is produced. To support this type of testing, ensure the following requirements are met:

- The WgpuCanvas class is imported from the wgpu.gui.auto module.
- The canvas instance is exposed as a global in the module.
- A rendering callback has been registered with canvas.request_draw(fn).

Reference screenshots are stored in the examples/screenshots folder; the test suite will compare the rendered image with the reference.

Note: this step will be skipped when not running on CI. Since images will have subtle differences depending on the system on which they are rendered, that would make the tests unreliable.

For every test that fails on screenshot verification, diffs will be generated for the rgb and alpha channels and made available in the examples/screenshots/diffs folder. On CI, the examples/screenshots folder will be published as a build artifact so you can download and inspect the differences. If you want to update the reference screenshot for a given example, you can grab those from the build artifacts as well and commit them to your branch.
https://pypi.org/project/wgpu/0.8.1/
10 August 2011 09:16 [Source: ICIS news]

LONDON (ICIS)--OMV's operating profit of its petrochemical segment for the second quarter of 2011 fell by 70% to €11m ($15.7m) from €37m in the same period last year, the Austrian group said on Wednesday.

The group's operating profit for the segment declined because of a six-week turnaround at its 500,000 tonne/year ethylene (C2) plant and other petrochemical units in Schwechat, Austria, OMV said in a statement.

"An improvement in olefin margins was insufficient to offset the impact [of the petrochemical shutdown] at the Schwechat refinery," the company said.

The group's petrochemical sales volumes for the second quarter fell by 27% year on year to 390,000 tonnes, the statement added.

However, the company said its "petrochemical margins were very strong in the second quarter and are expected to remain attractive in the second half of the year".

The company's petrochemical net margin for the second quarter of the year was 19% above that in the second quarter of 2010 and 15% above that in the first quarter of this year, OMV added, without specifying the figures.

OMV's overall second-quarter net profit fell by 20% year on year to €269m, with its oil production affected by the political conflict and instability in north Africa.

"The second quarter brought multiple challenges, some of which we were not able to influence," said OMV CEO Gerhard Roiss, pointing to the political instability in north Africa.

"Clearly, we will continue to face challenges in the future, but with our updated strategy to be announced in September, we will be strongly positioned to manage them," Roiss added.
http://www.icis.com/Articles/2011/08/10/9483861/austrias-omv-petrochemical-profit-falls-on-major-plant.html
Problem statement

In the problem "Sort Array By Parity II," we are given a parity array where all the elements are positive integers. The array contains an even number of elements, with an equal number of even and odd values. Our task is to rearrange the elements of the array in such a way that parity[i] contains an even element when i is even, and an odd element when i is odd, and then return the new array.

Example

parity = [1,2,3,4]
Output: [2,1,4,3]

Explanation: All the possible arrays that satisfy the condition are [2,1,4,3], [2,3,4,1], [4,1,2,3], and [4,3,2,1]. Any one of these arrays is a correct answer.

Approach for Sort Array By Parity II Leetcode Solution

The first and most basic approach to this problem is to create a new array and then traverse the old array. When we encounter an even element, we put it into the next even position of the new array; when we encounter an odd element, we put it into the next odd position. This approach uses extra space, and we can improve on it with an in-place rearrangement. The idea is that if we put all the even elements in even positions, then the odd elements will automatically end up in odd positions. So we only need to focus on how to put even elements at even positions. We will follow these steps:

- Initialize variable i with 0 and j with 1. Here i will visit only even positions, so we will increment its value by 2 every time, and j will visit only odd positions, so we will increment its value by 2 every time.
- If parity[i] is odd, then we will find a j for which parity[j] is even and swap the elements at i and j.
- We will repeat these steps while the value of i is smaller than the length of the parity array.
- Return the parity array.
Implementation

C++ code for Sort Array By Parity II

#include <bits/stdc++.h>
using namespace std;

vector<int> sortArrayByParityII(vector<int>& A) {
    int n = A.size();
    int j = 1;
    for (int i = 0; i < n; i += 2)
        if (A[i] % 2 == 1) {
            while (A[j] % 2 == 1)
                j += 2;
            swap(A[i], A[j]);
        }
    return A;
}

int main() {
    vector<int> arr = {1, 2, 3, 4};
    vector<int> ans = sortArrayByParityII(arr);
    for (int i = 0; i < arr.size(); i++)
        cout << ans[i] << " ";
    cout << endl;
    return 0;
}

Output: 2 1 4 3

Java code for Sort Array By Parity II

import java.util.Arrays;

public class Tutorialcup {
    public static int[] sortArrayByParityII(int[] A) {
        int n = A.length;
        int j = 1;
        for (int i = 0; i < n; i += 2)
            if (A[i] % 2 == 1) {
                while (A[j] % 2 == 1)
                    j += 2;
                swap(A, i, j);
            }
        return A;
    }

    private static void swap(int[] A, int i, int j) {
        int temp = A[i];
        A[i] = A[j];
        A[j] = temp;
    }

    public static void main(String[] args) {
        int[] arr = {1, 2, 3, 4};
        int[] ans = sortArrayByParityII(arr);
        System.out.println(Arrays.toString(ans));
    }
}

Output: [2, 1, 4, 3]

Complexity Analysis of Sort Array By Parity II Leetcode Solution

Time complexity

The time complexity of the above code is O(n) because we are traversing the parity array only once. Here n is the length of the parity array.

Space complexity

The space complexity of the above code is O(1) because we are using only a constant number of extra variables.
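The same in-place two-pointer idea translates directly to Python. This version is my own sketch, not part of the original article:

```python
def sort_array_by_parity_ii(a):
    # j scans odd indices looking for an even value sitting in the wrong slot.
    j = 1
    for i in range(0, len(a), 2):      # even indices only
        if a[i] % 2 == 1:              # odd value found in an even slot
            while a[j] % 2 == 1:       # advance j to an even value in an odd slot
                j += 2
            a[i], a[j] = a[j], a[i]    # swap them into the right parity slots
    return a

print(sort_array_by_parity_ii([1, 2, 3, 4]))  # [2, 1, 4, 3]
```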
https://www.tutorialcup.com/leetcode-solutions/sort-array-by-parity-ii-leetcode-solution.htm
CodePlex - Project Hosting for Open Source Software

I migrated a page done in MVC3 Razor using Routes.cs / HomeController etc. The page works fine in Orchard. Now I created a Widget out of it which obviously gets created, BUT it doesn't show the content, i.e. the Cube.

? Is it possible if the Create method in migrations.cs is empty?
? What am I missing here?

Thanks for your time, ed

I used the Migrations.cs below.

using System;
using System.Collections.Generic;
using System.Data;
using Orchard.ContentManagement.Drivers;
using Orchard.ContentManagement.MetaData;
using Orchard.ContentManagement.MetaData.Builders;
using Orchard.Core.Contents.Extensions;
using Orchard.Data.Migration;
using Orchard.Indexing;

namespace ICube
{
    public class Migrations : DataMigrationImpl
    {
        public int Create()
        {
            return 1;
        }

        public int UpdateFrom3()
        {
            ContentDefinitionManager.AlterPartDefinition("ICubePart", builder => builder.Attachable());
            return 4;
        }

        public int UpdateFrom5()
        {
            // Create a new widget content type with our ICube
            ContentDefinitionManager.AlterTypeDefinition("ICubeWidget", cfg => cfg
                .WithPart("WidgetPart")
                .WithPart("ICubePart")
                .WithPart("CommonPart")
                .WithSetting("Stereotype", "Widget")
                .Creatable()
                .Indexed());
            return 6;
        }
    }
}

Placement?

Oh no, wait, the migration numbers need to be connected. UpdateFrom3 will never run because Create is only returning 1.

thanks for returning ... well, I start to think that I'm having an issue with my system, or ... with myself! I understand that the migration always has to be "UpdateFromN" (N being the last return value) and then "return N+1" (I tried so many variations that I sent a screwed-up one, sorry). I started from scratch several times with my ICube, using the HomeController.cs and a Routes.cs; this works just fine as a separate page. HOWEVER, when trying to create the 'part' and in turn the 'widget' by first adding "UpdateFrom1 ... return 2", I'm NOT prompted to update my ICube feature!
Neither do I get prompted when adding "UpdateFrom2 ... return 3". So I'm starting to think that I'm missing something fundamental or doing it in the wrong sequence ... Any hint and your time is appreciated, ed

namespace ICube
{
    public class Migrations : DataMigrationImpl
    {
        public int Create()
        {
            return 1;
        }

        public int UpdateFrom1()
        {
            ContentDefinitionManager.AlterPartDefinition("ICubePart", builder => builder.Attachable());
            return 2;
        }

        public int UpdateFrom2()
        {
            // Create a new widget content type with our ICube
            ContentDefinitionManager.AlterTypeDefinition("ICubeWidget", cfg => cfg
                .WithPart("WidgetPart")
                .WithPart("ICubePart")
                .WithPart("CommonPart")
                .WithSetting("Stereotype", "Widget")
                .Creatable()
                .Indexed());
            return 3;
        }
    }
}

Once a migration has run, the migration number is recorded in the database, so unless you go into the database and roll back the migration number manually, that particular migration will never run again. Also, if you are using 1.x (not 1.3) then migrations are automatically executed without user intervention. And of course, on all versions, when the module is first enabled, all migrations are run silently.

thanks, that clarifies a lot; I was afraid changing that record could mess up the system! Now I can see the 'Content Part' and 'Content Type'. But now coming back to my original issue - not seeing my content (ICube) when applying the widget to a Zone. You mentioned originally 'placement' could be the issue (see my placement.info file below). But maybe another piece of info could help: in Content Types I have 'ICube' with 'Create new ICube'; when clicking on 'Create new ICube' I get the error below. I'm afraid I still do something wrong when creating this widget.

? How should I set placement.info?

PS: I agree I need to learn more about this file!
thanks again for your patience, ed

Line 6: <fieldset>
Line 7: @Html.LabelFor(widget => widget.LayerId, T("Layer"))
Line 8: @Html.DropDownListFor(widget => widget.LayerId, new SelectList(Model.AvailableLayers, "Id", "Name")) <=== error
Line 9: </fieldset>
Line 10: <fieldset>

Source File: d:\MyWebSites\ASGB\Modules\Orchard.Widgets\Views\EditorTemplates\Parts.Widgets.WidgetPart.cshtml Line: 8

<Placement>
  <Place Parts_ICube="Content:after"/>
</Placement>

Are you seeing any problems with other widgets and other layers?

To familiarize myself with this business I loaded the Bing.Maps feature - which works fine; i.e. I can apply it to any Zone on any page I want. On my ICube feature, regardless of which page / Zone I apply the widget to, I see the following differences when checking 'Content' in the dashboard to compare Bing.Maps with ICube:

Content Parts: no difference
Content Types: no difference
Content Items: As the first item in 'Content Items' I have 'ICube' and above it 'Edit' (the rest of this row is empty). 'ICube' is the title I gave when adding the widget to my Zone. When clicking 'Edit' I'm directed to the 'edit widget' screen (the same screen I got when choosing the widget).

Another difference in Dashboard News: I have an item 'ICube'. When clicking it, I'm directed to the same screen as when clicking 'widget' in the dashboard!

As a novice, on my setup, I see three differences:

A) The 'Create' in data migrations is empty; I guess this is correct because I do not use any DB here.
B) My view ICube.cshtml (created in MVC3 VS2010) is in the root of Modules/ICube/views. I.e. I do not have Modules/ICube/Views/parts/ICube.cshtml NOR Modules/ICube/Views/EditorTemplates/parts/ICube.cshtml. Although, I added these directories and did a test with ICube.cshtml in both of them!
C) I do not use any drivers nor handlers!

To repeat myself: I really appreciate your help, ed

If you don't use the DB, you don't need a migration at all. I'm beginning to suspect you forgot placement.
Here's a question ... if you're not using the DB, then what is ICubePart? Can you show us the code for the model, handler and driver?

ICubePart: I start to realize (learn) that ICubePart is only used in the context of a DB. So I guess my ICubePart got generated because I used .WithPart("ICubePart") in my migrations.cs - it seems this is not needed, or might even confuse Orchard! As I said in my previous reply: I use NEITHER a Model (no DB) NOR a Driver / Handler. All I want is for my ICube.cshtml, which I developed in MVC3 VS2010, to run under Orchard (essentially it is some <img src="... statements and some jQuery code). This runs fine in Orchard using a 'Routes.cs' and 'HomeController.cs' file. Now I would like to generate a 'widget' for this ICube which I can apply to any Zone!

PS: I'm more than happy to provide you with the complete ICube 'Module'; where could I send a .zip file!? Sorry for my ignorance, but thanks for a very good product, ed

If you haven't got a driver or a handler then you're not generating a view. Drivers are used to emit shapes which map to .cshtml files (or can be rendered in code). You can also do this in handlers, but it's better to use a driver because otherwise you can't use Placement at all. You don't need a record for a content part but you still need to generate the view somewhere.

Well, could you tell me what such a Driver would look like?! I copied the Driver.cs below from Bing.Maps and get the error below (I know this is not professional and really just a guess). I must be missing something substantial here ... need help, i.e. more documentation and/or a sample ..., ed

CS0246: The type or namespace name 'ICubePart' could not be found (are you missing a using directive or an assembly reference?)

PS: before I used the Driver I checked: I can see the ICubePart in 'Content parts'!
using Orchard;
using Orchard.ContentManagement;
using Orchard.ContentManagement.Drivers;

namespace ICube.Drivers
{
    public class ICubeDriver : ContentPartDriver<ICubePart>
    {
        protected override DriverResult Display(
            ICubePart part, string displayType, dynamic shapeHelper)
        {
            return ContentShape("Parts_ICube",
                () => shapeHelper.Parts_ICube(
                    // ??????????????????
                ));
        }

        //GET
        protected override DriverResult Editor(ICubePart part, dynamic shapeHelper)
        {
            return ContentShape("Parts_ICube_Edit",
                () => shapeHelper.EditorTemplate(
                    TemplateName: "Parts/ICube",
                    Model: part,
                    Prefix: Prefix));
        }

        protected override DriverResult Editor(
            ICubePart part, IUpdateModel updater, dynamic shapeHelper)
        {
            updater.TryUpdateModel(part, Prefix, null, null);
            return Editor(part, shapeHelper);
        }
    }
}

In which namespace is ICubePart? If not in ICube.Drivers, you would need to add a using directive, like "using ICube.Models;"

There is obviously a blackout on my side. As I said, I do NOT have a Model, hence I do NOT have a namespace ICube.Models (nor do I have a Driver / Handler). So Randompete told me that I need to define a driver (see his last reply). My only reference to ICubePart is in 'migrations.cs' (see below; the full code is in earlier thread replies).

public int UpdateFrom1()
{
    ContentDefinitionManager.AlterPartDefinition("ICubePart", builder => builder.Attachable());
    return 2;
}

My understanding is that I would define a "public class ICubePart : ContentPart<ICubeRecord> { .... }" in the model .... if I had a DB needing a Model.cs. Again: I'm happy to .zip you my ICube Module ... maybe that would be more productive! I would appreciate it if you would read the complete thread once to see where I'm struggling, thanks!

thanks for your time, ed
Can you attach a debugger and put a breakpoint in Display?

You need a model, plain and simple. ICubePart needs to exist, otherwise you can't reference it. But it doesn't need to have a record. You can just do:

public class ICubePart : ContentPart { ... }

Hi Randompete & all who spent their time, THANKS! All is fine! ... plain and simple, an 'empty' Model did it! Like below:

PS: would it be beneficial to document this scenario somewhere to prevent a similar thread-ordeal!?!? Or am I the only ignorant one! But again, thanks a million, ed

using System.ComponentModel.DataAnnotations;
using Orchard.ContentManagement;
using Orchard.ContentManagement.Records;
using System;

namespace ICube.Models
{
    public class ICubePartRecord : ContentPartRecord {}
    public class ICubePart : ContentPart<ICubePartRecord> {}
}

You shouldn't need the record; ICubePart can just be a ContentPart, rather than a ContentPart<ICubePartRecord>. But yes, more documentation can always help. I think the orchardproject.net docs go through the "standard case", which is basically ContentPart<TRecord>, but not the simpler case you have here.

Well, without the 'public class ICubePartRecord' I also got an error: "are you missing etc. ..." For now I'm experimenting a bit to learn! Because I have already found some different behaviour on this widget compared to the downloaded Bing.Maps widget ... but that would be a separate thread after I'm more familiar! thanks ed

Umm, yeah, that's why I said remove the reference to ICubePartRecord. Basically if you reference a class it has to exist, otherwise you get compiler errors. But ContentPart doesn't need a <TRecord>.

OK, I appreciate your hint and will apply it next time. Thanks again for everything, ed
https://orchard.codeplex.com/discussions/297378
Hello everyone reading this question. Yesterday we were told at school to make a program that reads a string from the keyboard and then prints the string. The library I have imported and "Inputs.readString("Enter your age");" help me to get the string from my keyboard by creating a box where I can actually type what string x will get. The box should pop up again every time I close it or leave it blank. The problem is that even after I give x a string, the box still pops up ... unless I stop the program. Is this a bug in the library or is it just me coding it wrong? There you can get the library:

import iadt.creative.Inputs;

String x;
do {
  x = Inputs.readString("Enter your age");
  println(x);
} while (x != null);

Answers

It's going to be pretty hard to help you, since we don't have access to the Inputs.readString() function. However, what you described is exactly what I would expect this loop to do. Notice that you're looping while x is not null, which will always be true when the user enters some text.

The Inputs.readString() function is at the link I gave. The problem is, when I do while(x==null) or while(x.equals(null)) and close the box, it won't show the box again to add the string. Somehow it exits the do/while loop.

What exactly does Inputs.readString() return? Does it return null, or does it return an empty String ""?

Class Inputs is a collection of static utility functions which wrap up Java's JOptionPane functionality. This is readString(), BTW, if someone is too lazy to check that out:

That is a good question; when I try to print the string x, for example, in this case it prints nothing. The function itself is helping me to assign the value/string to the variable.

@GoToLoop: I'm not lazy. I'm trying to help OP work through this. @portarul23: Try using print statements to check the value of the String. What happens if you do println(x.equals(""))?
true if I close the box, false if I type something into the box

Okay, I figured it out. It seems to work with this code. Thanks a lot for your help. And sorry for keeping you so long on this thing.

Glad you made it! :D Here's my own attempt after copying & pasting some of "Inputs.java" and modifying it a bit:

@KevinWorkman: Want to do my homework next?

What happens if you escape or cancel the dialogue box? Do you still get an empty string? Put a delimiter around your output; it catches a lot of errors (including trailing whitespace):

println("[" + x + "]");

Doesn't help when you've got a string containing "null" though (not a null string, a string containing the word "null").

A more elaborate version for the "Inputs.java" library: "Keyboard_Input_Library.pde" "Inputs.java"

@KevinWorkman: The OP would have gained a lot from your input. Not only did he manage to reach a solution, he will have gained the knowledge of finding the solution. Nice one!

Wow. So you decompiled what my teacher made so I can understand better how the Inputs class works. Thanks a lot for that. I will take a look and try to figure those things out. I have no words to thank you for this.

There is no "decompiling" necessary. The entire code is available at the link you gave us. Here is the entirety of the readString() function you're using. You can also use the Java API to figure out what this will return if the user cancels or doesn't enter any text. The component is a JTextField, so its getText() function will return an empty String instead of null. I have no idea what GoToLoop is trying to help you learn; all I can see is that he's needlessly complicating your teacher's code. I wouldn't recommend using it as a learning tool, but it's really up to you.

Now I have figured out what my lecturer made for us, so now I am able to make an input box without importing my teacher's library. Thanks so much to all of you.
Not only did you help me solve my exercise, but you also helped me to figure out how he made the library and how it works. I am new to this program and I have never used Java. I have only used C++ for like 4 years, so for me, everything looks like C++ combined with Java...

Keep in mind that your teacher might want you to use his libraries.
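As a footnote, the fix this thread converged on — keep showing the dialog until readString returns a non-empty answer, treating both null (the box was closed) and "" (it was submitted empty) as "ask again" — can be sketched as follows. The OP's actual working code isn't shown above, so this is my own sketch, written in Python for brevity, with the dialog faked by a function argument:

```python
def read_until_answer(read_string, prompt="Enter your age"):
    # Loop until the "dialog" yields a non-empty answer. None models the
    # user cancelling/closing the box; "" models submitting it empty.
    while True:
        x = read_string(prompt)
        if x:  # rejects both None and ""
            return x

# Simulate: user closes the box, submits it empty, then types 42.
answers = iter([None, "", "42"])
print(read_until_answer(lambda prompt: next(answers)))  # 42
```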
https://forum.processing.org/two/discussion/13175/do-whille-is-not-working-how-it-suppose-to
Welcome to the IPython Blocks! This is a grid of colored blocks we can use to practice all sorts of programming concepts. There's a tutorial on IPython Blocks at, but you don't need to read that now; just follow along here.

We start by creating our grid and giving it a name. We'll call it... hmm... let's see... how about "grid"?

!pip install ipythonblocks

from ipythonblocks import BlockGrid
grid = BlockGrid(8, 8, fill=(123, 234, 123))
grid

We can refer to each cell in the grid by its coordinates from the upper-left corner. The first index is the number of steps down, the second is the number of steps to the right.

grid[0, 0]

In programming, colors are often referred to by an "RGB tuple". That's a series of three numbers, representing how much red, how much green, and how much blue, respectively. For the grid, those numbers go on a scale from 0 to 255. So pure red is (255, 0, 0), black is (0, 0, 0), white is (255, 255, 255), and so on. If we assign an "RGB tuple" to a cell from the grid, that cell takes on that color.

grid[0, 0] = (0, 0, 0)
grid[0, 2] = (255, 0, 0)
grid[0, 4] = (255, 255, 255)
grid[0, 6] = (0, 150, 150)
grid.show()

Let's not spend all day typing those tedious RGB tuples! This looks like a perfect place for a dictionary. Lucky us, we can get a dictionary with all sorts of colors defined from the ipythonblocks module, the same place we got the BlockGrid.

from ipythonblocks import colors
colors

Wow, looks like somebody used to work at a paint store! OK, let's use some of those in the next row down.

grid[1, 1] = colors['Teal']
grid[1, 2] = colors['Thistle']
grid[1, 3] = colors['Peru']
grid.show()

Now suppose we want to color a bunch of blocks. Let's use a loop so we don't have to write a line for every single one.

row_number = 3
for column_number in [0, 1, 2, 3, 4, 5, 6]:
    grid[row_number, column_number] = colors['Chocolate']
grid.show()

The grid defines a "height" and a "width"; these will be handy to work on cells all the way across it.
grid.width

row_number = 5
for column_number in range(grid.width):
    grid[row_number, column_number] = colors['Violet']
grid.show()

How about columns? And how about painting three columns at once? Let's use nested loops.

for column_number in [4, 5, 6]:
    for row_number in range(grid.height):
        grid[row_number, column_number] = colors['Crimson']
grid.show()

Our grid is looking cluttered. Let's define a function to start over by painting it all one color.

def one_color(target_grid, color):
    for row_number in range(target_grid.height):
        for column_number in range(target_grid.width):
            target_grid[row_number, column_number] = color

one_color(grid, colors['LightGreen'])
grid.show()

A couple of tricks will let our grid change over time. We'll need sleep from the time module, plus the clear_output function from IPython.

import time
from IPython.display import clear_output

for color in [colors['Red'], colors['Green'], colors['Blue'], colors['White'], colors['Purple']]:
    clear_output()
    one_color(grid, color)
    grid.show()
    time.sleep(1)

What if we used grid.show() like before, instead of clear_output? Not telling. Feel free to try it yourself.

OK, how about a checkerboard? We could paint the whole grid black, then paint yellow onto every second square.

one_color(grid, colors['Black'])
for row_number in range(grid.height):
    for column_number in range(grid.width):
        if is_even(column_number):
            grid[row_number, column_number] = colors['Yellow']
grid.show()

Icky, an error message! Because there's no such thing as a function is_even; I just made that up. But we can fix that!

def is_even(number):
    if number % 2 == 0:
        return True
    else:
        return False

Now go back up and re-run the earlier cell. And we get... hmm, good for bumblebees, bad for checkers. Let's make a modified version.

one_color(grid, colors['Black'])
for row_number in range(grid.height):
    for column_number in range(grid.width):
        if is_even(column_number + row_number):
            grid[row_number, column_number] = colors['Yellow']
grid.show()

How crazy can we get, using just what we've already learned? Let's make a diagonal color gradient, then pour some bleach on it.

base_color = [50, 50, 50]
for i in range(200):
    clear_output()
    for row_number in range(grid.height):
        for column_number in range(grid.width):
            grid[row_number, column_number] = (base_color[0],
                                               base_color[1] + row_number * 20,
                                               base_color[2] + column_number * 20)
    grid.show()
    base_color[0] += 1
    base_color[1] += 1
    base_color[2] += 1
    time.sleep(0.02)

Now it's time to try out some ideas of your own! Here are some suggestions:

Use a bigger grid if you need one. Remember at the very beginning when we created the grid?

grid = BlockGrid(8, 8, fill=(123, 234, 123))

You can create it again with some different numbers.

Here are some more little pieces of code that might be fun elements to work into your projects! Each time you execute this cell, you'll see a different, random shade.

from random import randint
random_color = (randint(0, 255), randint(0, 255), randint(0, 255))
one_color(grid, random_color)
print(random_color)
grid.show()

Can you make the color depend on the distance from the upper-left corner? Can you draw an arc, for instance?

for row_number in range(grid.height):
    for column_number in range(grid.width):
        distance_to_corner = (row_number**2 + column_number**2)**0.5
        # now what can you do with that number?
grid.show()

There's a file called ascii8x8.py in this directory that we can use to do ASCII-style art with our blocks.

from ascii8x8 import Font8x8
print(Font8x8['p'])

It's hard to tell, but that's a list of strings, and if those strings are stacked, they form a rough inverted "p".
But we've also provided a function that, using that same data, and given the (row_number, column_number) of a place in the grid, tells us whether that spot should be on (True) or off (False).

from ascii8x8 import screen

screen('p', 0, 0)

What good does that do? Well, here's an example of applying it to the grid:

for row_number in range(8):
    for column_number in range(8):
        if screen('p', row_number, column_number):
            grid[row_number, column_number] = colors['Black']
grid.show()

So, perhaps you could...
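As one concrete idea, here is a possible answer to the earlier distance_to_corner question, sketched in plain Python. In the notebook you would assign each tuple into grid[row_number, column_number] instead of storing it in a dict:

```python
# Shade each block by its distance from the upper-left corner of an 8x8 grid.
max_distance = (7 ** 2 + 7 ** 2) ** 0.5  # distance to the far corner

shades = {}
for row_number in range(8):
    for column_number in range(8):
        distance_to_corner = (row_number ** 2 + column_number ** 2) ** 0.5
        # Map the distance into 0..255: bright near the corner, dark far away.
        level = int(255 * (1 - distance_to_corner / max_distance))
        shades[row_number, column_number] = (level, level, level)

print(shades[0, 0], shades[7, 7])
```

The scaling here is just one choice; any mapping works as long as each channel stays within 0-255.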
http://nbviewer.ipython.org/github/catherinedevlin/mpwfw_exercises/blob/master/blocks.ipynb
Agenda

See also: IRC log

<noah> For the minutes:
<noah> To summarize my comments on the WSA issue. I don't think the way they and we coordinated was a good model for the future.
<noah> When the distinction between reference parameters and reference properties was eliminated, we missed a chance for the TAG to be alerted that there were still plans in some cases to use refParms for identification.
<noah> I think the practical implication was that the TAG eventually realized the true state of play later than was ideal. The good news is that we all eventually converged on a mutually acceptable answer, but I don't think the process that led us there was a model for future interactions.
<DanC_lap> Vincent, did you take seriously Roy's suggestion to take actions that don't fall under any other issue and put them in issues.xml under issue42? That seems particularly appropriate for NDW's action on self-describing web.
<DanC_lap> it appeals to me that "self-describing web" would turn into the title of a document that addresses xmlFunctions-34 and some other stuff
<timbl> Me too.
<DanC_lap> (where is dave? when do we expect him back?)
<DanC_lap> (cuz I think xmlVersioning-41 is in the same ball of wax.) (as is media types)

found at:

DO has 4 action items, HT has one relating to this topic; issue started 3 Nov 2003. Text proposed by NW and DO; diagram posted at:

Scribe's notes: on-going networking issues

<scribe> Scribe: Ed Rice

Henry: Norm and I have not put our heads together yet.
... Following the discussion we had on this topic at the last telecon regarding my email.
... I felt that was a useful discussion and will write that up at more length and bring that up with the group again.
... I'll work with Norm to reconcile what he's been drafting vs what I've been drafting.

Dan: I need this for my presentation.

Henry: You have the message I've sent to date.

V: I guess if there is no progress, we can't really go further today.

Dan: Norm's action continues.
V: yes, but we need a new date.

<DanC_lap> (HT, do you want a separate action?)

V: We have a plan for this issue; Henry will develop his email further and rationalize it with Norm.

<DanC_lap> (HT, I guess you're not on the net)

Dan: Is there any current communication?

Noah: I thought so.

V: no, the last time we discussed it at length was our last f2f, and that's when Norm got the issue.

<DanC_lap> ( ; IBM paper is in .pdf. sigh)
<timbl>
<DanC_lap> ACTION: TV to summarize history of DTD/namespace/mimetype version practice, including XHTML, SOAP, and XSLT [recorded in]
<DanC_lap> Ed, that action should be read to TV to be sure that's what he had in mind. perhaps not critical, but it would be good

<Zakim> DanC_lap, you wanted to note that the WebOnt WG decided "xml, and get the rest from the instance" wasn't enough
<Zakim> timbl, you wanted to suggest that the mime type needs to (a) hint that it is XML and (b) say what framework is in use out of CDF, RDF, and SOAP, It is NOT useful to try to give an
<Zakim> ht, you wanted to say what Noah was going to say

HT: I guess I've always felt perplexed why we ever introduced the foo+xml design pattern for mime types, because it seems to promise what it hasn't delivered.
... It's not clear to me why 3.. I have a strong temptation to say we should get rid of the 75 or so today and go to zero.

Tim: no.

HT: it seems the cleanest way of doing this would be a way to indicate the framework, but as Noah said it should be in the document itself.
... then you can look inside the document itself to find out what the framework is.
... Which is different than what I think you're saying.

<Zakim> noah, you wanted to finish what I was saying

Noah: I think what's confusing is in part that media types are serving two parts. One part is a set of instructions that describe what I can do with this data. The other thing is things that give us 'early warnings' of what we'll find in the document.
TV: the media type should be consistent, or else you won't know whom to trust.

<Zakim> DanC_lap, you wanted to note the relevant IETF forum on XML mime types and to say, if timbl hasn't already, that almost nobody looks inside a document to see whether it's SOAP or

<noah2> Actually, I would prefer to say that media-types serve two "roles".

Dan: almost nobody looks inside a document to see if they should do HTML or SOAP processing on it.

<noah2> 1) Just because something looks like an XML document doesn't mean it isn't text or something else that accidentally looks like XML. So, we use the media type to give you permission to process as XML.
<noah2> 2) The other thing we use media types for is to let us discuss the nature of the document before we look in it. Content negotiation is an example of this.

the majority of the SOAP implementations are of SOAP 1.1, which doesn't have a media type.

<Zakim> timbl, you wanted to clarify why you need a framework indicated in the mime type (visibility) and why you need svg+xml.
<noah2> Sometimes, in support of media type usage style #2, you include redundant information in the media type.
<timbl> There are various markets for XML here. There is a market where namespace mixing is not interesting to people, and so adding the namespace is a pain, and so it could be a default for the MIME type. There is a market for complex expandable systems like the frameworks.
<noah2> Dan: one thing people definitely look into SOAP documents to find out is whether it's a purchase order or someone's job application.

<DanC_lap> it was DO, not DanC, that said "the majority of soap..."

<noah2> That's not in the root element, but it's certainly the kind of thing you might expect in a media type.

Tim: There are different markets for XML.. 1) an invention team who are not really interested in creating namespaces; they're mostly using HTML tags. 2) Others just want to put some data in a file and they just want to use it as data.
Dave: so one of the ways I think would be really useful would be to cast people back to Roy's thesis on visibility. Does it go in the XML, does it go in the SOAP?

<timbl> That was but 1). 2) if folks who do want extensibility and they use a framework.

At what point does a certain amount of software have to know? And we do have some examples where this is needed and mandated.

VT: in some sense it's a protocol riding on a protocol.

Tim: a good thing that some of us can do is to check and make sure we have the consistency.

HT: Dan, I've been waiting for you to throw in your remark from email that profile may be the right way to do all this stuff.
... to stipulate that there may be some generalized types of content. It seems to me that you suggested that the profile attribute would do much more work in the future than in the past.

Dan: I thought I was writing about fragments.
... are we on 41?

V: no, we're still on self-describing web.

Tim: so how does this TAG work?
... so I proposed that we say that it's wise to point to which framework we're using.
... I'm proposing a form of a finding as a tree structure to figure out what a document means.
... I wonder if I could get a consensus from the current TAG.

Dave: I'm really looking to see why soap+xml is better/worse than .. why did we just pick the two and why is it not working.

<DanC_lap> (is the "it" that timbl is referring to recorded? I wonder if it would be useful to slow down. On the other hand, I'm content to move to a separate agendum, since we have an action on TV to start writing the history/extant-practice of this stuff)
<DanC_lap> TV: one pragmatic reason the [SOAP folks?] might have done inside-xml-doc rather than media type is that to get a media type, you have to go ask somebody else

Noah: We're forgetting that most XML is used between two appliances where both ends know the formats and types.
<DanC_lap> DO: I agree the decentralized aspect of namespaces is a big part of it

Henry: I sparked on VT saying that content negotiation and indexing need to live happily together.

VT: when you pull up the URL with a default language page, it then pulls up the page with all the other languages also. This is OK, because the content is semantically the same.

HT: That seems slightly sub-optimal.

VT: Yes, but it's better than telling the crawlers to crawl with a specific user/role.

HT: You have to do the same work twice.

VT: well, the system as a whole is optimal.
... on the device level, the device-to-device is not as complex.

Dan: So we were going to move on, but then Tim you thought we had consensus, but then we didn't have enough consensus.

Tim: The proposal on MIME types is that if it's HTML it's just HTML; if it's RDF or CDF then the MIME type is not used.

Noah: I think it's a plausible position, but no more so than the other positions. My feeling is that it's a tough call.

TimBl: if you want to pursue this, but I'm prepared to talk about a hypothetical process where SOAP is used in an envelope and then you want to be able to stick on this is a .png or an image or something inside. When there is a single thing inside a SOAP, I can imagine that's a useful thing to open the file because you'll want to retrieve it, but I'm worried about trying to combine what people are doing with SOAP with what people are trying to do with GET.

Noah: What I heard you say is that we should architecturally try and rule this out. And I just said 'well, ruling that out for all time just didn't seem prudent'

Henry: I just don't see that we have that option; there isn't a way to control this.

Noah: the reason it's limited to x is because we say it.

VT: but we have redundancy. There is a finite number of strings someone has come up with that we call MIME types.

Noah: The reason that you identify CDF as a framework is ..
Timbl: CDF covers a lot of user interfaces, RDF covers a lot of data.

Noah: I felt like I understood the proposal, but I didn't say I agreed, so an email from you describing your position would help, I think.

<scribe> ACTION: TIMBL to write a short email to make his point so we capture this for the future. [recorded in]

Noah: I'm hearing Tim say there is a deep difference between frameworks and non-frameworks, but now I'm hearing that's not the case and it's really a continuum.

Dave: a language designer has to specify the level of ignoring the tags you don't use.

Question from Ann (audience): The XRI proposal from OASIS, what are we going to do? Have you ever heard back from that TC?

Dan: This is not a short question...

<process discussion around how OASIS/W3C work together> informative, but process issues which are outside of the TAG charter.

<noah> Final representations of "Rule of Least Power" are now world-readable at and). We are ready for Vincent to link this as an approved TAG finding.

This is scribe.perl Revision: 1.127 of Date: 2005/08/16 15:12:03
Check for newer version at
Guessing input format: RRSAgent_Text_Format (score 1.00)
Succeeded: s/usability/visibility/
No ScribeNick specified. Guessing ScribeNick: Ed
Found Scribe: Ed Rice
WARNING: No "Present: ... " found!
Possibly Present: DO Dan DanC_lap Dave HTML Henry MIME Noah RDF SOAP So TV There Tim Timbl VT Vincent already are dorchard etc have ht in-band indication is joined let media namespace-framework namespaces noah2 of parames per persue pushing require separate signal such tagmem that to type types up-hill used water what with: 27 Feb 2006
Guessing minutes URL:
People with action items: timbl tv
WARNING: Input appears to use implicit continuation lines. You may need the "-implicitContinuations" option.
[End of scribe.perl diagnostic output]
http://www.w3.org/2006/02/27-tagmem-minutes.html
Release Notes

PyFilesystem has reached a point where the interface is relatively stable. There were some backwards incompatibilities introduced with version 0.5.0, due to Python 3 support.

Changes from 0.4.0

Python 3.X support was added. The interface has remained much the same, but the open method now works like Python 3's builtin, which handles text encoding more elegantly, i.e. if you open a file in text mode, you get a stream that reads or writes unicode rather than binary strings. The new signature of the open method (and safeopen) is as follows:

def open(self, path, mode='r', buffering=-1, encoding=None,
         errors=None, newline=None, line_buffering=False, **kwargs):

In order to keep the same signature across both Python 2 and 3, PyFilesystem uses the io module from the standard library. Unfortunately this is only available from Python 2.6 onwards, so Python 2.5 support has been dropped. If you need Python 2.5 support, consider sticking to PyFilesystem 0.4.0.

By default the new open method now returns a unicode text stream, whereas 0.4.0 returned a binary file-like object. If you have code that runs on 0.4.0, you will probably want to either modify your code to work with unicode or explicitly open files in binary mode. The latter is as simple as changing the mode from "r" to "rb" (or "w" to "wb"), but if you were working with unicode, the new text streams will likely save you a few lines of code.

The setcontents and getcontents methods have also grown a few parameters in order to work with text files, so you won't require an extra encode / decode step for text files.
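The practical difference is easiest to see with the standard library's io module that PyFilesystem 0.5.0 builds on (a stdlib-only sketch, not PyFilesystem's own API): a text stream accepts unicode and performs the encoding itself, while the underlying raw stream only ever sees bytes.

```python
import io

raw = io.BytesIO()  # stands in for a binary file

# Text mode: the wrapper takes unicode and handles the UTF-8 encoding for you.
text_stream = io.TextIOWrapper(raw, encoding='utf-8')
text_stream.write(u'caf\xe9')
text_stream.flush()

encoded = raw.getvalue()
print(type(encoded), encoded)
```

Opening in binary mode skips the wrapper entirely, which is why 0.4.0-era code that expects bytes only needs the "rb"/"wb" change.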
http://pyfilesystem.readthedocs.io/en/latest/releasenotes.html
Communicating Process Improvement Benefits

A large part of my job is process improvement—working with various departments and groups in our organization (K-12 education) to see where technology would provide cost savings and "better ways" of doing things. I really do enjoy it and like to see people empowered with tools to do their job better.

We're slowly running into a culture clash—those who are holding on desperately to the paper and those who are forcing change. Unfortunately, those "forcing change" don't want to force it—they want our department (or anyone else) to be the "bad guys." That's cool—I can be the bad guy. Unfortunately, it's difficult to sell a different process to individuals when those at the highest levels (their bosses) can't sell it.

Here's an example we're running into: For years, thousands of staff members have used paper-and-pencil bubble sheets to do grading. Paper, handwritten grade cards went home to students (NCR paper copies each grading cycle). These copies were lost, destroyed, and unreadable by the end of the year. In addition, the paper, labels, and copies cost into the tens of thousands per year. It wasn't a good situation (considering it's 2008) for parents or teachers.

A new, online grading system was requested by leadership; throughout its development both the customer and a focus group of the customer's customers (those thousands of staff members) tested, worked hand-in-hand, and approved the application, its layout, and performance metrics. However, after rolling it out, the complaints came rolling in. Filling a report out takes more time than "ticking off bubbles on paper" per student. Latency in the network, across VPN for home users, and simply using the application puts it at about 35 seconds per student (1–2 seconds to load, 30 seconds for the teacher to mark their grades using the drop downs, 2–3 seconds to save and return validation).
The server side runs well—the app runs fast, but we can't push it down the wire any faster. There's latency in there that developers can't attack—especially in web applications. When we try to reason and explain the technology, we're countered with "well, in 2–3 seconds, I can put 5–10 check marks on a piece of paper—that's just wasting my time." And the positive benefits of the system? So far, they're not to be seen—those pushing for it can't (or won't) stand up for it.

At this point, should it be the developers who advocate the application if the customers (the end users are the customers of our customer) don't want to deal with the "pressure" of implementing change? If the qualifiers for the process improvement are met, but adoption wavers because of lack of support—whose call is that?

Just curious what everyone out there thinks. At this point, I'm in coaching mode with our customers to work with them on how to sell change and get buy-in from their customers…

Thank you…

Last night was my final class session, and final marks, for my Masters degree. It's been a long and tiring twenty-two months—hours and hours of research, hundreds of pages of reports, and hours of presentations have kept most of us bogged down.

For my friends, who have tolerated my absence, my fatigue, my more-than-usual crankiness over the past two years and prayed for it to be over. 🙂

For my family, who have supported me the entire way and ensured that I did NOT roll over and get another 10 minutes of sleep.

For my professors, who have tolerated my insistent questioning (some better than others) and challenging, as I know I'm probably one of the most annoying students you'll ever face.

Finally, for my peers, who have been great to work with this entire time. Especially my cohort group—the summation of our skills helped make our work not only good, but excellent.
A graduate degree is more about what you make of it for yourself, and you've helped me strive to always learn the most I could from each and every challenge. Amidst it all, we found time to laugh and enjoy ourselves—even if it was 1 AM in the morning with the deadlines just hours away.

Thank you.

BOO – The Winter Project

My winter projects usually revolve around learning another programming language, brushing up on some non-programming skills (architecture, graphic design, etc.), or learning a human language (which I'm partially doing by reteaching and catching back up on my Japanese). I wanted to learn another programming language and have heard a lot of interesting discussion in the community about BOO.

Resources:
- BOO Language Guide
- BOO Downloads
- SharpDevelop 2.2 (you'll need this) < Comes pre-packaged with BOO support

I have a minor project coming up that I'm going to use as a guinea pig for BOO—really get in and force myself to learn something rather than dinking with code snippets.

So, what's the code look like? Here's a quick example from something I was playing with while in a meeting today. It's a quick example of a console application that creates a new generic list, adds a few items, and spits them out. From the using statements to the final squiggly bracket "}", it's 22 lines.

In C#:

using System;
using System.Collections.Generic;

namespace BooComparison
{
    class Program
    {
        static void Main(string[] args)
        {
            List<string> names = new List<string>();
            names.Add("This");
            names.Add("is");
            names.Add("a");
            names.Add("test!");

            foreach (string name in names)
                Console.WriteLine(name);

            Console.ReadKey();
        }
    }
}

In Boo, it's a bit shorter at 15 lines since the Main method and class are not necessary. You'll notice the Python and slight VB.NET references and, for me, breaking the habit of the closing semi-colon.
namespace BooTutorial

import System
import System.Collections.Generic

names = List[of string]()
names.Add("This")
names.Add("is")
names.Add("a")
names.Add("test!")

for name as string in names:
    Console.WriteLine(name)

Console.ReadKey()

You'll notice that the for each loop is a bit different and, something that threw me when I first started messing with Boo, is that it's position-sensitive. The Console.WriteLine is part of the for each statement BECAUSE it's indented (there are no { } in Boo that I've found).

Overall, it looks interesting and should be a challenge to learn. Will I throw my C# away and start scaring our customers with Boo? Ehh, I doubt it; however, it'll be an interesting tool to add to the collection.

The most annoying downfall right now? There isn't a plugin for Visual Studio, so I'm using the open source SharpDevelop tool. It's not a bad tool (actually, quite awesome for being open source and portable), but things like R# and such are missing and make me sad—I'm used to those keyboard shortcuts.
https://tiredblogger.wordpress.com/category/education/
Learn, through a practical tutorial, how to use Vuex to manage the state of your Vue.js apps the right way. TL;DR: As Single-Page Applications (SPAs) are fast becoming the de-facto way of developing frontend applications, one of the major issues that developers are facing is how they manage state between all the components spread around their applications. Vuejs apps are not an exception to this. To solve this problem, you will learn, through a practical guide, how to use Vuex to manage the state of your Vue.js apps with ease. Check out this article if you need the reference implementation of the app discussed here. "Looking forward to learn Vuex to manage the state of your @vuejs apps? This is the place." TWEET THIS Prerequisites Before you begin going through this article, there are two technologies that you should already be familiar with: JavaScript, and Vue.js. You might be able to follow along without having ever used Vue.js before. However, if you don't know JavaScript, you will have a hard time (most likely). Besides this knowledge, you will need Node.js and NPM installed in your machine. Please, check out this resource to install these tools if you don't have them already (by the way, NPM comes bundled into Node.js). Vuex in Action In this section, you learn, through a practical guide, how to use Vuex to build a simple Bitcoin dashboard simulation that shows the current price of Bitcoin and updates it as the time passes. Besides the price, your app will also show the percentage increase and the price difference in real-time. Lastly, your app will show the history of the previous four prices for reference. To make things organized, you will break your app into four components: - The main component: this one already exists and it is called App. - A component that shows the current Bitcoin price and how it differs from the previous one (this one is the green box in the image below). 
- A component that shows the percentage change between the current price and the last one, and the time when the update occurred (this one is the blue box in the image below). - A component that shows the Bitcoin pricing history (this one is the table below the blue and the green boxes). In the end, you will get an app that looks like this: Note: You won't use real Bitcoin prices here. You will code some fake sample and then generate random values based on this sample. Scaffolding a new Vue.js App To scaffold your new Vue.js app with ease, you will use the Vue CLI tool. As such, if you don't have this CLI already installed in your development environment, open a terminal and run the following command: npm install -g @vue/cli After a successful installation, navigate to the directory in which you want to keep your project and run the command below to create a new Vue.js app: vue create bitcoin-dashboard Running this command will prompt you to select a preset and, if you have Yarn installed, will ask you if you prefer to use it instead of NPM. Feel free to choose the answers that you prefer, but this article will use default for the preset and NPM as the dependency management tool. After making your choices, the Vue CLI tool will scaffold your app and install the basic dependencies. When done, you can move into your new project and run it to see if everything is working properly: # move into your new app cd bitcoin-dashboard # run it with NPM npm run serve Now, if you open in your browser, you will see the following screen: Then, to shut down your server, you can hit the following keys: Ctrl + C. Installing Vuex and Other Dependencies Alongside with the basic dependencies in your Vue.js app, you will also need to install three other dependencies: Bootstrap, Vuex itself, and Font Awesome (the free version). 
To install them, make sure you are on the project root and run the command below: npm install --save \ bootstrap vuex @fortawesome/fontawesome-free Creating the Vuex Store To begin setting up your Vuex store, you will create a new directory called store inside src. You will use this directory to hold all files related to your Vuex store. Then, as mentioned before, you will start your Bitcoin app with a fake sample. So, for that, you will need to create a file called prices.js inside the store directory and add the following code to it: const now = Date.now(); const twoSeconds = 2000; const prices = [ { amount: 7322.89, timestamp: now }, { amount: 6322.02, timestamp: now - twoSeconds, }, { amount: 5222.64, timestamp: now - (twoSeconds * 2), }, { amount: 5242.61, timestamp: now - (twoSeconds * 3), } ]; export default prices; As you can see, in the file that you just created, you are defining and exporting an array of historical Bitcoin prices and the time each one was created. Note that you extract the exact date ( Date.now()) that your app starts running to define the time that the last price was created. Also, you create a constant called twoSeconds and use it to calculate the time the other fake prices were created. In other words, you are creating a module that defines an array with four prices: 7322.89: the most recent price that gets the date when the app starts running; 6322.02: the second most recent price that gets the date when the app starts running minus two seconds; 5222.64: the third most recent price that gets the date when the app starts running minus four seconds; - and 5242.61: the oldest price that gets the date when the app starts running minus six seconds. With this file in place, the next thing you will need to do is to create a file where you define the Vuex actions available in your application. 
To do so, create a file called actions.js inside the store directory and add the following code to it: const actions = { UPDATE_PRICE: 'UPDATE_PRICE' }; export default actions; This file only contains a single action because that is the only thing you will need in your application. Having your actions in a separate file this way helps organize your code better and prevents duplicating string names across your app. Now, you can finally create your store. For that, create a file called index.js inside src and add the following code into it: import Vue from 'vue'; import Vuex from 'vuex'; import prices from './prices'; // configure Vuex for modules Vue.use(Vuex); const store = new Vuex.Store({ state: { prices: prices }, getters: { currentPrice: state => { return state.prices[0]; }, previousPrice: state => { return state.prices[1]; }, percentageIncrease: (state, getters) => { const currentAmount = getters.currentPrice.amount; const previousAmount = getters.previousPrice.amount; return ( ((currentAmount - previousAmount) / previousAmount) * 100 ).toFixed(2); }, difference: (state, getters) => { const currentAmount = getters.currentPrice.amount; const previousAmount = getters.previousPrice.amount; return (currentAmount - previousAmount).toFixed(2); } }, mutations: { UPDATE_PRICE(state, newPricing) { // remove the oldest price state.prices.pop(); // add the new price state.prices = [newPricing, ...state.prices]; } } }); export default store; export {default as actions} from './actions'; As a lot going on in this file, you will be better off learning it gradually. First, you are importing all modules required: import Vue from 'vue'; import Vuex from 'vuex'; import prices from './prices'; After that, you are configuring your Vue.js to use Vuex: Vue.use(Vuex); Then, you are creating a new instance of a Vuex store ( new Vuex.Store) with the configuration object where all your settings are declared. 
In the configuration object passed to Vuex, the first thing you are defining is the default state object with your fake prices: state: { prices: prices; } Then, you are creating a getter to compute and return the current Bitcoin price ( currentPrice): currentPrice: state => { return state.prices[0]; }, After that, you are creating another getter for the previous bitcoin price ( previousPrice): previousPrice: state => { return state.prices[1]; }, The, you are creating the third getter for the percentage difference between the current price and the previous one ( percentageIncrease): percentageIncrease: (state, getters) => { const currentAmount = getters.currentPrice.amount; const previousAmount = getters.previousPrice.amount; return ( ((currentAmount - previousAmount) / previousAmount) * 100 ).toFixed(2); }, And, finally, you are creating the last getter for the difference between the current and previous price ( difference): difference: (state, getters) => { const currentAmount = getters.currentPrice.amount; const previousAmount = getters.previousPrice.amount; return (currentAmount - previousAmount).toFixed(2); } Next, you are defining the mutations of your Vue.js app. In this application, you only need one mutation which updates the price of Bitcoin: mutations: { UPDATE_PRICE(state, newPricing) { // remove the oldest price state.prices.pop(); // add the new price state.prices = [newPricing, ...state.prices]; } } As you can see, the UPDATE_PRICE mutation receives the new price data in its payload, removes the oldest price from the prices array, and inserts a new one at the beginning of the array. Lastly, you are exporting your store and also your actions so other component components can use it: export default store; export { default as actions } from './actions'; And that's it! Your Vuex store is fully complete. Defining your Vue.js Components Now that your store is all set and ready to go, you will start defining your application components. 
As mentioned before, your app will contain three components (besides the main one that is called App). One to show the current Bitcoin price and the difference between the previous and current price. One to show the percentage difference between the current and previous Bitcoin prices and that also display when the last update occurred. One component to show the details of a Bitcoin price. The first one that you will define is the one that will display the current Bitcoin price and the difference between the previous and current price. To define this component, create a file called CoinPrice.vue inside the src/components directory and add the following code to it: <template> <div id="counter" class="bg-success text-white"> <div class="row p-3"> <div class="col-2"> <i class="fas fa-dollar-sign fa-4x"></i> </div> <div class="col-10 text-right"> <h2>{{format(price.amount)}}</h2> ${{difference}} </div> </div> </div> </template> <script> export default { name: 'CoinPrice', computed: { price() { return this.$store.getters.currentPrice; }, difference() { return this.$store.getters.difference; } }, methods: { format: price => { return price.toFixed(2); } } }; </script> Note: You can also remove the HelloWorld.vuefile that Vue.js created by default for you. You won't use the component defined in this file. As you can see in the <script/> section, the file you just created defines a component called CoinPrice. This component uses two computed properties that retrieve state from the Vuex store: price, which gets the value from $store.getters.currentPrice; and difference, which gets the value from $store.getters.difference. Besides these properties, the file also defines a method called format to return a cleaner price for display in the component template. 
Now, in relation to the <template/> area, this file is defining a main <div/> element (with the bg-success and text-white Bootstrap classes to make it look better) with a single child that is a Bootstrap row and then it is splitting the content of this element into two sections: col-2: a section that shows a big ( fa-4x) Dollar sign icon ( fa-dollar-sign); col-10: a section that will show the current Bitcoin price ( amount) and the differencebetween the current the last price. After defining this component, you will create a similar one that will show the percentage change between the current and the last price and when this change occurred. To define this one, create a file called PercentChange.vue inside the src/components directory and add the following code to it: <template> <div id="counter" class="bg-primary text-white"> <div class="row p-3"> <div class="col-2"><i class="fas fa-percent fa-4x"></i></div> <div class="col-10 text-right"> <h2> {{percentageIncrease}} <small>%</small> </h2> {{formatTimestamp(currentPrice.timestamp)}} </div> </div> </div> </template> <script> export default { name: 'PercentChange', computed: { percentageIncrease() { return this.$store.getters.percentageIncrease; }, currentPrice() { return this.$store.getters.currentPrice; } }, methods: { formatTimestamp: timestamp => { const currentDate = new Date(timestamp); return currentDate.toString().substring(16, 24); } } }; </script> The component you just defined is quite similar to the previous one. The <template/> structure is basically the same: one big icon (a fa-percent icon in this case) and two values (the percentageIncrease and the timestamp now). Also, the <script/> section is similar, the difference is that now it is showing other relevant data. Now, the last component you will define is the one that will show some historical Bitcoin price. 
To define this one, create a file called `PriceItem.vue` inside the `src/components` directory and add the following code to it:

```html
<template>
  <li class="list-group-item">
    <div class="row">
      <div class="col-8">${{formatPrice(price.amount)}}</div>
      <div class="col-4 text-right">{{formatTimestamp(price.timestamp)}}</div>
    </div>
  </li>
</template>

<script>
export default {
  name: 'PriceItem',
  props: {
    price: Object
  },
  methods: {
    formatTimestamp: timestamp => {
      const date = new Date(timestamp);
      return date.toString().substring(16, 24);
    },
    formatPrice: amount => {
      return amount.toFixed(2);
    }
  }
};
</script>
```

This component is even simpler than the other two. All it does is define a `<li/>` element with the formatted price of some historical value (it can be the current price or an older one) and the time it was created. You will see how to use this component in the next section.

Wrapping Up

You are almost done. Now it is just a matter of opening the `App.vue` file and replacing its code with this:

```html
<template>
  <div id="app" class="container">
    <div class="row">
      <div class="col-12">
        <h1>Dashboard</h1>
      </div>
    </div>
    <div class="row">
      <div class="col-sm-6">
        <CoinPrice/>
      </div>
      <div class="col-sm-6">
        <PercentChange/>
      </div>
    </div>
    <div class="row mt-3">
      <div class="col-sm-12">
        <div class="card">
          <div class="card-header">
            Bitcoin Pricing History
          </div>
          <ul class="list-group list-group-flush">
            <PriceItem v-for="price in prices" v-bind:
          </ul>
        </div>
      </div>
    </div>
  </div>
</template>

<script>
//Components
import CoinPrice from './components/CoinPrice.vue';
import PercentChange from './components/PercentChange.vue';
import PriceItem from './components/PriceItem.vue';

//Store
import store, {actions} from './store';

export default {
  name: 'app',
  components: {
    CoinPrice,
    PercentChange,
    PriceItem
  },
  store,
  computed: {
    prices() {
      return store.state.prices;
    }
  },
  created: function () {
    setInterval(this.triggerNewPrice, 3000);
  },
  methods: {
    triggerNewPrice: () => {
      const diff = (Math.random() - Math.random()) * 10;
      const randomNewPrice = store.getters.currentPrice.amount + diff;
      store.commit(actions.UPDATE_PRICE, {
        amount: randomNewPrice,
        timestamp: Date.now()
      });
    }
  }
};
</script>

<style>
@import "../node_modules/bootstrap/dist/css/bootstrap.min.css";
@import "../node_modules/@fortawesome/fontawesome-free/css/all.min.css";

#app {
  font-family: "Avenir", Helvetica, Arial, sans-serif;
  -webkit-font-smoothing: antialiased;
  -moz-osx-font-smoothing: grayscale;
  color: #2c3e50;
  margin-top: 60px;
}
</style>
```

The new version of the `App` component, although lengthy, is quite simple. The `<template/>` section starts by defining a title (`<h1>Dashboard</h1>`). After that, it defines a row where it displays two components: `<CoinPrice/>` and `<PercentChange/>`. Lastly, it defines a Bootstrap card (`<div class="card">`) where it shows the Bitcoin pricing history. To create this history, the component iterates over each price (`v-for="price in prices"`) to create multiple `PriceItem` elements.

Then, the `<script/>` section starts by importing all the components you defined in the previous section alongside your Vuex store and the actions you created for it. After that, you make your components available to your app by adding them to the `components` property. You also make your store available by adding it to the `App` component definition. By importing your store this way, you make it automatically available to all child components of your `App` component.

After that, three things occur:

- You add a `computed` property called `prices` to return all the prices available in your Vuex store.
- You add a hook into the `created` event of the lifecycle of your `App` component to trigger a repetitive task that runs every three seconds. This task starts the creation of new random prices: `setInterval(this.triggerNewPrice, 3000);`.
- You define `triggerNewPrice` inside the `methods` property of your `App` component to make the function available to it.
This function simply generates a random new price and calls the store's `commit` method with the action that mutates the price (`UPDATE_PRICE`), passing the new random price as the payload.

Lastly, in the `<style/>` section, you make your app import the CSS files of Bootstrap and Font Awesome. You also add a few custom styles for aesthetics.

Great! With these changes, you are done with your new Vuex app. Time to test it. To run the application, issue the following command in a terminal (just make sure you are in your project root):

```bash
# from the project root
npm run serve
```

Then, open up your browser and point it to the address the command prints (by default, `http://localhost:8080`). There, you should see the app running successfully and the prices updating every three seconds. The percentage difference and price difference should also update. Cool, isn't it?

Conclusion

In this article, you learned, through a practical guide, how easy it is to use Vuex to manage the state of Vue.js apps. What is cool about Vuex is that, just like Vue.js, it offers a very clean, easy-to-use, and declarative API for managing state in your applications. Its simplicity does not take anything away from its effectiveness in managing state, regardless of how complex the application is.

What is your opinion about Vuex? Are you planning on using it in your next production-ready app? Besides that, what is your opinion about the article itself? Let us know!
https://auth0.com/blog/state-management-with-vuex-a-practical-tutorial/
Creating an MCI MIDI Class

This article was contributed by Elmue.

Environment: pure C++. Runs on Win 95/98/ME/NT/2000/XP.

Introduction

cSound is a really tiny and very easy to use C++ class to play *.WAV, *.MID, and *.RMI files. You can add it to your project without changes. To play a Wave, MIDI, or Riff-MIDI file, you need only one function call! It will play MIDI via the MIDI Mapper. Which MIDI device is used by the mapper depends on the settings in Control Panel --> MultiMedia. Try the different devices; they sound extremely different! I added some specific MIDI files to the download to check.

The main demo project is written in MFC, but the cSound class is completely free of MFC.

MIDI Without DirectX

If you search the Internet for source code to play MIDI, you will find a lot that requires DirectX to be installed on the user's computer.

Advantages of Using DirectX to Play MIDI

- If you select the DirectX Microsoft Synthesizer (default device), multiple applications can play MIDI at once.
- If you have a very cheap sound card, the Microsoft Synthesizer will sound better than the driver of your sound card.

Disadvantages of Using DirectX to Play MIDI

- Every user of your software needs to have DirectX installed. On Windows 95 and NT, the user will always have to update DirectX. Users of Windows 98 and 2000 will also need to update DirectX if your application requires DirectX 7 or 8. The download of DirectX is unacceptable for modem users (> 12 megabytes).
- The documentation of DirectX in the MSDN is poor, poor, poor! And if you want to target an older DirectX version (such as 5 or 6) to let users of Windows 98 and 2000 run your application without updating DirectX, you will find nothing about that in the current MSDN: Microsoft completely removed the older documentation!
- When your application initializes DirectX, a dozen DLLs are loaded; they consume about 4 megabytes of RAM. On a Pentium 100, this loading of DLLs takes three seconds.
- After playing a MIDI file, DirectX does NOT automatically free the used MIDI device. This means that other applications (such as WinAmp) cannot access it. Your software has to take care to remove the port after playing the file. The problem is that the sound is played asynchronously and you don't know when DirectX will finish playing. It is very complicated to find out the length of a MIDI sound in advance because the tempo can change multiple times in a MIDI file. So you could check with a timer at certain intervals whether the sound is still playing (IDirectMusicPerformance::IsPlaying()) and then remove the port, but this is awkward!
- If you have a middle-class or a good sound card (such as a Sound Blaster Live), you will find that the Microsoft Synthesizer (default device) sounds awful! You have to add extra code (IDirectMusic::EnumPort()), a combo box, code to store the user settings, and so forth, to allow the user to choose a MIDI device that sounds better. My cSound class does not need that because it uses the device that the user has selected in the Control Panel.

Using the Code

Calling cSound cannot be easier.
It's only one function call:

```cpp
cSound::PlaySoundFile(Path);
```

The cMIDIDemoDlg.h file:

```cpp
private:
    cSound i_Sound;
```

The cMIDIDemoDlg.cpp file:

```cpp
void CMIDIDemoDlg::PlayMIDIOrWav(char *p_s8PathToSoundFile)
{
    char  s8_Buf[300];
    DWORD u32_Err;

    if (u32_Err = i_Sound.PlaySoundFile(p_s8PathToSoundFile))
    {
        // errors defined in cSound.h
        if (u32_Err == ERR_INVALID_FILETYPE)
            strcpy(s8_Buf, "This filetype is not supported!");
        else if (u32_Err == ERR_PLAY_WAV)
            strcpy(s8_Buf, "Windows could not play the Wav file!");
        else if (u32_Err == MCIERR_SEQ_NOMIDIPRESENT)
            strcpy(s8_Buf, "There is no MIDI device installed or it is used by another application!");
        else
        {
            // translate errors from WinError.h
            if (!FormatMessage(FORMAT_MESSAGE_FROM_SYSTEM, 0, u32_Err, 0, s8_Buf, sizeof(s8_Buf), 0))
            {
                // translate errors from MMsystem.h
                if (!mciGetErrorString(u32_Err, s8_Buf, sizeof(s8_Buf)))
                {
                    sprintf(s8_Buf, "Error %d", u32_Err);
                }
            }
        }

        MessageBox(s8_Buf, "Error", MB_ICONSTOP);
    }
}
```

The cSound Class

The cSound.h file:

```cpp
#include "Mmsystem.h"

#define ERR_INVALID_FILETYPE 50123
#define ERR_PLAY_WAV         50124

class cSound
{
public:
    cSound();
    virtual ~cSound();
    DWORD PlaySoundFile(char *p_s8File);
    void  StopSoundFile();
};
```

The cSound.cpp file:

```cpp
/****************************************
 Plays a Wav or (Riff) MIDI file
****************************************/
DWORD cSound::PlaySoundFile(char *p_s8File)
{
    // It is important to check whether the file exists!
    // On Windows NT, PlaySound() returns TRUE even if the file
    // does not exist! Then PlaySound() makes the PC speaker beep!!!
    // mciSendString("open...") also gives an absolutely stupid
    // error message if the file does not exist!
    DWORD u32_Attr = ::GetFileAttributes(p_s8File);
    if (u32_Attr == 0xFFFFFFFF || (u32_Attr & FILE_ATTRIBUTE_DIRECTORY))
        return ERROR_FILE_NOT_FOUND;

    // Get file extension
    char *p_s8Ext = strrchr(p_s8File, '.');
    if (!p_s8Ext)
        return ERR_INVALID_FILETYPE;

    if (stricmp(p_s8Ext, ".wav") == 0)
    {
        StopSoundFile();

        // PlaySound() is very primitive: no error code available
        if (!PlaySound(p_s8File, 0, SND_FILENAME | SND_ASYNC))
            return ERR_PLAY_WAV;

        return 0;
    }

    DWORD u32_Err;
    if (!stricmp(p_s8Ext, ".mid")  ||
        !stricmp(p_s8Ext, ".midi") ||
        !stricmp(p_s8Ext, ".rmi"))
    {
        StopSoundFile();

        static char s8_LastFile[MAX_PATH] = "";

        // The mciSendString("open...") command is slow
        // (on Windows NT, 2000 and XP),
        // so we call it only if necessary
        if (strcmp(s8_LastFile, p_s8File) != 0)
        {
            strcpy(s8_LastFile, p_s8File);
            mciSendString("close all", 0, 0, 0);

            char s8_Buf[300];
            sprintf(s8_Buf, "open \"%s\" type sequencer alias MIDIDemo", p_s8File);

            if (u32_Err = mciSendString(s8_Buf, 0, 0, 0))
                return u32_Err;
        }

        if (u32_Err = mciSendString("play MIDIDemo from 0", 0, 0, 0))
        {
            // replace stupid error messages
            if (u32_Err == 2)
                u32_Err = MCIERR_SEQ_NOMIDIPRESENT;
            return u32_Err;
        }

        return 0;
    }

    return ERR_INVALID_FILETYPE;
}

/**************************************************
 Stops the currently playing Wav and MIDI
**************************************************/
void cSound::StopSoundFile()
{
    PlaySound(0, 0, SND_PURGE);                // Stop Wav
    mciSendString("stop MIDIDemo", 0, 0, 0);   // Stop MIDI
}
```

Known Bugs in Windows NT, 2000, and XP

- On Windows NT, 2000, and XP, the first note of a MIDI song is omitted if the song does not begin with a rest of at least a quarter note's duration! (The MIDI sample files added to the download all begin with a rest.)
- On Windows NT, 2000, and XP, the mciSendString("open...") command is extremely slow.

Another MIDI Player Using the MMSystem Interface

In the MSDN, you find sample code that also plays MIDI files without DirectX (search for "MIDIPlyr").
It uses the midiOutxxx() and midiStreamxxx() interface of WinMM.dll. (This interface is also used by WinAmp.) The bugs in Windows 2000 and XP described above do not affect MIDIPlyr. It works perfectly on all platforms. But this sample code is EXTREMELY complex (and written in pure C).

MIDI Class Using DirectX 8

If you still want a DirectX MIDI player that also supports 3D sound, special effects, and so forth, download cMIDIMusic (C++). (It requires DirectX 8.)

Downloads

- Download demo project - 22 Kb
- Download source - 20 Kb
http://www.codeguru.com/cpp/g-m/multimedia/audio/article.php/c4715/Creating-an-MCI-MIDI-Class.htm
We are pleased to release Python-Markdown 2.3, which adds one new extension, removes a few old (obsolete) extensions, and now runs on both Python 2 and Python 3 without running the 2to3 conversion tool. See the list of changes below for details.

Python-Markdown supports Python versions 2.6, 2.7, 3.1, 3.2, and 3.3. Support has been dropped for Python 2.5. No guarantees are made that the library will work in any version of Python lower than 2.6. As all supported Python versions include the ElementTree library, Python-Markdown will no longer try to import a third-party installation of ElementTree.

All classes are now "new-style" classes. In other words, all classes subclass from `object`. While this is not likely to affect most users, extension authors may need to make a few minor adjustments to their code.

`safe_mode` has been further restricted. Markdown-formatted links must be of a known white-listed scheme when in `safe_mode`, or the URL is discarded. The white-listed schemes are: HTTP, HTTPS, FTP, FTPS, MAILTO, and news. Schemeless URLs are also permitted, but are checked in other ways, as they have been for some time.

The ids assigned to footnotes now contain a dash (`-`) rather than a colon (`:`) when `output_format` is set to `"html5"` or `"xhtml5"`. If you reference those ids in your JavaScript or CSS and use the HTML5 output, you will need to update your code accordingly. No changes are necessary if you are outputting XHTML (the default) or HTML4.

The `force_linenos` configuration setting of the CodeHilite extension has been marked as Pending Deprecation, and a new setting, `linenums`, has been added to replace it. See the documentation for the CodeHilite extension for an explanation of the new `linenums` setting. The new setting will honor the old `force_linenos` if it is set, but it will raise a `PendingDeprecationWarning` and will likely be removed in a future version of Python-Markdown.
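As an illustration, the `safe_mode` scheme check described above behaves roughly like this (a hypothetical sketch built on the standard library, not the library's actual implementation):

```python
from urllib.parse import urlparse

# Hypothetical names for illustration; Python-Markdown's real code differs.
WHITELISTED_SCHEMES = {"http", "https", "ftp", "ftps", "mailto", "news"}

def url_allowed_in_safe_mode(url):
    scheme = urlparse(url).scheme.lower()
    # An empty scheme means a schemeless (e.g. relative) URL, which is
    # permitted but checked in other ways by the library itself.
    return scheme == "" or scheme in WHITELISTED_SCHEMES

print(url_allowed_in_safe_mode("https://example.com/"))   # True
print(url_allowed_in_safe_mode("/relative/path"))         # True
print(url_allowed_in_safe_mode("javascript:alert(1)"))    # False
```

Any URL failing a check of this kind is simply discarded from the output when `safe_mode` is enabled.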
The "RSS" extension has been removed and no longer ships with Python-Markdown. If you would like to continue using the extension (not recommended), it is archived on GitHub.

The "HTML Tidy" extension has been removed and no longer ships with Python-Markdown. If you would like to continue using the extension (not recommended), it is archived on GitHub. Note that the underlying library, uTidylib, is not Python 3 compatible. Instead, it is recommended that the newer PyTidyLib (version 0.2.2+ for Python 3 compatibility; install from GitHub, not PyPI) be used. As the API for that library is rather simple, it is recommended that the output of Markdown be wrapped in a call to PyTidyLib rather than using an extension (for example: `tidylib.tidy_fragment(markdown.markdown(source), options={...})`).

The entire code base now universally runs in Python 2 and Python 3 without any need for running the 2to3 conversion tool. This not only simplifies testing, but, by using `unicode_literals`, results in more consistent behavior across Python versions. Additionally, the relative imports (made possible in Python 2 via `absolute_import`) allow the entire library to more easily be embedded in a sub-directory of another project. The various files within the library will still import each other properly even though `markdown` may not be in Python's root namespace.

The Admonition extension has been added, which implements rST-style admonitions in the Markdown syntax. However, be warned that this extension is experimental, and the syntax and behavior are still subject to change. Please try it out and report bugs and/or improvements.

Various bug fixes have been made. See the commit log for a complete history of the changes.
http://pythonhosted.org/Markdown/release-2.3.html
Creating Modules (4:57) with Jason Seifer

Learn how to create simple modules as well as how to augment existing objects with the module we just created.

- 0:00 [??] [Treehouse]
- 0:08 In this badge we're going to be talking about modules in Ruby.
- 0:12 Modules are a construct that serve as a container.
- 0:15 The container can be used for namespacing,
- 0:18 including different data, or augmenting your existing classes with different behavior.
- 0:23 In this first video we're going to see how to create and use our very first module.
- 0:30 Now let's take a look at how to create a very simple module.
- 0:34 In order to do this, I'm going to create a directory to hold these module examples.
- 0:43 In the first one, let's see how to create a simple module.
- 0:55 In order to create a module, you type out the module keyword
- 0:59 and then the name of the module that you want.
- 1:02 In this case, I'm going to create a module called Treehouse.
- 1:10 Naming of modules is just like naming of classes,
- 1:14 and you use title case.
- 1:16 Now let's go ahead and launch IRB and load this file into IRB and see what we get.
- 1:31 Let's take a look at the Treehouse module.
- 1:36 Let's go ahead and inspect it.
- 1:39 We can check out the class of the Treehouse module we just created.
- 1:42 And when we do that, Ruby tells us that it's a module.
- 1:47 This isn't very useful.
- 1:49 But something that you will use modules for is namespacing.
- 1:53 We can go ahead and put our mascot inside of the Treehouse module.
- 1:58 Let's see how that looks.
- 2:06 What I've done here is created a constant inside of the Treehouse module.
- 2:11 Constants are created using all uppercase, and I've set the constant, MASCOT,
- 2:16 to have the value "Mike The Frog".
- 2:20 Now let's go back to our IRB session and try and use it.
- 2:26 We'll have to reload this, so type load and the name of the file once again.
- 2:33 Now we can type Treehouse and then 2 colons and MASCOT.
- 2:41 That will give us our "Mike The Frog" constant.
- 2:45 The 2 colons tell the Ruby interpreter that you want to go inside of the module
- 2:50 to get something.
- 2:54 Our MASCOT constant is inside the module, so we can't access it outside of the module.
- 3:00 Let's see what happens if we try to.
- 3:06 And Ruby yells at us, saying that there is an uninitialized constant.
- 3:11 In the same way that you need to use title casing to create modules,
- 3:14 you can't use lowercasing or underscores to create a module.
- 3:18 Let's go ahead and see what happens if we change the casing.
- 3:26 We can see that Ruby yells at us once again,
- 3:29 saying that it must be a constant.
- 3:32 Instead of just putting constants inside of modules,
- 3:36 we can also put classes inside of modules.
- 3:40 This is something called namespacing,
- 3:42 and it's very useful for separating different parts of your program.
- 3:46 So let's create a class inside of our Treehouse module.
- 3:51 We'll create a frog class and give it an attribute of name.
- 3:59 Now let's go ahead and reload the file.
- 4:03 Just like before when we had to use 2 colons to get at the constant,
- 4:07 we have to do the same thing to get at the class that's inside of our module.
- 4:14 Let's create a new variable called mike.
- 4:19 He's going to be a new frog.
- 4:31 And his name is "Mike The Frog".
- 4:33 Let's see what this variable looks like.
- 4:39 We can see that the class is namespaced when we do the inspect.
- 4:46 Now that we've learned how to create a module,
- 4:48 in our next video we're going to learn how to use include and extend in order to use it.
- 4:55 [??] [Treehouse]
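For reference, the module built up over the course of this video can be sketched in one file like this (a reconstruction from the narration, not the instructor's exact code):

```ruby
# Namespacing with a module: a constant and a class live inside Treehouse.
module Treehouse
  MASCOT = "Mike The Frog"

  class Frog
    attr_accessor :name
  end
end

# The two colons (::) tell Ruby to look inside the module.
puts Treehouse::MASCOT        # Mike The Frog

mike = Treehouse::Frog.new
mike.name = "Mike The Frog"
puts mike.inspect             # shows the namespaced class, Treehouse::Frog
```

Outside the module, a bare `MASCOT` raises an uninitialized constant error, which is exactly the isolation namespacing is meant to provide.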
https://teamtreehouse.com/library/ruby-foundations/modules/creating-modules
CC-MAIN-2016-44
I can't answer even half the questions I'm being asked in my email inbox. Sorry. So I'm moving the conversation here.

The only thing I know is that Xam hope to demo something about PCLs sometime around the time of Evolve - so we're really close now to having amazing cross-platform support.

Until then, the safest route is probably to use the old Mono tools, VSMonoTouch and the patched version of 3.1.1 - that's what I'm using right now.

Alternatively, if you find something that works, please post it here. And if you have questions, then again ask them here - hopefully someone here will be able to help you out.

We will get there - the future is definitely portable.

Thanks (sorry I can't help more!)

Stuart

Full PCL integration pleeeeeeeeeezzzzz! There's plenty that I can work on between now and Evolve, since I'm still in learning and demo mode. But before long I'll need to integrate with MvvmCross and ServiceStack across iOS, Android, Win 8, and Win Phone 8. BTW, the new Xamarin 2.0 looks excellent so far, other than the PCL status.

@StuartLodge Thanks for this thread. I didn't try the new tools in order to avoid headaches, especially with PCLs.

@Xamarin I have a working cross-platform solution using a PCL project and specific GUI projects for each platform. The PCL project references some MvvmCross PCL binaries. For the iOS project I have to compile the PCL against MonoTouch and need to reference the MvvmCross binaries that have been built against MonoTouch too. In order to keep my solution folder portable, I'm using conditional compiling inside the .csproj file of my PCL to reference the correct binaries on each platform. (An example solution can be downloaded here:)

Now I'm asking myself: Does this setup work with the new Xamarin.iOS plugin for Visual Studio? Does the PCL project reference the correct MvvmCross binaries on each platform?
And do I still need the TargetFrameworkProfile hack inside the PCL project? I'm afraid to break my setup and spend a lot of hours testing Xamarin 2.0 and PCL support. If you say my solution doesn't work with the new tooling, then it's not a problem. I can stay on the old tools for a while until PCLs are completely supported.

I was able to open the MvvmCross_NoSamples solution after installing Xamarin 2.0. I had to remove the references to the PortableSupport/Touch/System.Windows.Touch project from all of the iOS projects because of the following build error:

```
The type 'System.Windows.Input.ICommand' exists in both
'c:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\MonoTouch\v4.0\System.dll'
and '..\MvvmCross\bin\Touch\Debug\System.Windows.dll'
```

I did get a slew of warnings about references, but everything still built. I was able to add the iOS project to @slodge 's Sphero solution. I had to hack another target framework file similar to what he suggested at:. I added one to target the official MonoTouch framework that Xamarin 2.0 installs. Add this file: to C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework.NETPortable\v4.0\Profile\Profile104\SupportedFrameworks

I am all new to this so I have no idea if this runs, but it compiles! EDIT: It compiled before I wrote this. When I got back, I had errors. :-/

Good stuff, Seth. What are the errors? Together we will get this stuff working... thanks for doing all the work so far!

These are my errors with your Sphero project. I find some of them weird since ICommand and DefaultImagePath are referenced in the assemblies.

I have spent a bit of time getting mvvmcross to the stage of compiling with the Xamarin 2 VS plugin, however when it comes to running say Tutorial.UI.Cross it fails without an error message, ie just a red cross in the error panel. I disabled the wpf projects as this is currently out of my scope and it did reduce some of the errors.
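For readers following along: the conditional-reference trick discussed earlier in this thread looks roughly like this inside the PCL's .csproj (a hypothetical sketch with made-up paths, condition, and assembly names; the example solutions linked in this thread show the real setup):

```xml
<Choose>
  <!-- When building inside the iOS solution, reference the binaries
       compiled against MonoTouch... (hypothetical condition) -->
  <When Condition=" '$(SolutionName)' == 'MyApp.Touch' ">
    <ItemGroup>
      <Reference Include="Cirrious.MvvmCross">
        <HintPath>..\Libs\Touch\Cirrious.MvvmCross.dll</HintPath>
      </Reference>
    </ItemGroup>
  </When>
  <!-- ...otherwise fall back to the portable binaries. -->
  <Otherwise>
    <ItemGroup>
      <Reference Include="Cirrious.MvvmCross">
        <HintPath>..\Libs\Portable\Cirrious.MvvmCross.dll</HintPath>
      </Reference>
    </ItemGroup>
  </Otherwise>
</Choose>
```

The idea is simply that MSBuild's Choose/When/Otherwise picks a different HintPath per platform, so one PCL project file can reference platform-specific builds of the same assemblies.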
Would certainly be interested to know if you manage to get the sphero project to run.

I am slowly starting to switch over... Just got some existing customer work that can't switch and I don't have a budget for multiple development Macs. One of the biggest issues seems to be the strong naming needed now - that stops the existing pre-built assemblies working.

@valdetero were those build errors with the existing pre-built assemblies (built pre-signing) or with new ones?

@aheydler there's not much debug info there, is there? If you can't get any other info, then can you post a screenshot and maybe some of the build output. I'll then see if we can attract a Xamarin dev here to help debug that.

@StuartLodge believe me, if there was more info I would have posted it! There is literally the red cross icon and a timestamp. If someone can tell me where any logs might be, I will be happy to have a look as I am not yet that familiar with VS. Will post a screenshot and the build output when I can.

Those were with the new assemblies. I had a lot more with the existing ones.

It seems like compilation errors appear because PCLs compile against the "wrong" .NET Framework. Here's why I think so - if I create a class that implements ICommand in my MonoTouch project, it compiles and runs just fine. But if I move the same code into a PCL and try to consume this class in my MonoTouch project, I get the following:

```
error CS0012: The type 'System.Windows.Input.ICommand' is defined in an assembly
that is not referenced. You must add a reference to assembly 'System.Windows,
Version=2.0.5.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e, Retargetable=Yes'.
```

I did go all over MvvmCross and remove references to System.Windows. And I am getting this error in Visual Studio on a Windows machine when trying to compile the solution with Xamarin.iOS (Xamarin 2.0).
this is the build log from a complete build of mvvmcross all with the .NetPortable files as described above along with what appears in VS when attempting to run Tutorial.UI.Touch We've not managed to lure anyone official on to here yet. So I think the best thing to do is to start opening Bugzilla entries on these things. So please start logging them and then link back here too. @Xamarin's response to PCL bug reports is really clear - "All our PCL support is really unsupported" - so that might just ignore these Bugzilla reports, but I would expect Xam to want to urgently fix things like an empty error message - it'll be cheaper to make the error message than to answer support calls about it: But at the same time as saying PCLs aren't yet supported, I think everyone's now genuinely really keen to make this all work - e.g. on Mac we've been given this hack - - love it Sorry - this is so painful. It is a transitional process - the future is portable Stuart, what branch is v2 (latest stable I presume) - is it vNext? Found - yes. Mentioned somewhere in the vNext branch description on GitHub. @MihaMarkic Yes, currently: There are some binaries available on SkyDrive - see the binaries link on the right of The code is stable on Xamarin 1.0 . For XamDroid it also seems good on Xamarin 2.0. For Xamios ... some people seem to be running OK, others not so OK. For all platforms, the special MonoDevelop/XamStudio builds from @jstedfast are brilliant - help a lot with easing setup Getting the Mvx source and binary process into a cleaner state is high in the priority list for v3 - but it's a shifting target with platform changes, PCL changes, await/async, etc - so I do expect to still keep working at it I have customers who want me to keep the VSMonoTouch code alive because of where they are in their dev (and budget) cycles. 
Supporting 5 different target operating systems and multiple Xam and MS product releases simultaneously makes me feel a bit dizzy even with the help of PCLs Stuart I somehow don't want to mess with PCL at this time (having Xamarin Studio installed). I am considering an automated process that would copy/modify csproj files to have Android projects instead (I am not interested in iOS, just Android and Windows) - for the time being. Shame that Xamarin doesn't support PCLs officially yet... Sounds like an interesting plan Probably something I should have done last Summer - but I opted for the 'Stampy' route instead - opted to keep bashing my head against and eventually through the wall instead. For Android only, I think PCLs pretty much just work in VS and in XamStudio 2.0 on the Mac (with @jstedfast patch applied) "For Android only, I think PCLs pretty much just work in VS I'll see, but first I have to get my activation back :-S still waiting for activation fix .. Indeed it opens and compiles (v3 is still a bit chaotic solution, I need to fix it a bit). It looks like just referencing a PCL assembly from within VS is a problem. I mean if I manually edit csproj it works just fine. That said I am almost compiling the Tutorial.Droid - just a problem with newtonsoft.json remaining. Stuart, does debugger stops on breakpoints within PCL sources? Mine doesn't, it stops for Android sources though. Please upvote Dumb question - besides adding myself to CC list, is there any other way to make it as "bloody" important? There are people you can lobby... both on the inside and outside of Xamarin. I hate to say it, but so far making noise seems to be more effective than not... Meanwhile, if anyone's interested then I've got the v3 code building on VS and XS... so next up is QA and documentation... more details on: Will spend a lot of time on this next week! V3 binaries are now being pushed to I'm working on docs on the v3 pages on - this list will hopefully grow a lot v soon. 
Thanks @StuartLodge! I'm a bit confused about the actual situation. The v3 binaries are still alpha, aren't they? Should vNext users upgrade to v3 or is it better to wait a few weeks/months? After upgrading to v3 existing solutions may not work anymore due to a new ViewModel lifecycle . Can I make vNext solutions work with v3 when I use the old ViewModelLocator and register it as described on StackOverflow? You're talking about PCL issues using Xam.iOS inside VS2012. So is it possible to build PCLs against Xam.iOS from inside VS2012? HotTuna is brand new v3 is in Alpha - which means you can use it but you may get hurt. v3 has significant changes from vNext. Porting code from vNext to v3 will be mostly trivial, but the number of assembly, class and pattern changes mean that you should not attempt porting without some time to spare and without a swear box. I think v3 is v v v awesome, but I've spent a lot of time porting code to it - and I do now have Tourettes. No Xamarin products contain any official support for PCLs right now - there are hacks to make them work - see this thread, beg for info from Xamarin, or join me sitting in a dark corner holding my knees, rocking backwards and forwards. Official PCL support was allegedly integrated into Xamarin products last week according to the Hanselminutes podcast... but I get no inside tips on when that means there will be public versions available. If in doubt, assume I'll find out the same time as you, if not afterwards. Developers should think for themselves. Your App is king. Long live the king. Thanks, I think I will stay on the old tools and vNext for a while. I'm planning on shifting v3 to Beta on Wednesday this week. After it's shifted to Beta then it will be a lot harder to change patterns, APIs, namespaces, etc - I'll be self-enforcing some 'stability' So if anyone has any HotTuna requests... then get them in today or tomorrow Ok, I am confused. Just to clarify... 
The Xamarin download link has clear advertisements that say: from

Is it actually possible to do iOS/Android cross-platform development with Xamarin out of the box, or are there hacks and other adjustments required? I've been reading through MvvmCross for the last hour or so, sorting through all of the fragmentation and disorganization, and it's difficult to grasp a status quo. I don't want to pay $299 for their tool if it requires hacks and isn't really supported! I also don't exactly understand if I even need Xamarin, or if I can just use my current developer environment (Visual Studio 2010) for Android/iPhone cross-dev using MonoCross. To that end, does vNext/v3 support iPhone+Android integration out of the box, without having access to a Mac? I thought I read in another thread someone referencing needing a Mac to support iOS/iPhone cross-development. By the way Stuart, let me give an enormous KUDOS to you for your excellent support habits and efforts. It is rare to find someone demonstrating so much accountability for their open source projects. You are a true idol for the open source movement, or I suppose just for online communication habits in general! Thanks, Jordan

Yes. But if you had asked: No.

For PCLs, you do still need some small 'hacks' but these are fairly small now. Within the next month I hope you won't need them - listen to Miguel speaking recently on the Hanselminutes podcast for the reason behind that hope. You can develop without PCLs and the path that way isn't that painful. Doing any iPhone development without access to a Mac is not worthwhile IMHO - Apple make it hard - you are welcome to try, but honestly you will burn more money in time than you can save on a $499 Mac Mini (which later will have a resale value of $399). For general questions on whether you need Xamarin... yes, you need their license - which means: Don't bother starting on the free version - it's too small to do anything - so start with a 1 month trial instead.
Also relies on having a Mac and Xamarin.

Honestly, if you want advice about how to get started, I'm the wrong person to give advice - others here have gone through the pain&pleasure much more recently than me. However, if you are a c# coder, I think you'll find you get going very quickly. If you read through you'll get the idea for MvvmCross. I am working on more docs and getting started soon - but I am holding off from documenting some of the current hacks - especially as they will change any day now...

Stuart
I’ve recently gone down the road of testing all my code using Mocha and Chai, and I aim for 100% code coverage. My current library does an HTTP connection to a backend and I’m hoping to use node-fetch for that. But how do you test a piece of asynchronous code that uses promises or callbacks? Let’s take a look at my code under test:

import fetchImpl from 'node-fetch';

export default class Client {
  constructor(baseUrl, options = {}) {
    const defaultOptions = {
      fetch: fetchImpl
    };
    this.prvOptions = Object.assign({}, defaultOptions, options);
    this.prvBaseUrl = baseUrl;
  }

  fetch(relativeUrl, options = {}) {
    const defaultOptions = {
      method: 'GET'
    };
    let fetchOptions = Object.assign({}, defaultOptions, options);
    return this.prvOptions.fetch(`${this.prvBaseUrl}${relativeUrl}`, fetchOptions);
  }
}

This is a much shortened version of my code, but the basics are there. Here is the important thing – I set a default option that includes an option for holding the fetch implementation. It’s set to the “real” version by default and you can see that in line 6. If I don’t override the implementation, I get the node-fetch version. Later on, I call client.fetch('/foo'). The client library uses my provided implementation of fetch or the default one if I didn’t specify. All this logic allows me to substitute (or mock) the fetch command. I don’t really want to test the functionality of fetch – I just want to ensure I am calling it with the right parameters.

Now for the tests. My first problem is that I have asynchronous code here. fetch returns a Promise. Promises are asynchronous. That means I can’t just write tests like I was doing before – they will fail because the response wouldn’t be available during the test. The mocha library helps by providing a done callback.
The general pattern is this:

describe('#fetch', function() {
  it('constructs the URL properly', function(done) {
    client.fetch('/foo').then((response) => {
      expect(response.url).to.equal('');
      done();
    })
    .catch((err) => {
      done(err);
    });
  });
});

You might remember the .then/.catch pattern from the standard Promise documentation. Mocha provides a callback (generally called done). You call the callback when you are finished. If you encountered an error, you call the callback with the error. Mocha uses this to deal with async tests. Note that I have to handle both the .then() and the .catch() clause. Don’t expect Mocha to call done for you. Ensure all code paths in your test actually call done appropriately.

This still has me calling client.fetch without an override. I don’t want to do that. I’ve got this ability to swap out the implementation. I have a mockfetch.js file that looks like this:

export default function mockfetch(url, init) {
  return new Promise((resolve, reject) => {
    resolve({url: url, init: init});
  });
}

The only thing that the mockfetch method does is create a promise that is resolved and returns the parameters that were passed in the resolution. Now I can finish my test:

describe('#fetch', function() {
  let clientUrl = '';
  let clientOptions = {fetch: mockfetch};
  let client = new AzureMobileClient(clientUrl, clientOptions);

  it('constructs the URL properly', function(done) {
    client.fetch('/foo')
      .then((response) => {
        expect(response.url).to.equal('');
        done();
      })
      .catch((err) => {
        done(err);
      });
  });
});

Note that my mockfetch does not return anything resembling a real response – it’s not even the same object type or shape. That’s actually ok because it’s designed for what I need it to do – respond appropriately for the function under test.
There are three things here:

- Construct your libraries so that you can mock any external library calls
- Use the Mocha “done” parameter to handle async code
- Create mock versions of those external library calls

This makes testing async code easy.
pthread_attr_setstacksize(3T) sets the thread stack size. The stacksize attribute defines the size of the stack (in bytes) that the system will allocate. The size should not be less than the system-defined minimum stack size. See "About Stacks" for more information.

Prototype:

int pthread_attr_setstacksize(pthread_attr_t *tattr, size_t size);

#include <pthread.h>

pthread_attr_t tattr;
size_t size;
int ret;

size = (PTHREAD_STACK_MIN + 0x4000);

/* setting a new size */
ret = pthread_attr_setstacksize(&tattr, size);

Returns zero after completing successfully. Any other returned value indicates that an error occurred. If the following condition occurs, the function fails and returns the corresponding value.

EINVAL - The value of size is less than PTHREAD_STACK_MIN, or exceeds a system-imposed limit, or tattr is not valid.
Hi, We use the Dispatch library in our code and always import as described in the Dispatch docs like this: import dispatch._, Defaults._ The plugin used to be happy with this but at some point in the last few weeks it has started marking Defaults as red saying: Cannot resolve symbol Defaults. Of course the code compiles fine. This reproduces with a simple project with that line as the only import in a source file and with Dispatch as the only dependency. I tried to reproduce this without any dependencies using a variety of packages, objects and imports but everything worked fine. So I'm not sure what's different about Dispatch to cause this. Is this a known issue or should I raise a new one in YouTrack? Thanks, Steve. Hi, Looks like a bug. Please report in YouTrack with sample SBT/Gradle/Maven project (so I'll get dependencies automatically). It's hard to say if it's known or not. Best regards, Alexander Podkhalyuzin.
UG Session - 25th Feb at Zenith Hall, UST Global, Technopark, Trivandrum

Agenda
09:30 - 09:45 Opening / Welcome Note
09:45 - 10:30 Sharepoint tricks by Shoban
10:30 - 11:15 Advanced Windows Debugging by Sujith
11:15 - 11:30 Tea Break
11:30 - 12:00 Windows Phone Programming for beginners by Deepthi
12:00 - 01:00 ASP.NET MVC3 by Shalvin
01:00 - 01:15 Closing Note

Venue
UST Global, 6th Floor, Bhavani, Technopark
Trivandrum-695581, Kerala

Details

Tuesday, February 21, 2012
Kerala Microsoft User Group Meeting
UG Session - 25th Feb at Zenith Hall, UST Global, Technopark, Trivandrum

Tuesday, February 7, 2012
Asp .Net MVC 3 Getting Started

Asp .Net MVC is a complete alternative to Asp .Net Web Forms. Asp .Net MVC emphasizes testability, doesn't conceal how the web works and has excellent control over HTML. Asp .Net MVC is based on the MVC pattern, which dates back to 1978 and the Smalltalk project at XEROX PARC. Asp .Net MVC is built as a series of independent replaceable components, and developers can choose the routing system, the view engine, the controller factory or the ORM. Asp .Net MVC 3 is built on .Net 4. It comes with a new view engine called Razor and tighter integration with jQuery. Still, Asp .Net Web Forms is the superior technology when it comes to intranet applications. ASP .Net 4 has become more web-standard compliant and you can use the Razor view engine. Just like Asp .Net MVC, Asp .Net 4 also supports excellent URL routing features.

Let's start creating an Asp .Net MVC 3 project. In the blog post Asp .Net MVC 2 with Visual Studio 2010 or Visual Web Developer 2010 I had covered the basics of getting started with MVC 2. So I am selecting Razor as the View engine. If you navigate to the Scripts folder in the Solution Explorer of an Asp .Net MVC 3 project you can see a lot more jQuery files than in the previous version.

Controllers and Action Methods

In MVC architecture, incoming requests are handled by controllers. In ASP.NET MVC, controllers are just simple C# classes.
Each public method in a controller is known as an action method. You can invoke an Action Method from the Web via some URL to perform an action. The MVC convention is to put controllers in a folder called Controllers, which Visual Studio created for us when it set up the project. It is the controller's job to construct some data, and it's the view's job to render it as HTML. The data is passed from the controller to the view. One way to pass data from the controller to the view is by using the ViewBag object. This is a member of the Controller base class. ViewBag is a dynamic object to which you can assign arbitrary properties, making those values available in whatever view is subsequently rendered. I have already covered creating controllers in the blog Asp .Net MVC 2 with Visual Studio 2010 or Visual Web Developer 2010, and the process remains the same in Asp .Net MVC 3.

When we return a ViewResult object from an action method, we are instructing MVC to render a view. We create the ViewResult by calling the View method with no parameters. This tells MVC to render the default view for the action. We can return other results from action methods besides strings and ViewResult objects. For instance, a RedirectResult can be used to redirect the browser to another URL. If we return an HttpUnauthorizedResult, we force the user to log in. These objects are collectively known as action results, and they are all derived from the ActionResult class.

The Razor view is quite different from that of the Web Forms View Engine. Refer to my blog post Razor View Engine and Visual Studio for more information on the Razor View Engine.
@{
    ViewBag.Title = "Index";
}

<h2>Index</h2>

using System.Web.Mvc;

namespace MvcApplicationShalvin.Controllers
{
    public class HomeController : Controller
    {
        //
        // GET: /Home/

        public ActionResult Index()
        {
            ViewBag.Name = "Shalvin P D";
            ViewBag.Site = "";
            return View();
        }
    }
}

@{
    ViewBag.Title = "Index";
}

<h2>Index</h2>
@ViewBag.Name <br />
@ViewBag.Site

F# 2.0 Console Application with Visual Studio - II

This is part of the blog series that explores F# 2.0: F# 2.0 with Visual Studio 2010. In the previous blog we explored a Hello world application with F# using the REPL support it provides.

open System

Console.Write "Enter your name : "
let strName = Console.ReadLine()
Console.WriteLine("Hello " + strName)
let c = System.Console.ReadLine()

let is the single most important keyword you use in F# programming: it's used to define data, computed values, functions, and procedures. F# is statically typed. F# also supports type inferencing. What you see is a fully functioning F# application. Notice the code reduction in comparison to other .Net languages like C# or VB .Net. Let's run the application instead of using F# Interactive.

Creating and calling functions

let square n = n * n
let result = square 3
Console.WriteLine(result)

Functions are central to Functional programming. Functional programming views all programs as collections of functions that accept arguments and return values.

Function with Parameters

open System

let add(a, b) =
    let sum = a + b
    sum

Console.WriteLine(add(45, 45))

Indenting denotes the body. There are no curly brackets to denote the body of a function. White spaces are relevant in F#.

Creating a Windows Form

open System.Windows.Forms

let form = new Form(Visible = true, Text = "Shalvin")
let btn = new Button(Text = "Helo")
form.Controls.Add(btn)

Here I am setting a reference to System.Windows.Forms and creating a form with a Button. What enthralls me the most in F# is its terseness.
Monday, February 6, 2012

F# 2.0 with Visual Studio 2010

F# is a functional programming language initially developed by Don Syme and now developed by Microsoft. Though it is inherently functional in nature, it supports imperative and Object Oriented programming as well. It is yet another .Net language, i.e. it emits IL, and it is an excellent starting point for .Net professionals to explore functional programming. F# supports a Read Eval Print Loop (REPL), which makes it easy to learn the language. F# is included in Visual Studio along with C#, VB .Net and Visual C++.

I am starting an F# application and creating a conventional Console Application.

open System

Console.WriteLine "Shalvin"
https://shalvinpd.blogspot.com/2012/02/
CC-MAIN-2017-30
refinedweb
1,124
68.36
Writing CSS is really simple and straightforward, so why is there a need for principles and best-practices while writing CSS? As the project scope increases and as the number of people working on the project increases, the problems become more and more apparent and can cause serious issues down the line. Fixing issues may become harder, duplicated code, complex override chains and use of !important, leftover / unused code (removed elements or features), code that is hard to read, etc. Writing CSS at a professional level will make the CSS code more maintainable, extensible, understandable and cleaner. We're going to look at the five simple and very effective principles that will take your CSS to the next level. Naming principle "There are only two hard things in Computer Science: cache invalidation and naming things." -- Phil Karlton Properly naming and structuring your CSS selectors is the first step to making your CSS more readable, structured and cleaner. Establishing rules and constraints in your naming convention makes your code standardized, robust and easier to understand. This is why concepts like BEM (Block-Element-Modifier), SMACSS (Scalable and Modular Architecture for CSS) and OOCSS (Object Oriented CSS) are popular among many frontend developers. Low specificity principle Overriding CSS properties is very useful, but things can go out of hand pretty quickly on more complex projects. Overriding chains can get really long and complex, you might be forced to use !important to solve the specificity issue and you could get really easily lost when debugging or adding new features. /* Low-specificity selector */ .card {} /* High-specificity selectors */ .card .title {} .blog-list .card img {} .blog-list .card.featured .title {} #js-blog-list .blog-list .card img {} Browser and specificity One of the benefits of following the low specificity principle is performance. Browsers parse the CSS from right to left. 
Let's take a look at the following example: .blog-list .card img {} Browsers parse the selector like this: - Find all imgelements on the page - Keep selected elements that are the descendants of .cardclass - Keep selected elements that are the descendant of .blog-listclass You can see how high-specificity selectors impact performance, especially when we need to globally select generic elements like div, img, li, etc. Using the same level of specificity By using low specificity CSS class selectors in combination with BEM or one of the other naming principles mentioned in the previous section, we can create a performant, flexible and understandable code. Why use CSS classes? We want to keep the same level of specificity, stay flexible and be able to target multiple elements. Element selectors and id selectors do not offer the flexibility that we need. Let's rewrite our previous example using BEM and keeping specificity low. /* Low-specificity selector */ .card {} /* Fixed high-specificity selectors */ .card__title {} .blogList__image {} .blogList__title--featured {} .blogList__img--special {} You can see how these selectors are simple, understandable and can be easily overridden and extended if needed. And by keeping them low-level (a single class), we are guaranteed optimal performance and flexibility. DRY Principle DRY (Don't repeat yourself) principle can be also applied to CSS. Duplicated code in CSS can cause code bloat, unnecessary overrides, reduce maintainability, etc. This issue can be fixed by structuring the code appropriately and having high-quality documentation. Storybook is a great free tool that enables you to create an overview of available frontend components and write high-quality documentation. 
/* Without DRY Principle */
.warningStatus {
  padding: 0.5rem;
  font-weight: bold;
  color: #eba834;
}

.errorStatus {
  padding: 0.5rem;
  font-weight: bold;
  color: #eb3d34;
}

.form-errorStatus {
  padding: 0.5rem 0 0 0;
  font-weight: bold;
  color: #eb3d34;
}

Let's refactor the code so it follows the DRY principle.

/* With DRY Principle */
.status {
  padding: 0.5rem;
  font-weight: bold;
}

.status--warning {
  color: #eba834;
}

.status--error {
  color: #eb3d34;
}

.form__status {
  padding: 0.5rem 0 0 0;
}
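As a side note to the "Low specificity principle" section above (this sketch is mine, not from the article): the weight the cascade compares can be approximated as an (ids, classes, elements) triple, with higher triples winning. The toy counter below handles only the simple selector forms used in this article; attribute selectors, pseudo-classes and :not() are deliberately ignored.

```python
import re

def specificity(selector):
    """Toy specificity counter for simple selectors.

    Returns an (ids, classes, elements) triple; higher tuples win in the
    cascade. Attribute selectors, pseudo-classes/elements and :not() are
    deliberately not handled -- illustration only.
    """
    ids = len(re.findall(r'#[\w-]+', selector))
    classes = len(re.findall(r'\.[\w-]+', selector))
    # element names: tokens that open a compound selector and are not . # or *
    elements = len(re.findall(r'(?:^|[\s>+~])([A-Za-z][\w-]*)', selector))
    return (ids, classes, elements)

# The selectors from the example above, lowest to highest weight:
for sel in ('.card', '.card .title', '.blog-list .card img',
            '#js-blog-list .blog-list .card img'):
    print(sel, '->', specificity(sel))
```

Running it shows why the last selector is so hard to override: a single id outweighs any number of classes.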
Open/Close principle software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification. We've already used the open/close principle in the previous examples. All new features and options need to be added by extension. Let's take a look at this example. .card { padding: 1rem; } .blog-list .card { padding: 0.5em 1rem; } The .blog-list .card selector has few potential issues: - Some styles can be applied only if the .cardelement is a child of .blog-listelement. - Styles are forcibly applied to the .cardelement if placed inside the .blog-listelement, which can produce unexpected results and unecessary overrides. Let's rewrite the previous example: .card { padding: 1rem; } .blogList__card { padding: 0.5em 1rem; } We've fixed the issue by having a single class selector. With this selector, we can avoid unexpected effects and there are no conditional nested styles. Conclusion We've seen how by applying these few simple principles we have significantly improved the way we write CSS: - Standardized naming and structure, and improved readability by using BEM, OCSS, etc. - Improved performance and structure by using low-specificity selectors. - Reduced code bloat and improved code quality with DRY principle - Flexibility and maintainability by using open/close principle - etc._1<< Adrian Bece React, Frontend, Magento 2 certified developer. Magento PWA Studio contributor. Rock and metal music fan. Reads Dune, sci-fi novels and Calvin & Hobbes. Creates amazing interfaces @ prototyp.digital Discussion Adrian I have another principle that conflicts with some of your own, but I feel would solve some of the problems you're sharing. Don't Unset Yourself. That is – do not contradict a previous style that you have applied. Your example: Would be much better served as: This way you do not have to worry about the order of properties to ensure that your classes work as intended. 
You only ever add styles, instead of setting and then unsetting them. Nice overview - It's good to know the actual names of these tips (like "Low specificity"); that helps with remembering it :) Also, didn't realize that browsers search for selectors from right to left! That's interesting... I wonder how much overhead it saves once you realize that - do you know of any studies / tests that show that? Thank you. Glad you found the article useful. Regarding the performance studies, I am not aware of any, but if you come across any, please link them. "Writing CSS is really simple and straightforward" = BS. CSS is getting better but it is still a mess. Brower compatibility probs, compiler-generated CSS, resets, browser pre-fixes, box type, and old legacy days from when they let print designer sit at the table to develop specs... Oh ya and silly issues reading selector from right to left when we read it top to bottom and the browser typically reads everything else from start to finish . . . etc etc. CSS is almost as much of a mess as JavaScript. This article deals with the basics of how we write CSS (selectors and properties concept) and how to write flexible and scalable CSS, not browser compatibility issues, CSS compilers, vendor prefixes, etc. I wasn't saying anything like that. I'm just saying that CSS is not simple and straightforward anymore. I still enjoyed your article though and look forward to seeing another. Thank you for clarifying and I'm glad you've enjoyed the article. CSS indeed had a messy history and the syntax suffered for it, but it keeps improving year after year. Thank you for the very detailed and insightful answer. I guessed as much regarding the performance and the worst-case scenario. In any case, I think that having a single selector is the best way to go in terms of performance, code readability and flexibility. In any case, having several levels of CSS selectors brings up some other issues, as described in "Open/Closed principle". 
I teach webdesign basics and this was incredibly helpful and gave me some ideas on how to approach things with my students. Specificity often is something that leave them scratching their head. Thanks! Thank you, glad you found it helpful. I wish my teachers would show me these best practices from the start. I use BEM at work but I prefer ECSS at home. ecss.io/ Nice. Thank you for sharing. I might use it as well on my projects There is definitely some good takeaways, I highly recommend it. Everytime i see BEM I can't stop thinking how ugly, inconvenient and repetitive it is. That is very true for BEM, but generally it's not a problem if you don't nest classes more than 2-3 levels. I sometimes combine BEM with the "old ways" in order to not have deep nesting of BEM on complex elements. Works fine, still namespaced (but you need to have control of the whole project and be reasonable enough not to write a global ".specificelement" class. .block__element--modifier .specificelement {} Very helpful. Thanks! Thank you. Glad you've found it helpful. Excellent article, thank you very much. Thank you Great one ! Clean and simple explanation. Thank you
OpenCV #005 Averaging and Gaussian filter Digital Image Processing using OpenCV (Python & C++)

Highlights: In this post, we will learn how to apply and use an Averaging and a Gaussian filter. We will also explain the main differences between these filters and how they affect the output image. What does make a good filter? This is a million dollar question.

Tutorial Overview:

1. Averaging filter

What does make a good filter? So, let's start with a box filter. In the Figure below, we see the input image (designed from numbers) that is processed with an averaging filter. We also say box/uniform/blur, and yes, these are all synonyms :-). Here, all the coefficient values have the same weight. That is, the averaging filter is a box filter with all coefficients having the value \(\frac{1}{9} \). The input image is \(F (x, y) \), the filter is \(H (u, v) \), and the result is \(G (x, y) \). Now, as the filter \(H (u, v) \) is being moved around the image \(F (x, y) \), the new image \(G (x, y) \) on the right is generated.

Next, let's say that we process the image below with the averaging box filter. What should we expect as a result? Well, we will get an ugly image like the one on the right. The image on the left is visually appealing and quite smooth. However, the generated image looks nothing like it. There are some unnatural sharp edges at the output \(G (x, y) \). What was problematic with that? The square is not smooth! Trying to blur or filter an image with a box that is not smooth does not seem right. When we want to smooth an image, our goal is to catch the significant pieces of the information (lower frequency content). Subsequently, we will see that a better result will be obtained with a Gaussian filter due to its smooth transitioning properties.

Code for Averaging filter

Both in Python and C++, an averaging filter can be applied by using the blur() or boxFilter() functions.
C++

#include <iostream>
#include <opencv2/opencv.hpp>

using namespace std;
using namespace cv;

int main() {

    cv::Mat image = imread("car.jpg", IMREAD_GRAYSCALE);
    cv::Mat processed_image;

    // we create a simple blur filter or an average/mean filter;
    // all coefficients of this filter are the same
    // and this filter is also normalized.
    cv::imshow("Original image", image);
    cv::blur(image, processed_image, Size(3,3));
    cv::waitKey();
    cv::imshow("Blur filter applied of size 3", processed_image);
    cv::waitKey();

    cv::blur(image, processed_image, Size(7,7));
    cv::waitKey();
    cv::imshow("Blur filter applied of size 7", processed_image);
    cv::waitKey();

    // Here we create an image of all zeros.
    // Only one pixel will be 1.
    // In this example we will generate a very small image so that we can
    // better visualize the filtering effect with such an image.
    cv::Mat image_impulse = cv::Mat::zeros(31, 31, CV_8UC1);
    image_impulse.at<uchar>(16,16) = 255;
    image_impulse = image_impulse * 20;
    cv::imshow("Impulse image", image_impulse);
    cv::waitKey();

    cv::Mat image_impulse_processed;
    cv::blur(image_impulse, image_impulse_processed, Size(3,3));
    image_impulse_processed = image_impulse_processed * 20;
    cv::imshow("Impulse image", image_impulse_processed);
    cv::waitKey();
    // this will produce a small square of size 3x3 in the center.
    // Notice that, since the filter is normalized,
    // if we increase the size of the filter,
    // the intensity values of the square in the output image will be lower,
    // and hence more challenging to detect.

    cv::blur(image_impulse, image_impulse_processed, Size(7,7));
    image_impulse_processed = image_impulse_processed * 20;
    cv::imshow("Impulse image", image_impulse_processed);
    cv::waitKey();

Let's see the results of our code: Interestingly, when we do filtering, the larger the kernel size, the smoother the new image would be. Here below is a sample of filtering an impulse image (to the left), using a kernel size of 3×3 (in the middle) and a 7×7 kernel size (to the right).
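The Python listing did not survive in this copy of the post, so here is a small pure-Python sketch (an illustration of the math, not the original code) of what a normalized box filter computes: the same 1/9-per-coefficient average as cv::blur with a 3×3 kernel, applied to an impulse image. In practice, the one-liner cv2.blur(image, (3, 3)) does this in OpenCV's Python API.

```python
def box_filter(img, k):
    """Normalized k x k averaging (box) filter on a 2-D list of numbers.

    Pixels whose neighborhood would leave the image are left at 0, which is
    enough to show the effect on an impulse image (OpenCV instead pads the
    border by reflection).
    """
    h, w = len(img), len(img[0])
    r = k // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(r, h - r):
        for x in range(r, w - r):
            s = 0.0
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    s += img[y + dy][x + dx]
            out[y][x] = s / (k * k)
    return out

# 7x7 impulse image: a single bright pixel in the middle
impulse = [[0.0] * 7 for _ in range(7)]
impulse[3][3] = 9.0

blurred = box_filter(impulse, 3)
# every pixel of the 3x3 square around the center becomes 9/9 = 1.0
print(blurred[3][3], blurred[2][2], blurred[0][0])  # 1.0 1.0 0.0
```

This also shows the effect described above: the single impulse is spread into a 3×3 square whose intensity drops as the kernel grows, because the normalization divides by more coefficients.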
2. Gaussian filter

So, we all know what a Gaussian function is. But how will we generate a Gaussian filter from it? Well, the idea is that we will simply sample a 2D Gaussian function. We can see below how the proposed filter of a size 3×3 looks. Using the \(3\times 3 \) filter is not necessarily an optimal choice. Although we can notice its higher values in the middle that fall off at the edges and even more at the corners, this can be considered a poor representation of the Gaussian function. Here, we plot a Gaussian function both in 2D & 3D so we gain more intuition about how larger Gaussian filters will look:

\(h \left ( u,v \right )= \frac{1}{2\pi \sigma ^{2}} e^{-\frac{u^{2}+v^{2}}{2\sigma ^{2}}} \)

Here, we can refresh our knowledge and write the exponential part of the Gaussian function explicitly: \(\exp (-\frac{ x^{2}+y^{2} }{2\sigma ^{2}}) \)

Next, if we take an image and filter it with a Gaussian blurring function of size 7×7 we will get the following output. Wow! So much nicer. Compare smoothing with a Gaussian to the non-Gaussian filter to see the difference. In the non-Gaussian case, we see all those sharp edges. With the Gaussian, we get a nice, smooth blur in the new image.

Code for Gaussian filter

C++

// Gaussian filter
// First we will just apply a Gaussian filter on the image;
// this will also create a blurring or smoothing effect.
// Try visually to notice the difference as compared with the mean/box/blur filter.
cv::Mat image_gaussian_processed; cv::GaussianBlur(image, image_gaussian_processed, Size(3,3), 1); cv::imshow("Gaussian processed", image_gaussian_processed); cv::waitKey(); cv::GaussianBlur(image, image_gaussian_processed, Size(7,7), 1); cv::imshow("Gaussian processed", image_gaussian_processed); cv::waitKey(); Output Smoothing with a 7×7 Gaussian filter on the right Python C++ cv::Mat image_impulse_gaussian_processed; cv::GaussianBlur(image_impulse, image_impulse_gaussian_processed, Size(3,3), 1); image_impulse_gaussian_processed = image_impulse_gaussian_processed * 20; cv::imshow("Gaussian processed - impulse image", image_impulse_gaussian_processed); cv::waitKey(); cv::GaussianBlur(image_impulse, image_impulse_gaussian_processed, Size(9,9), 1); // here we have just multiplied an image to obtain a better visualization // as the pixel values will be too dark. image_impulse_gaussian_processed = image_impulse_gaussian_processed * 20; cv::imshow("Gaussian processed - impulse image", image_impulse_gaussian_processed); cv::waitKey(); Output Python C++ // here we will just add random Gaussian noise to our original image cv::Mat noise_Gaussian = cv::Mat::zeros(image.rows, image.cols, CV_8UC1); // here a value of 64 is specified for a noise mean // and 32 is specified for the standard deviation cv::randn(noise_Gaussian, 64, 32); cv::Mat noisy_image, noisy_image1; noisy_image = image + noise_Gaussian; cv::imshow("Gaussian noise added - severe", noisy_image); cv::waitKey(); //adding a very mild noise cv::randn(noise_Gaussian, 64, 8); noisy_image1 = image + noise_Gaussian; cv::imshow("Gaussian noise added - mild", noisy_image1); cv::waitKey(); Output And a medium Gaussian noise was added to the right image Python C++ // Let's now apply a Gaussian filter to this. // This may be confusing for beginners. // We have one Gaussian distribution to create a noise // and other Gaussian function to create a filter, sometimes also called a kernel. 
// They should be treated completely independently. cv::Mat filtered_image; cv::GaussianBlur(noisy_image, filtered_image, Size(3,3), 3); cv::imshow("Gaussian noise severe - filtered", filtered_image); cv::waitKey(); cv::GaussianBlur(noisy_image1, filtered_image, Size(7,7), 3); cv::imshow("Gaussian noise mild - filtered", filtered_image); cv::waitKey(); return 0; } Output Summary Finally, we have learned how to smooth (blur) an image with a Gaussian and non-Gaussian filter. We realize why it is preferable to use a Gaussian filter over a non-Gaussian one. In the next posts, we will talk more about Sobel operator, image gradient and how edges can be detected in images. More resources on the topic: - What is a Gaussian and Why is it Important in Data Analysis - Pixel Intensity Changes and Adding Watermarks - Gaussian Filter from Scratch inPython - Common Type of Noise
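To make the sampling step concrete, here is a small illustrative sketch in plain Python (the tutorial's real code uses OpenCV's cv::GaussianBlur; the helper below is hypothetical and not part of the post's source). It evaluates the Gaussian formula above on an odd-sized grid and normalizes the weights so they sum to 1:

```python
import math

def gaussian_kernel(size, sigma):
    """Sample exp(-(u^2 + v^2) / (2*sigma^2)) on a size x size grid centered
    at the origin, then normalize so the weights sum to 1 (normalization
    replaces the 1/(2*pi*sigma^2) factor from the closed-form formula)."""
    if size % 2 == 0:
        raise ValueError("kernel size should be odd")
    half = size // 2
    kernel = [[math.exp(-(u * u + v * v) / (2.0 * sigma * sigma))
               for u in range(-half, half + 1)]
              for v in range(-half, half + 1)]
    total = sum(sum(row) for row in kernel)
    return [[w / total for w in row] for row in kernel]
```

For size 3 and sigma 1, the center weight is the largest and the corner weights are the smallest, matching the picture of the 3×3 filter described above.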
http://datahacker.rs/opencv-average-and-gaussian-filter/
David Carlisle wrote:
>> are you sure of that ?
> yes:-)

I had no doubt about that :D

> If I run saxon on the example document that you posted, I get:

Well, neither Xalan nor xsltproc complains :( Same results for both... (even with the permutation)

> $ saxon prio.xml prio.xsl
> Recoverable error
> Ambiguous rule match for /doc[1]/chapter[1]/para[1]
...
> the rules are _very_ simple.
>
> If it is a sigle name, such as "para" then it is priority 0
> if it is a child or attribute axis followed by a single name
> child::para, attribute::para, @para it would also be 0
>
> If its a namespace wildcard such as x:* it's -0.25
>
> If its a node test such as comment() it's 0.25
>
> otherwise it is 0.5
>
> so:
>
> //para has priority 0.5
> chapter/para has priority 0.5
> para has priority 0

If you run xsltproc in verbose mode, you get:

$ xsltproc -v s05b.xsl xml05.xml
Added namespace: xsl mapped to
xsltPrecomputeStylesheet: removing ignorable blank node
xsltParseStylesheetProcess : found stylesheet
xsltCompilePattern : parsing '//para'
xsltCompilePattern : parsed //para, default priority 0.000000
added pattern : '//para' priority 0.000000
xsltCompilePattern : parsing 'chapter/para'
xsltCompilePattern : parsed chapter/para, default priority 0.500000
added pattern : 'chapter/para' priority 0.500000
parsed 2 templates
added pattern : '//para' priority 0.000000
added pattern : 'chapter/para' priority 0.500000

Hummm... then is this a bug?

cheers
Fred
--
XPath free testing software :
Frédéric Laurent

XSL-List info and archive:
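The default-priority rules quoted in this thread can be turned into a tiny sanity-check function. The sketch below is illustrative only (a real XSLT processor parses the full pattern grammar; the function name is hypothetical) and handles just the pattern shapes that appear in the discussion:

```python
def default_priority(pattern):
    """Toy approximation of XSLT 1.0 default template priorities, covering
    only the simple pattern shapes discussed in this thread."""
    p = pattern
    for axis in ("child::", "attribute::"):   # child::para, attribute::para
        if p.startswith(axis):
            p = p[len(axis):]
    if p.startswith("@"):                     # @para
        p = p[1:]
    if p.endswith(":*"):                      # namespace wildcard, e.g. x:*
        return -0.25
    if p.replace("-", "").replace("_", "").isalnum():  # a single name, e.g. para
        return 0.0
    return 0.5                                # anything else, e.g. //para, chapter/para
```

By these rules, xsltproc's "parsed //para, default priority 0.000000" in the trace does indeed look wrong: //para should default to 0.5.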
http://www.oxygenxml.com/archives/xsl-list/200401/msg00236.html
NAME
strverscmp - compare two version strings

SYNOPSIS
#define _GNU_SOURCE         /* See feature_test_macros(7) */
#include <string.h>

int strverscmp(const char *s1, const char *s2);

DESCRIPTION
Often one has files jan1, jan2, ..., jan9, jan10, ..., and it feels wrong when ls(1) orders them jan1, jan10, jan2, ..., jan9. The strverscmp() function compares the two version strings s1 and s2, treating runs of digits numerically, so that such names sort in the expected order.

RETURN VALUE
The strverscmp() function returns an integer less than, equal to, or greater than zero if s1 is found, respectively, to be earlier than, equal to, or later than s2.

ATTRIBUTES
For an explanation of the terms used in this section, see attributes(7).

CONFORMING TO
This function is a GNU extension.

EXAMPLE
The program below can be used to demonstrate the behavior of strverscmp(). It uses strverscmp() to compare the two strings given as its command-line arguments. An example of its use is the following:

$ ./a.out jan1 jan10
jan1 < jan10

Program source

#define _GNU_SOURCE
#include <string.h>
#include <stdio.h>
#include <stdlib.h>

int
main(int argc, char *argv[])
{
    if (argc != 3) {
        fprintf(stderr, "Usage: %s <string1> <string2>\n", argv[0]);
        exit(EXIT_FAILURE);
    }

    int res = strverscmp(argv[1], argv[2]);

    printf("%s %s %s\n", argv[1],
           (res < 0) ? "<" : (res == 0) ? "==" : ">",
           argv[2]);

    exit(EXIT_SUCCESS);
}
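The digit-aware ordering that strverscmp() provides can be approximated with a short sketch. This is illustrative only: the real GNU implementation has additional rules for digit sequences with leading zeros (compared as fractional parts), which this simplified version ignores.

```python
import re

def verscmp(s1, s2):
    """Simplified strverscmp-style comparison: split each string into runs
    of digits and non-digits, and compare digit runs numerically. Returns
    a negative, zero, or positive value, like strcmp(3)."""
    def key(s):
        # Tag runs so digit runs never compare directly against text runs.
        return [(0, int(run)) if run.isdigit() else (1, run)
                for run in re.findall(r"\d+|\D+", s)]
    k1, k2 = key(s1), key(s2)
    return (k1 > k2) - (k1 < k2)
```

With this ordering, jan1 sorts before jan2, which sorts before jan10, which is the behavior the EXAMPLE section demonstrates.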
https://manpages.debian.org/stretch/manpages-dev/strverscmp.3.en.html
Introduction One of the most important aspects of all web applications is the Application Programming Interface (API), since it is the glue that allows the ends of a given communication channel to know exactly what to do. Because it is important for APIs to be robust, scalable and reliable, a lot of manual effort goes into maintaining static APIs. In fact, many tech companies set aside full-time roles just for designing and maintaining the APIs. There is only one problem that we clearly missed all these years: APIs were never supposed to be static. It can be argued that a given web app is only as good as the data it is able to access and display. While we are fortunate to live in a world full of data sources, we only end up using the data sources we have access to (so, mathematically, probably works out to a very small percent of the world's data). Usually, each data source has its own unique API requirements, and this makes it a total drag whenever a new data source is to be used. Usually, it requires sufficient time allocation to read lengthy API docs, iterate over code that is only as robust as the API, and takes the developer away from other tasks on the backlog. This time and development cost can be incurred with every new incorporation of a data provider. Even if an app only has to concentrate on a single source, such as it's own backend, existing API models can still make iterating unnecessarily time consuming. And I would argue, a web app that relies on only one data source can quickly become a very boring app, since more often than not, its users will require constant engagement and different kinds of stimuli. 
Let's analyze what I perceive to be the most commonly used API model: (simplified greatly)

In this model, this is how I view it:

- The server owns the API; the client-side developer has to keep up to date with lengthy API docs.
- The client makes requests, the server responds.
- The client is expecting a single response, so if something happens in the time that the server performs the requested service, it will not be communicated back to the client. No notifications in this model, just a response.
- The communication is uni-directional; requests go one way, responses go the other.
- When the server's API changes, all clients are blocked from communicating with the server until they update their request methods, unless the server provides access to previous versions.

This is a terrible model because it's not reliable, or if it is reliable, it is costly, because the server has to maintain all versions of the code just so older clients can use it. Newer versions of code include bug fixes and other enhancements, so it may be counter-productive for a client to insist on using old buggy code anyway.

It may be much more beneficial to take a step back and really think about what our communication points on the web look like. This is illustrated in the next diagram. In the diagram, I still use the terms "server" and "client" because that's what everyone is familiar with, but I would prefer the term "IO node" for each point. This picture zooms out of the previous model to think about many IO nodes on a given network. Here's how to view this model:

- Each line represents bi-directional IO.
- Each client and server can be thought of as an IO node.
- Each IO node can emit or listen for events at any given time. Therefore, each node can have its own API it wishes to expose at any given point in time. Yes, the client can have an API.
- Since those events are known at run-time, each side can communicate the events it can emit and listen for; i.e., each node can communicate its API. This means that if a foreign IO node makes an appearance, indicated by "server 3", it can communicate its API to any or all nodes, and those nodes will know how to communicate with that new node, all without having prior knowledge of its API.
- More importantly though, each node can communicate its node type, so that if two nodes are identical, they can be considered peers, and it can be deduced that peers must already know each other's APIs.
- This model is only as robust as the API format that all sides agree on, but if the format is simple, it can work!

A small digression

I like to think of the client and server as being separated by great physical distances. Indeed, this is already true, as communication has to travel across long cables, bounce off satellites, etc. The response a client gets from a server should be expected to take some time. However, I like to take a slightly more extreme view. I like to think of the client as someone traveling to a completely different planet, Mars or Pluto for example. That client will be even further away, and to survive, she must constantly communicate with IO servers back on Earth. In the years of her astronomical travels, more than likely both sides of this communication will morph in some way, and both sides will have to adapt to each other's communication. Our beloved astronaut will not have the luxury of familiarizing herself with the latest API docs; she will simply have to make do with whatever the server sends her. What she observes as the "latest API" will, from Earth's perspective, already be a few versions old (physics), so maybe if the server maintains only a few prior versions, she'll have a chance at surviving. This may be an extreme model, but one that still applies to our web's constantly changing needs and APIs.
And when the time comes to travel to distant planets, we'll be prepared.

The KISS Dynamic API Format

If I can reference an old, but worthy, acronym from the 60s, "KISS":

"The KISS principle states that most systems work best if they are kept simple rather than made complicated; therefore, simplicity should be a key goal in design, and unnecessary complexity should be avoided." -Wikipedia

This is the design goal for what I devised as the "KISS Dynamic API Format". If the high-level format description cannot fit onto a Post-it® note, it will have failed the KISS principle.

At a high level, the KISS format looks like this:

At the highest level, the format is simple: each IO node specifies its label and version. If a given node presents the same label and version as another node, it can be considered a peer, at which point that node does not need any extra information. Peers already know each other's abilities. Nodes that are not peers, however, require more information: supported events and methods. (NOTE: the focus of this discussion is the IO model. A separate security model could possibly be implemented to help validate that IO nodes are who they say they are.)

If any of the nodes evolve, they must update their API and communicate this new API with an updated version. Then, an IO node receiving this information can choose to update its API cache if it detects a version mismatch. If a label is not specified, the client will just have to rely on its own alias for that API. Since the client already knows the domain, port, and namespace it is communicating with, it is straightforward for it to create whatever aliases it wants (e.g., apis['localhost:8080/chatRoom']). If a version is not specified, the client will always have to assume a version mismatch and request the full API payload at the start of each new connection; i.e., the client won't be able to rely on or take advantage of an API cache.
Therefore, while versioning is optional, it is highly recommended.

Each node can have its own set of events and methods. "evts" means the node will emit those events, while "methods" means the node will listen for those events (and run its own methods of the same names, respectively).

KISS: The "evts" format

Let's drill down into the "evts" format to see what it can look like: (again, it must fit on a Post-it®)

Here, "evts" takes the following form: a JSON object where the object properties are the event names, whose corresponding values are also JSON objects, optional but highly recommended. This makes it easy to write multiple events and keep things organized by event. Each event name points to a JSON object containing the following optional, but highly recommended, properties:

- methods: an array of strings, where each string represents a method name emitting that event. This makes it easy for the receiver to organize event data by method name, in case different methods emit the same event. If omitted, the receiver would have to cache the emitted data in a more general, less organized way.
- data: the schema that the client can expect to receive and use to validate incoming data. It is recommended that default values are used in the schema, since those values also indicate the data type (in Javascript, typeof(variable) tells us the type for primitives). This makes for simpler and more readable code, in my opinion.
- ack: a boolean indicating whether or not the emitted event expects to be acknowledged. (This may or may not be needed, to be explained in a follow-up article. It may be useful to know, however, if code is blocking while waiting for an ack, when an ack will never get sent.)

KISS: An example using the "evts" format

In this example, this API has label "mainServer" and is at version 1.02. It will emit the events "itemRxd" and "msgRxd". A client can expect that the methods emitting "itemRxd" will be either "getItems", "toBeAdded", or neither.
It's up to the server to still specify the method that emitted the event so that the client can organize its data correctly. When the server emits "itemRxd", the client can expect the data JSON to contain "progress", which is specified as type Number (defaulted to 0), and "item", which is specified as type Any (and defaulted to an empty object). In this way, both the type and the default value are represented in a simple and compact way. As time goes on, the server may wish to make "item" of type "Item" instead of "Any", to help the client validate each item (e.g., Item schema = { name: '', description: '', unitCost: '' }). Here is an example:

function getItems(msg){
  socket.emit(
    'itemRxd', // event: 'itemRxd'
    {
      method: 'getItems', // specify the method so the client can organize it
      data: {
        progress: 0.25, // getItems method is 25% complete, notify the client...
        item: { name: 'milk' }
      }
    }
  );
}

The other event is "msgRxd". This entry doesn't specify any method, only the schema for the data. The client can expect to receive the "date" and the "msg". Since no methods are specified, the client can expect the event to come from any or all methods on the server.

KISS: The "methods" format

While the "evts" container describes the output of a given node, the "methods" describe the input to that node, and what the corresponding response can be. This is what the format can look like:

The format is a JSON object, where the properties represent the supported method names. Each method name points to a corresponding JSON object, which describes:

- msg: the message schema that the receiving node expects (a "msg" JSON object)
- resp: the response schema the node expects to respond with, if any. If the response specifies a schema surrounded by square brackets, that specifies an Array of that schema.
One potential benefit of providing these schemas in real time could be automatic UI creation; that is, certain types could help determine which UI elements are best suited for those types, especially if the types are primitives. For example, if a given msg schema specifies String and Number types, the String types could translate to <input type="text" /> while Number types could translate to <input type="number" />. Entire form controls could probably be created on the fly in this manner. Likewise, textual responses could be attached to <div class="resp"></div> elements. Styling could still largely be handled by CSS.

KISS: An example using the "methods" format

In this example, the API specifies two methods, "getItems" and "getItem". The "getItems" method does not specify a "msg" schema, so "msg" can be anything (or nothing) because it will be ignored. The method will only return an Array of type "Item". The Item schema is defined as a JSON object of "id", "name", and "desc", all empty strings (type String). The "getItem" method, however, specifies a "msg" schema: a JSON object with a property "id" of format String (defaulting to an empty string). When the client calls this method, the server expects that the client will provide an id of the correct type (String). It will respond with type Item.

Conclusion

Presented here was a lengthy, but hopefully not too confusing, discussion of how APIs can be made dynamic, so that they can adapt to changes made by both sides of a communication channel. This will most likely be a very new concept for many people, so my next article will describe the exact implementation of this, which will be released with nuxt-socket-io v1.0.22. That article will try to explicitly highlight the benefits using concrete examples. Expect pain points at first, because it is a learning curve, but I am hopeful we'll both be glad after climbing the curve (yes, we're climbing the curve together).
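One way to act on such a schema is to use each default value both as the fallback and as the type contract. The sketch below is illustrative only: the article deliberately leaves the validation strategy open, and the function and schema names here are hypothetical.

```python
def validate(msg, schema):
    """Check an incoming msg against a KISS-style schema in which each
    default value doubles as the expected type."""
    out = {}
    for field, default in schema.items():
        value = msg.get(field, default)           # fall back to the default
        if not isinstance(value, type(default)):  # the default's type is the contract
            raise TypeError("%s: expected %s, got %s" % (
                field, type(default).__name__, type(value).__name__))
        out[field] = value
    return out

# The "getItem" method described above expects a msg matching { id: '' },
# i.e., an id of type String.
get_item_msg_schema = {"id": ""}
```

Passing an id of the wrong type raises immediately, which is exactly the kind of check a receiving node could perform against a peer's advertised "methods" schemas.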
https://dev.to/richardeschloss/re-thinking-web-apis-to-be-dynamic-and-run-time-adaptable-31ek
Home › Forums › .NET libraries › Xceed Zip & Real-Time Zip for .NET › READ ME: Latest notes and beta releases for Xceed Zip for .NET

Xceed Support (Member), April 20, 2010 at 7:20 pm, post count: 5658

Beta builds

From time to time, we will post beta builds of an upcoming service release here. We will do so if we feel the included bug fixes will affect and interest a significant number of clients. You can be automatically notified of new beta builds by clicking the 'Enable Email Subscription' button above. The beta builds are removed once they become the general release.

Although these builds are tested and fully functional, use them at your own risk. They are not full releases and only contain the component assemblies.

Below is a list of changes included in this beta build. If you're affected by one of the issues below, download the beta version and see if it resolves your issue:

- Fixed a bug where GZipArchive did not always validate the data's checksum when decompressing. [163849]
- Fixed issues with LZMA that caused fatal exceptions when run on .NET Framework 4.5.1 and later. [163634]
- SFtp files are now seekable. This allows better interoperability with Zip.NET. For example, selected items in a remote zip file accessed via a SFtpFile can be extracted to a local folder without downloading the entire archive. [163462]
- ZippedFile and ZippedFolder constructors now only look for invalid characters in the name if the item doesn't exist in the archive. An item already in the archive is always accepted. [163624]
- Setting the name of a ZippedFile and ZippedFolder no longer uses the System.IO.Path namespace as it checks for invalid characters for Windows. It now uses Xceed.Utils.Paths.Path.GetFileNameSimple(), which is universal. [163624]
- Fixed a bug where a ZipIOException was thrown when unzipping empty files that use WinZipAES and Stream.Read() is never called (directly or indirectly) on the file. [163671]

No release is scheduled at this time.
You can get the beta build for version 5.8, which targets the .NET Framework 4.0 and later, here: Xceed.Zip.NET.5.8.15522.0.zip

You can get the beta build for version 4.9, which targets the .NET Framework 2.0/3.5 and lower, here: Xceed.Zip.NET.4.9.15522.0.zip

You can always get the official full release on the main website, but that release does not contain the fixes described here.

Known issues being worked on

- None at this time.

Update history of this post

- October 22nd, 2015: Posted build 15522.0 for versions 4.9 and 5.8 for the GZip fix.
- October 20th, 2015: Posted build 15520.0 for versions 4.9 and 5.8 for LZMA fixes in .NET 4.5.1+, SFtp, ZippedFile/ZippedFolder.

Imported from legacy forums. Posted by Jb [Xceed] (had 6392 views)
https://forums.xceed.com/forums/topic/READ-ME-Latest-notes-and-beta-releases-for-Xceed-Zip-for-NET/
The spec says: "If port is a port to which the user agent is configured to block access, then throw a SECURITY_ERR exception. (User agents typically block access to well-known ports like SMTP.)" Created attachment 44178 [details] proposed patch Comment on attachment 44178 [details] proposed patch Dave Levin reminded me that SecurityOrigin is not in platform/, so this is a layering violation. Thinking... Created attachment 44191 [details] proposed patch style-queue ran check-webkit-style on attachment 44191 [details] without any errors. Comment on attachment 44191 [details] proposed patch > + Move isDefaultPortForProtocol() to KURL, because that's a better place for it (SecurityOrigin > + is not even in WebCore/platform directory). I think KURL.h is a good source file for this. But I would slightly prefer a free function to a static member function. > + typedef HashMap<String, unsigned> DefaultPortsMap; > + DEFINE_STATIC_LOCAL(DefaultPortsMap, defaultPorts, ()); > + if (defaultPorts.isEmpty()) { > + defaultPorts.set("http", 80); > + defaultPorts.set("https", 443); > + defaultPorts.set("ftp", 21); > + defaultPorts.set("ftps", 990); > + } Is it safe to use a case-sensitive map for this? Do callers all lowercase the protocol first? Should we assert that the passed in string has no uppercase ASCII letters in it? > +bool KURL::portAllowed(const KURL& url) This should be an ordinary member function, not a static member function. Or alternatively it could be a free function. > + // If the port is not in the blocked port list, allow it. > + if (!std::binary_search(blockedPortList, blockedPortListEnd, port)) > + return true; I'd like to see "using namespace std" rather than "std::binary_search". I'd like to see an assertion that the list is sorted properly, because it seems it would be easy to accidentally add a new port and not keep the list sorted. I'm going to say review- because I think you should do at least one of the things I suggest above. 
If you don't agree, feel free to put this same patch up for review again. Created attachment 44255 [details] updated patch > Is it safe to use a case-sensitive map for this? Do callers all lowercase the > protocol first? Should we assert that the passed in string has no uppercase > ASCII letters in it? They do, but other protocol-related functions in KURL.h allow non-lowercase input, so for consistency, this one should likely do so, too. An assertion would be a very weak defense, as it won't fire before the problem actually occurs, which is unlikely to happen in testing. > I'm going to say review- because I think you should do at least one of the > things I suggest above. Most or all of these comments are about moved code, but I'm cool with addressing them, as long as the patch doesn't get rejected for having changes not related to its main purpose :) Attachment 44255 [details] did not pass style-queue: Failed to run "WebKitTools/Scripts/check-webkit-style" exit_code: 1 WebCore/platform/KURL.cpp:637: Tests for true/false, null/non-null, and zero/non-zero should all be done without equality comparisons. [readability/comparison_to_zero] [5] Total errors found: 1 I think that strcmp comparisons should be an exception to the rule, but I can change this when landing if a reviewer tells me to. style-queue false positive filed:. Thanks. Comment on attachment 44255 [details] updated patch > // JavaScript URLs are "valid" and should be executed even if KURL decides they are invalid. > // The free function protocolIsJavaScript() should be used instead. > - ASSERT(strcmp(protocol, "javascript") != 0); > + ASSERT(strcasecmp(protocol, "javascript") != 0); I think that it's not good for us to use strcasecmp since, like the <ctype.h> functions, it depends on the POSIX locale. While I don't think it's urgent to do this, I think we should eventually outlaw strcasecmp in WebKit code with a technique similar to the functions like islower. 
One way to rewrite our existing uses of strcasecmp would be to overload the equalIgnoringCase function in PlatformString.h so it can work on two C-style strings. However, this slightly conflicts with a different idea. Dan Bernstein and I discussed making the C-style string arguments to functions like equalIgnoringCase be designed for string literals. So instead of case folding such strings, it would assert they have no non-ASCII characters or uppercase ASCII characters in them, along the lines of what is done in the protocolIs function in KURL.cpp. Sorry for the long aside -- not really relevant to this bug.

> + DEFINE_STATIC_LOCAL(DefaultPortsMap, defaultPorts, ());
> + if (defaultPorts.isEmpty()) {

Sometimes I wish we had a better way to initialize maps than an explicit isEmpty check.

> +bool isDefaultPortForProtocol(unsigned short port, const String& protocol);
> +bool portAllowed(const KURL&); // Blacklist ports that should never be used for Web resources.

Two thoughts about these:

1) Longer term we might want to move these URL policies *slightly* higher level than the URL class itself. Another source file in the platform directory rather than URL.h itself. Perhaps even in the networking subdirectory of platform.

2) The second function probably is easier to understand if its sense is reversed, since the concept is a blacklist. The name could be something like hasForbiddenPort or portShouldBeBlocked; maybe even just portBlocked. One other subtle point is that it's not ports that are blocked, it's specific port/protocol combinations, so ideally the very short name would reflect this. Maybe hasForbiddenPortProtocolPair -- well, I'm sure we could do better than that. We might think of some even better name. Since the function is still used in only a small number of places I think it's practical to rename it later, so I'm not too worried about landing it with any of these names.
> - LOG_ERROR("Error: wrong url for WebSocket %s", url.string().utf8().data());
> + LOG(Network, "Wrong url scheme for WebSocket %s", url.string().utf8().data());

This change means that you think most people are not interested in the error. LOG_ERROR is used for situations so unusual that anyone using a debug build would want to see a message on the console, whereas the LOG(Network) variant is for things that someone explicitly wants to turn on. It's normally used for logging that occurs even in non-failure cases. I think that neither LOG_ERROR nor LOG is really what we're after. The people most interested in these types of errors are probably using the console in the web inspector, so the big win is hooking this up to that, something the people working on WebSocket have discussed. Let's make sure we do it. I think I slightly prefer LOG_ERROR here, though.

r=me
https://bugs.webkit.org/show_bug.cgi?id=32085
CC-MAIN-2020-05
refinedweb
1,104
56.35
Implements the BitmapData native type. More... #include <BitmapData_as.h> Implements the BitmapData native type. All functions can be called if the BitmapData has been disposed. Callers do not need to check. Construct a BitmapData. The constructor sets the immutable size of the bitmap, as well as whether it can handle transparency or not. Referenced by gnash::Bitmap::construct(). Free the bitmap data. Whether the BitmapData has been disposed. Referenced by begin(), gnash::Bitmap::Bitmap(), end(), fillRect(), and getPixel(). Fill the bitmap with a colour starting at x, y. Negative values are handled correctly. References disposed(), height(), and width(). Returns the value of the pixel at (x, y). Returns 0 if the pixel is out of range or the image has been disposed. References disposed(), height(), and width(). Return the height of the image. Do not call if disposed! References gnash::image::GnashImage::height(). Referenced by fillRect(), and getPixel(). Set a specified pixel to the specified color. Retains transparency value for BitmapDatas with transparency. Set a specified pixel to the specified color. Overrides Relay::setReachable(). Reimplemented from gnash::Relay. Referenced by gnash::Bitmap::markReachableObjects(). References gnash::image::TYPE_RGBA. Return the width of the image. Do not call if disposed! References gnash::image::GnashImage::width(). Referenced by fillRect(), and getPixel().
http://gnashdev.org/doc/html/classgnash_1_1BitmapData__as.html
CC-MAIN-2014-15
refinedweb
208
55.81
Post your Comment Student average to maintain this status is to maintain an average not below 1.8. write a java program.... your program should display and the number of units of each subject. your program should display the average grade of the student and would display the message calculate average and write a program and calculate your final average in this course. The details of the calculating can be found in the course syllabus. Test your program...calculate average Design and write a java program to determine all calculate average calculate average Question 2 cont.d Test case c: home works/labs 10 9 9 9 10 10 9 9 10 10 9 9 10 test scores: 65 89 note, the program should work with different numbers of home works/labs Hibernate criteria average Example Hibernate criteria average Example How to find the average in Hibernate? Thanks Example of finding average on a filed in Hibernate using criteria. For this you have to use Projections.avg() in your program. Check Calculate sum and Average in java Calculate sum and Average in java How to calculate sum and average in java program? Example:- import java.io.BufferedReader; import java.io.IOException; import java.io.InputStreamReader; public class AvarageTest Average of Array Average of Array In this section, we will learn how to get an average of array. For this, first of all we have to define a class name "ArrayAverage" that has double Average Average I have created a file that reads in a text file for a combo..., assignments, etc.) so I can have the average of each item. I then need to graph... java.io.FileReader; /** * * @author Owner */ public class MainScreen extends Finding Average in JSP Finding Average in JSP In mathematics, an average is a measure of the middle of the data set. The average value is the total amount divided by the total Class Average Program Class Average Program This is a simple program of Java class. In this tutorial we will learn how to use java program for displaying average value. 
The java instances Hibernate Criteria average example Hibernate Criteria average example - Learn how to perform Hibernate Criteria Average Query Here we will use the avg() method of org.hibernate.criterion.Projections factory for getting the Projection instance for performing the average Highest average score among 25 students Highest average score among 25 students a program that prompts the user to enter grades of 25 students in a class and grade them into 4 lessons... again the grade) calculate average per student and will calculate the highest how to find [count the number of integers whose value is less than the average value of the integers] of integers whose value is less than the average value of the integers. Your program is to display the average integer value and the count of integers less...how to find [count the number of integers whose value is less than the average Average Age of my Class? Average Age of my Class? average age of my class SQL Average SQL Average SQL Average, the part of Aggregate Function. SQL Average is used to compute average value of the records in a table of the database. Understand How to calculate the average in Hibernate? How to calculate the average in Hibernate? Hi, I have to calculate the average in Hibernate. How to calculate the average in Hibernate? Thanks... of calculating the average on the fee field in database: Projections.avg("fee PHP Get Average - PHP PHP Get Average I am writing a method to calculate the average rating of the posted answers? can anyone help me with the method to calculate rating in PHP. In my code, we are providing maximum four options to the user to rate Calculate an average Age of my Class? Calculate an average Age of my Class? average age of my class java program java program write a java program to display array list and calculate the average of given array array average across column array average across column How can I find the overall averages of the columns in this array? 
when i run my code it gives me an output that looks like this (showing me wrong averages): How can I find the overall averages MySQL Average Command MySQL Average Command This example illustrates how to execute the Average command in MySQL. In this example we create a select query to find the average of 'lastAccess' field. Query   SQL Average Count SQL Average Count Average Count in SQL is used to count the aggregate sum of any field... example ,which helps you to calculate the average count of any records specified java program java program write a java script program that would input the ff:Student Name,Average,Tuition Fee and output Total Tuition Fee. Formula: Total Tuition Fee=Tuition Fee-Discount If average is: 95-100 100% discount 90-94 25% 85-89 java program java program write a java script program that would input the ff:Student Name,Average,Tuition Fee and output Total Tuition Fee. Formula: Total Tuition Fee=Tuition Fee-Discount If average is: 95-100 100% discount java program java program write a java program to create an array of size 10 by taking input from bufferreader and find out the average of array elements from that array program in c program in c Write a program that inputs five different integers from the keyboard, then print the sum, the product, the smallest and the largest... 2 7 4 6 Sum is: 23 Average is: 4.5 Smallest: 1 Largest 7 Post your Comment
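Most of these threads boil down to the same computation. As a quick illustration, here is a minimal sketch of the sum/average and the below-average count, using the homework scores from "test case c" above. (The threads are mostly about Java; this sketch is in Python purely to keep it short, and the logic carries over directly.)

```python
def average(values):
    """Return the arithmetic mean of a non-empty sequence of numbers."""
    if not values:
        raise ValueError("average of an empty sequence is undefined")
    return sum(values) / len(values)

def count_below_average(values):
    """Count how many entries fall below the mean (one of the thread questions above)."""
    avg = average(values)
    return sum(1 for v in values if v < avg)

# The homework/lab scores from "test case c" in the threads above:
homeworks = [10, 9, 9, 9, 10, 10, 9, 9, 10, 10, 9, 9, 10]
print(round(average(homeworks), 2))    # 9.46
print(count_below_average(homeworks))  # 7 (every 9 is below the mean)
print(average([65, 89]))               # 77.0 (the two test scores)
```

The same structure works for any of the variants above: read the values, sum them, divide by the count.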
http://www.roseindia.net/discussion/21591-Class-Average-Program.html
In order for a C++ program to use a library compiled with a C compiler, it is necessary for any symbols exported from the C library to be declared between `extern "C" {' and `}'. This code is important, because a C++ compiler mangles(7) all variable and function names, whereas a C compiler does not. On the other hand, a C compiler will not understand these lines, so you must be careful to make them invisible to the C compiler. Sometimes you will see this method used, written out in longhand in every installed header file, like this:

#ifdef __cplusplus
extern "C" {
#endif

...

#ifdef __cplusplus
}
#endif

But that is a lot of unnecessary typing if you have a few dozen headers in your project. Also the additional braces tend to confuse text editors, such as emacs, which do automatic source indentation based on brace characters. Far better, then, to declare them as macros in a common header file, and use the macros in your headers:

#ifdef __cplusplus
# define BEGIN_C_DECLS extern "C" {
# define END_C_DECLS }
#else /* !__cplusplus */
# define BEGIN_C_DECLS
# define END_C_DECLS
#endif /* __cplusplus */

I have seen several projects that name such macros with a leading underscore -- `_BEGIN_C_DECLS'. Any symbol with a leading underscore is reserved for use by the compiler implementation, so you shouldn't name any symbols of your own in this way. By way of example, I recently ported the Small(8) language compiler to Unix, and almost all of the work was writing a Perl script to rename huge numbers of symbols in the compiler's reserved namespace to something more sensible so that GCC could even parse the sources. Small was originally developed on Windows, and the author had used a lot of symbols with a leading underscore. Although his symbol names didn't clash with his own compiler, in some cases they were the same as symbols used by GCC.
http://sourceware.org/autobook/autobook/autobook_48.html
WiPy 2.0: Using boot.py to go into Station mode

Has anyone managed to create a boot.py that puts a WiPy 2 into Station mode, similar to section 3.2 here? Thanks, Jim

@abilio I can confirm that my boot.py survives rebooting and that I'm running the most recent version of the firmware. Indeed, it appears that my boot.py is running, and that I had a syntax error on the line:

server = Server(login=('user', 'password'), timeout=60)

where I had specified a new user/pwd pair. I had thought that my boot.py wasn't running because the device came up with a server running with uid, pwd = 'micro', 'python', the default. Strange.

@pwest, if you connect to the board, you can do:

import os
os.uname()

There you'll find the release number. Currently, the latest version is 0.9.2.b2.

@abilio I'm nearly certain that I've verified the file contents survive the reboot, but I'll re-verify tonight. I certainly attempted to update the firmware to the latest version (first attempt Linux, fail; second attempt Windows 7, success), but HOW DO I VERIFY FIRMWARE VERSION, and what is the most recent? Thanks, Phil

@pwest, can you confirm the file content is still there after reboot? Did you upgrade your firmware to the last version?

I've been trying to get this to work, and it seems that no matter what changes I make to boot.py, the system boots into AP mode and appears with its telnet/FTP server running at 192.168.4.1. I'm editing boot.py locally and then FTPing it up to the /flash directory. Under the FTP application, I can view the file and verify that my new file has been uploaded, but when I reboot, it's always back in AP mode. Could it be the case that I'm running in one of the factory recovery modes instead of running the intended user code? Thanks.

That looks like what I was doing, in essence, so I'll try again! I'm using firmware updated this morning.

- AgriKinetics

We had the problem with wlan.ifconfig failing until the latest firmware was installed correctly.
Our quick and dirty boot.py now works fine thus:

# boot.py -- run on boot-up
import os
import machine
from machine import UART

uart = UART(0, 115200)
os.dupterm(uart)

from network import WLAN
wlan = WLAN()  # get current object, without changing the mode
wlan.init(mode=WLAN.STA)
wlan.ifconfig(config=('ip address', 'mask', 'gateway', 'dns'))
wlan.connect('SSID', auth=(WLAN.WPA2, 'key'), timeout=5000)

Many thanks. I got mine going in Station mode by dropping the 'wlan.ifconfig' completely, and accepting the IP address that it comes up with for now, so I can work fairly reliably in Station mode. By the way, I also got it to survive a soft reset (Ctrl+D) by changing the line that read:

if machine.reset_cause() != machine.SOFT_RESET:

in my boot.py for the WiPy 1.0 to simply:

if wlan.mode() != wlan.STA:

for the WiPy 2.0. As you say, there seem to be some differences in the calls, and I'm pretty sure the online docs aren't quite correct. And where on earth is 'machine.ADC' or its new equivalent? (And how are you doing your lovely code formatting please? I'm not used to this forum yet!) Jim

I got mine working by dropping that test. Instead of a fixed IP I used dynamic and gave it a fixed address from my DHCP server, like so: wlan.ifconfig(config='dhcp'). I also thought that wlan.init may have changed; I had to include the mode= as part of the function call. My full boot.py looks thusly:

import os
import machine
from network import WLAN

wlan = WLAN()
wlan.init(mode=WLAN.STA)
wlan.ifconfig(config='dhcp')
if not wlan.isconnected():
    wlan.connect('xxxx', auth=(WLAN.WPA2, 'xxxx'), timeout=5000)
    while not wlan.isconnected():
        machine.idle()

from network import Server
server = Server()
server.deinit()
server.init(login=('xxxx', 'xxxx'), timeout=6000)

I find it doesn't work if I attempt to set the IP address with something like:

wlan.ifconfig(config=(IP, SUBNET, GATEWAY, DNS_SERVER))

Also, is there a workaround for the non-availability of 'machine.SOFT_RESET'?
Thanks, Jim
https://forum.pycom.io/topic/75/wipy-2-0-using-boot-py-to-go-into-station-mode/?page=1
import "github.com/shurcooL/go/gddo"

Package gddo is a simple client library for accessing the godoc.org API. It provides a single utility to fetch the importers of a Go package.

type Client struct {
	// UserAgent is used for outbound requests to the godoc.org API, if set to a non-empty value.
	UserAgent string
}

Client manages communication with the godoc.org API.

GetImporters fetches the importers of the Go package with the specified importPath via the godoc.org API.

Importers contains the list of Go packages that import a given Go package.

type Package struct {
	Path     string // Import path of the package.
	Synopsis string // Synopsis of the package.
}

Package represents a Go package.

Package gddo imports 3 packages (graph) and is imported by 7 packages. Updated 2018-11-18.
https://godoc.org/github.com/shurcooL/go/gddo
...just downloading apps and plugging them in. That's one of the many reasons we use Django here at Caktus: we can build useful web sites for our clients more quickly by not having to re-invent the same building blocks all the time, and by focusing on the unique value-add for each client. This is also one of the attractions of building and releasing open-source Django apps: they're easy for other people to use, and having other people use your code is very satisfying. But Django apps don't become easily pluggable automatically, and it's easy to do things in a way that makes it much harder for other sites to use an app. A book could be written about this topic. This post just gives examples of some areas that can cause problems, and might stimulate some thought when designing your app.

Not everything needs to be an app

Does your package have to be an app at all? If a package can provide useful features without being added to INSTALLED_APPS, that's the way to go. There are some things that Django only looks for in apps, like models and custom template tags. If those are needed, you'll have to make it an app.

Features of highly pluggable apps

- Can be installed anywhere on the Python path, ideally just using "pip install package-name".
- Does not require the site to have any explicit knowledge of what directory the app ended up in. (It's not necessary to add the app's install directory to a setting or anything like that.)
- Installing and configuring the app in a site does not break any existing function of the site.
- The app can be upgraded by just installing the newer version, and possibly running migrations. (The site might have to make changes to take advantage of new features, of course, but an ideal app adds features without changing the behavior of existing features, apart from bug fixes.)
- As a corollary to some of the previous features, installing or upgrading the app doesn't require copying files or sections of code from the app or its documentation into the site.
- Can be customized without forking the code.

Assumptions that can be made about the site

It's best if the app can avoid assumptions about how the site does things. If it does matter, assume the site does things in the most straightforward, default way. Those who have customized their site behavior away from how Django does things out of the box presumably have the know-how to cope, or have accepted that they won't be able to use some pluggable apps without pain. For example, if the app provides static files, assume the site uses the Django staticfiles app, and provide the static files so that simply installing the app makes them available through that app.

Templates

Template tags are a good way to provide enhanced features that a site can use in its templates. If a template tag just provides some data, and the site can embed it, lay it out, and style it however it wants, that makes it very easy to use. If a template tag really needs to return its data with markup to be useful, then consider using a small template to format the data, and documenting it. The site can provide an overriding template to change the formatting without having to change your app. If your app serves whole pages that need templates, then provide basic templates that inherit from 'base.html' and put the content in a 'content' block. Most sites have a base.html template that provides the basic page framework and styling around a content block anyway, or can easily add one. And a site can always copy and override the app's example templates a lot more easily than writing templates from scratch.

Models

When designing the models for your app, keep in mind that you can use Generic Foreign Keys to link to any model. The comments app that used to come with Django provides a good example.
Provide migrations for your models, and make sure that as your app evolves, the migrations still work to get from your initial state to your latest release. That helps make upgrading your app as painless as possible.

Settings

If possible, any settings you define for your app should not be required, and should have reasonable defaults. If a setting is required and cannot have any reasonable default, then check for it and give a useful error message if the user has forgotten to define it.

Since Django's settings occupy a single namespace, it's a good idea for any new settings required by your app to start with the app name. APPNAME_TIMEOUT is much less likely to conflict with some other app's setting than TIMEOUT.

Some apps reduce the number of names they add to the Django settings by using a single setting whose value is a dictionary holding all the more specific settings. E.g.:

APPNAME_SETTINGS = {
    'TIMEOUT': 29,
    'PATH': '/foo/bar',
}

I'm not sure if that's a good idea or not, but it's something to consider.

Dependencies

If your app has dependencies, they should be configured in setup.py so that when a site "pip installs" your app, its dependencies will be installed automatically. Configure minimum versions of the packages your app depends on, but try not to pin them to a specific version. If your app requires "foobar==1.0.3" and another app the site uses requires "foobar==1.0.4", that's going to cause a headache for the user. If your app requires "foobar>=1.0.3", there's no problem. If you want to limit the versions, you can use a specification like "foobar>=1.0.3,<2.0" so that if the foobar package releases a 2.0 version, your app won't try to use it until you've had a chance to test with it and release an update.

Further reading

The Django stable documentation has a tutorial on Reusable Apps. It focuses more on packaging than design, so it's a good complement to this post.
Daniel Greenfeld and Audrey Roy's book Two Scoops of Django: Best Practices for Django has a section What Makes a Good Django Package? with good hints on making a useful, reusable app. It's adapted from their DjangoCon 2011 talk, "Django Package Thunderdome: Is Your Package Worthy?", so you can get some of the ideas from those slides (but the book has a lot of other useful things in it; I recommend it as a whole).
https://www.caktusgroup.com/blog/2013/06/12/making-your-django-app-more-pluggable/
CC-MAIN-2020-24
refinedweb
1,166
69.52
Build a Nintendo emulator using a Raspberry Pi 2 Today we'll build an extremely portable retro gaming console that emulates some classic consoles including Nintendo Entertainment System (NES), SNES, Sega MegaDrive and others. We will be using a Raspberry Pi 2 and the RetroPie software. Requirements - Raspberry Pi 2 - Raspberry Pi 2 case - Micro SD Card (8Gb+ recommended) - USB controller (we’re using the excellent NES30 controller) - USB keyboard - FTP client - Ethernet cable (local network access only) - HDMI cable and TV/Monitor Step-by-Step Guide (OS X) Getting the software Start by downloading the latest build of RetroPie from PetRockBlock.com. We'll save it to a ~/Downloads/RetroPie directory on our MacBook Pro. The latest stable release is v2.6 but we’ll be installing RetroPie 3.0 Beta 2. Make sure that you download the correct build depending on whether you're using a Raspberry Pi 1 or 2. The Raspberry Pi 2 is highly recommended for the extra CPU power which comes in handy to get emulation without lag and is a requirement for some of the more demanding consoles such as N64. The RetroPie distributable is usually compressed so unpack the archive to a .img file after the download is complete. Installing RetroPie If you aren't keen to use the command-line on OS X or perhaps use Windows instead there are some programs that can help with the progress. For Windows use Win32 Disk Imager Insert the Micro SD Card into your Mac and open up your Terminal program (I use iTerm2 but OS X comes with a built-in Terminal). We need to start off by finding the device node of the Micro SD Card. Use a built-in command-line utility called diskutil to get a list of all current disks attached to your machine. From the open Terminal window run: diskutil list You’re looking for the entry that matches your Micro SD Card. In the screenshot we can see our SD Card is at /dev/disk2. Make note of the device node and be very sure you have the correct one. 
Now that we have the device node, we need to unmount the SD Card by running: diskutil unmountdisk disk2 Replace disk2 above with your own device node leaving off the /dev/ prefix. Still in the terminal, navigate to the directory to which you previously saved the RetroPie img file. cd ~/Downloads/RetroPie Next we’ll use a command to load the RetroPie image onto the Micro SD Card. sudo dd if=retropie-v3.0beta2-rpi2.img of=/dev/rdisk2 bs=1m - if = Input File - of = Output File - bs = Block Size - Make sure to replace retropie-v3.0beta-rpi2.img with the actual name of the img file you previously uncompressed. Note that we use /dev/rdisk2 instead of /dev/disk2. Specifying rdisk writes directly to the disk and skips the write to buffer which speeds up the entire copying process. It’s important you specify the correct output location or you may overwrite data on your primary hard drive and you really don't want to make that mistake! When you hit Enter the copy process will start running but it won’t show any output to the terminal window. It can take anywhere between 5 and 20 minutes to complete so be patient. If you’re concerned the process is taking too long, you can hit Ctrl-T to get the current latest transfer data output to screen. You can do this as often as you want until the image copy is complete. Once it’s complete verify the install by running diskutil list again. Under /dev/disk2 you should now see the installation of a boot partition as well as a Linux partition sized at approximately 3.5Gb. Don't be too alarmed at the 3.5Gb size as by default it won’t have used the entirety of the SD Card’s available space - this is something we’ll correct once we’ve booted up the Raspberry Pi. Setup the Raspberry Pi Hardware We are now ready to setup the Raspberry Pi itself. If you haven't already, install the Raspberry Pi 2 into a case and plug in the keyboard, USB Controller, network cable and Micro SD Card. Plug it into your TV/monitor via HDMI and turn it on. 
You will be greeted with a Welcome screen and it should show ==1 Gamepad Detected==. At this point, it will ask you to “Hold a button on your device to configure it”. This process sets up the controller to allow you to navigate the menus of the RetroPie UI. We'll still need to setup the controller to be recognised within the emulators themselves (but more on that later). First-time configuration Once the system has booted and you're inside the GUI, select ==RetroPie== from the main menu (you may need to hit right on the controller a few times). Select ==Raspberry Pi Configuration Tool Raspi-Config== from the list of options. At this point it will boot the Raspberry Pi Config utility. First, select ==Overclock== - you will need to use your keyboard at this config menu. You will see a message that overclocking may reduce the life of your Raspberry Pi. You may ignore this as we're just ensuring we've selected the correct high clock speed to take advantage of all the new and more powerful Raspberry Pi 2 offers. On the 'Choose overclock preset' screen select ==Pi2== and press Enter. You will now be back at the main Raspberry Pi Config menu Second, select ==Expand Filesystem== to kick off the process to use all of the space on your Micro SD Card. By default it will only use the first 4Gb of space. We're using a 32Gb card so we'll definitely want to expand the filesystem to take full advantage of all of that space. Once the expansion has compelted, select ==Finish== from the bottom of the Raspberry Pi Config screen. You will be asked if you wish to Reboot - select Yes. Setup the USB Controllers Earlier when you booted up the RetroPie for the first time it detected a controller and asked you to setup the inputs. This setup was to enable the controller for navigating the RetroPie's UI. We will now go through the process of ensuring our controllers are properly recognised for use within the Emulators themselves. 
We have a NES30 Controller which is styled like the old NES controller but has the same number of buttons as the SNES controller (including two shoulder buttons). As before, select ==RetroPie== from the UI after booting up but this time select ==Configure RetroArch Keyboard/Joystick==. You will need the keyboard again at this point so use to to select ==Configure controller for use with RetroArch==. This will boot a setup script asking you to press the corresponding button shown on the screen. You have to be quite quick as the delay to press a button isn't very long. If you make a mistake you can just re-run the setup script. Note: Your controller might not have all of the inputs - for example our NES30 does not have any analog sticks. You can safely ignore and wait until the script reaches completion. Once you have completed the setup for the connected controller you will find yourelf back at the RetroPie UI. Repeat the above process for any new controller you may want to use on the RetroPie in the future. We used a NES30 controller since our interest was in NES and SNES games but you can plug a PS4 controller into the console and use that too if you plan to play games requiring analog sticks. Transfer some ROMs Time to transfer some games across! First off get the IP address of your RetroPie by loading up Terminal and pinging it with the following command: ping retropie Load your favourite FTP client (I use Transmit but feel free to use any others such as CyberDuck). Connect via SFTP to the Raspberry Pi’s IP address using the username pi and password raspberry. This will connect you to the /home/RetroPie directory. All ROMs go into the /home/RetroPie/roms directory under the relevant emulator’s directory. In earlier versions, RetroPie sometimes has multiple emulators for the same device but the ROMs directory has been tidied up in 3.x We'll be uploading the excellent Super Mario Bros to the /home/RetroPie/roms/nes directory to get ready to stomp some Goombas. 
Once you've completed and uploaded all of your games onto the Raspberry Pi, you will need to reboot. Reboot the RetroPie by pressing Start on your controller and selecting Quit -> Restart System. When the RetroPie starts back up you should find an entry in the menu for all of the consoles for which you have uploaded ROMs. Best NES Settings We have found the following to be the best settings when running NES games to reduce input lag as much as possible. Now open a Terminal window and ssh into your RetroPie by running: ssh pi@retropie Once connected you will need to edit the NES retroarch.cfg file located at /opt/retropie/configs/nes/retroarch.cfg Replace the contents of that file with the following configuration: #include "/opt/retropie/configs/all/retroarch.cfg" # All settings made here will override the global settings for the current emulator core input_remapping_directory = /opt/retropie/configs/nes/ video_shader = /opt/retropie/emulators/retroarch/shader/phosphor.glslp video_shader_enable = false video_smooth = false video_vsync = true video_hard_sync = false video_hard_sync_frames = 0 # video_frame_delay = 0 # video_black_frame_insertion = false video_threaded = false video_scale_integer = false video_crop_overscan = false If you're keen to experiment you can find the full list of configration options in /opt/retropie/configs/all/retroarch.cfg Lastly (and you'll need the USB keyboard plugged in for this bit), switch back to your RetroPie itself and start up your favourite NES title (Super Mario, right?). Press x before the game boots to get to the emulator config menu. Inside here we found the best settings for NES emulation was to use ==lr-fceumm== as the emulator using a video mode of ==CEA-1== (640x480 without any enhancements). The good news is that all of the above will only need to be done once and will be active permanently. Do you have better configuration options to get the emulation as close to the real NES as possible? 
Please leave recommendations in the comments below. Some handy tips - IMPORTANT TIP: Make sure you are using the correct power output on the Micro USB (5V/2.1A) otherwise the Raspberry Pi 2 won't get enough to take advantage of all that extra power. When I accidentally used a 5V/1A Star Fox on SNES ran at horrendous speeds but using the correct power was 100% speed emulation. - As already stated, press xon a plugged in keyboard when starting a game to get to a special menu. You can use this to tweak the resolution and visual effects to optimise the emulator's settings to get the best performance. - When inside a game, press Start + Selectto quit back to the main RetroPie menu. - To safely turn off your RetroPie, press starton the main RetroPie menu to Quit -> Shutdown System when you're done playing. The final NES emulator You should now have a fully working mini console to play all the games you loved as a kid. We've only loaded up games we already own the cartridges to with Super Mario Bros and Zelda taking the top Must Play Again spot. Unplug that clunky USB keyboard as only the controller is required from this point onwards. Happy Gaming!
https://www.39digits.com/build-a-nintendo-emulator-using-a-raspberry-pi-2/
CC-MAIN-2021-25
refinedweb
1,954
69.72
Username: lost p/w? | | subscribe | search | Music News Forum Community Services Subscribe Lyrics Chat Games headlines submit search A Discmaker's E-Mail Ad on "Why indies make CDs" Posted by Mike (Shmoo) in on November 7, 2008 at 12:40 AM Source/Link (I think) can be found at: 9 ways releasing a CD can help your career in the current music industry environment. By Andre Calilhanna | November 2008 Ron Rip DigipakThe news surrounding the music industry these days might make you wonder if anyone is buying CDs any more. Major-label CD sales are down again. Downloads are up. So the question on the table is: As an independent artist, do you really need to make CDs these days? There are many factors to consider, and what is true for major-label artists does not often translate to independents. As a matter of fact, amidst the last few years of continuously declining major-label CD sales, Disc Makers has seen continued growth in new CD jobs ordered, and indie-only CD Baby has seen consistent increases in CD sales. It speaks to the fact that one revenue model does not fit all markets, and the ingredients for success for a major-label artist vs. an independent are simply not the same. I know what many of you are thinking. “Of course Disc Makers is going to tell me I need to keep making CDs!” Yes, we are a CD manufacturing company, and that gives us a particular stake in the subject. It also gives us a unique perspective, and a front-line view of what the demands of the market and our client base are. In addition to that, we’ve leveraged some of our great connections within the music industry and reached out to gain insight and commentary on the question of the viability of the CD format. Here are a few things to consider. 1. CDs legitimize you. Imagine you need the services of an attorney, and you meet someone who claims to be one. You ask for his card, and he tells you, “Oh, I never got around to printing business cards. Got a pen? 
Got a napkin?” Could you take this person seriously? Just as a business card is the most basic element to legitimize a business person, a CD is the most basic way to legitimize you as an artist. What major music artist doesn’t have a CD? Physical product, and in this case CDs, demonstrate that you as an artist are committed to your career. Giving a music business professional a CD is the fastest way to get them to listen to you and take you seriously. Don’t make them work to hear your music! 2. CDs are an integral part of the indie revenue stream. Getting paid good money by a club or promoter to play a show is a difficult prospect. So if you’re on the road, even for a weekend jaunt, you need to have something tangible to sell to help increase your take at a gig. Download cards can and should be sold, but your new fan can't go out to their car and stick a download card in their player to give it a listen in between sets. Having other things to sell – merch, posters, and stickers – is helpful too, but your CD is the main course on that meal ticket. • 70% of overall music industry revenues come from CD sales. You don’t want to cut off that much revenue potential. • You make more money selling CDs at gigs than selling downloads on somewhere like iTunes. A CD costs you between $.90 and $1.50 to manufacture. Sold at $15, that’s over $13 profit per unit. iTunes takes $2.99 per album, which leaves you with $7 per sale (assuming you are able to move any product via iTunes and don’t have to pay out any more of your money to a third vendor). • Major-label CD and DVD sales in 2007 added up to $15.7 billion. That’s an encouraging number, even if it is in decline. Download sales were at $2.9 billion. Download sales are increasing for sure, but they still pale in comparison overall. • CD Baby has seen an increase of 6% in sales since last year. Since 1997, CD Baby has sold over 5 million CDs. That’s easily over 400,000 per year on average, and growing. 
People are still buying independent releases. Any independent artist who tours knows that the majority of CDs they sell are sold from the stage. Think of it as a fan building and fan nurturing tool. It's one of those moments where a fan, or soon-to-be-a-fan, craves immediate gratification and a remembrance of the event. CDs are the best format for live sales. It’s an instant data transfer – you just hand over the disc. And even more than this being an “impulse” buy, it’s truly a matter of you creating a demand and being there to supply the goods immediately. As a matter of fact, you should consider the act of pitching your merch and CDs from the stage or your merch table as an invitation for your audience and fans to have a direct and personal interaction with you. There is an art to the pitch, and those who take the time to create an interesting approach sell more CDs and gather more mailing list names for future promotions. If your invitation to meet you at the merchandise table includes a drawing for a free CD, then your CD sales could go up 25%-50% and you'll collect nearly 100% of your audience's contact information. That's easy, low cost marketing that will pay off for years to come. Want to really personalize the experience? You can sign a CD. Try that with a download. 3. No connectivity required. In many ways, CDs are easier than downloads. Take them home, pop ‘em in your car’s CD player, a boom box at a party… CD players are everywhere. There’s no web connectivity necessary, no searching around a website – just plug and play. Plus, you can add bonus material, videos, and enhancements to make your CD an all-inclusive multimedia experience. If one thing is clear, the landscape in the music industry is changing. This is nothing new. We’ve been in business since 1946, and we’ve seen plenty of formats rise and fall. 
Digital downloads and transfers are clearly a model that you ought to pursue to their fullest extent, for both revenue and promotion, but CDs still represent the huge majority of revenue in the music industry. The fact is, some customers just don’t do downloads. You’ll lose a sale if you don’t have a CD for them. Even your grandmother knows how to use a CD. 4. Permanence (no crashing computers and lost data). Your music is virtually permanent on a CD. Hard drives crash and MP3 players die, it’s a sad fact of life. But if you have a disc with the content on it, your message or album is not lost. And of course, if you own a CD, you can easily rip MP3s for storage or use with your favorite media player and still have the disc as a backup and for use with your stereo, car, etc. Register here for The Complete Recording Guide5. A CD tells a story. The artwork that comprises your CD package allows you to further illustrate your album’s artistic statement. A great looking CD and your specific choice of packaging say something about you and can help you further connect with your listening audience. Plus, listeners experience the track sequence, pacing, and breadth of your work exactly as you intended. Singles certainly have their place and can spark interest in your act, but albums are the only way for you to create a thematic and sonic statement of where you as an artist are at the time the disc is recorded and released. Someone browsing through iTunes may skim right over your band’s name or song titles. But that same person, given the opportunity, might pick up your CD package based on the appeal of the artwork alone. CDs are a one-stop, self-contained collection that deliver music, graphics, artwork, lyrics, photos, and credits — all in a neat, compact package. 
Not to mention the fact that after spending months (or years) composing, refining, rehearsing, recording, mixing, and mastering, there's a real sense of accomplishment in having something to physically embody the sweat, money, and tears that went into the work you've created. Digital files are a great way to deliver tunes, but nothing beats having a CD to represent the completion of your artistic efforts.

6. Shopping your music? CDs are the way to go. CDs remain the preferred format if you're shopping your music for film, TV, multimedia, gaming, or licensing opportunities. An overwhelming number of music editors and journalists still prefer a physical CD and press kit when being pitched an emerging or even an established artist. Radio stations still utilize CDs in their selection of music for airplay. If you choose not to press CDs, your chances for success and exposure on the radio are virtually non-existent. Even a site like Pandora.com, an online music streaming service, requires music to be submitted on CD. And while many artists now feel no need to court major labels to achieve success, if you do want a label's attention, CD sales are the most important metric they'll consider. If you prove you can move product, you've got a chance at impressing a label.

7. CDs sound better than MP3s. CDs sound better than an MP3 download, because they're not compressed like an MP3 file.

8. It makes a swell gift, too. Want to reward members of your fan club and street team? There's no better way than giving them a limited-edition CD with music recorded and packaged especially for them.

9. What's true for majors isn't true for indies. The majors are selling fewer CDs, it's true. Retail music CD sales are down anywhere from 9-14%. But you are not a major-label artist (at least not yet). Remember that the model for each is significantly different. To really sell downloads in significant quantities, you need people actively seeking your music to buy.
This requires a large and established fan base, and/or a popular hit single, and/or a tremendous amount of money spent on promotion, and/or a significant buzz on the web. As an indie artist, you may not have any of these things yet, you're still building your name and awareness about yourself and your music. Chances are you're giving away songs through digital distribution to promote yourself. As an indie, you rely on hand-to-hand music sales, personal contact at gigs, something tangible you can hand to someone as soon as you've sparked an interest in your act. Nothing does that better than a CD.

Have comments? Additional reasons you think CDs are important that weren't mentioned here? Email us at FastForward@DiscMakers.com and we might add your input to the revised edition of this article. Special thanks to Gene Foley (Foley Entertainment, Inc.), David Hooper (MusicMarketing.com), Martin Folkman (Music Resource Group), Jeri Goldstein (PerformingBiz.com), and everyone who contributed ideas.

===========

Folks, ideally DiscMakers should NOT have one of their ads appearing as an article/post here on DMusic/Boycott-Riaa's servers (as I am just now posting at this moment.) The THING I am trying to get across for each of you who are reading this is to get a "feel/idea" of how "things are affecting" the commercial music market AND ARE BEING reflected in their communications at-large with the public. lol, You want something along the lines of what "I am trying to get at" even STRONGER? George (as always) is a step or 2 ahead of me in his "feel" for what is really going on in the world at-large.

[small] yes leflaw, i know this shouldn't be here without a paycheck to you from DiscMakers, but they ARE one of the VERY FEW music service companies who make a living (as far as I have seen/experienced/heard) only by HELPING (in exchange for $) the indie music community instead of exploiting them. DiscMakers has a fairly good track record and now own/are involved with CdBaby.
cut me a little slack. (lol) Andrea and I have just moved to Chattanooga and are training at new day-jobs taking up a WHOLE HELL of a lot of our time. What little free time I have had lately is little. I certainly WANT to write/find better articles, but give me time to settle in. [/small]

gdZiemann
Date: November 7, 2008 @ 4:09 PM

I have to argue with almost every point in this article.

1. CDs legitimize you. That's what I thought 10 years ago, but this is simply bullshit. If it doesn't come from a major label, it is not considered legitimate.

2. CDs are an integral part of the indie revenue stream. How many indies are making money selling CDs? Not me, that's for sure.

• 70% of overall music industry revenues come from CD sales. And they keep 85%. The majority of the artists' revenues come from touring.

• You make more money selling CDs at gigs than selling downloads on somewhere like iTunes. A CD costs you between $.90 and $1.50 to manufacture. An mp3 costs nothing to manufacture. And you don't need to buy 500 or 1000 ahead of time.

3. No connectivity required. Uh, I'm staring at a pile of CDs. I click on them and nothing happens. They appear to require some sort of external device.

4. Permanence. Unless you scratch it. Or perhaps this is a reference to the boxes of unsold CDs in my closet.

5. A CD tells a story. Printing it costs a fortune. The insert is the most expensive component of the package and the same information can be conveyed for free -- and big enough to read -- on your web site.

6. Shopping your music?... If you choose not to press CDs, your chances for success and exposure on the radio are virtually non-existent. This does not change because you print CDs.

7. CDs sound better than MP3s. Many people say LPs sound better than CDs. That doesn't mean anyone is listening to them.

8. It makes a swell gift, too. Yeah. Your closest friends don't expect to pay for a copy anyway.

9. What's true for majors isn't true for indies.
This is really the only part I wanted to actually argue about. The rest was knee-jerk sarcasm.

In the last 8 years, the majors' sales have dropped by more than 50 percent. And yet, they still have 85% of the market. If "what's true for majors isn't true for indies" why hasn't the indies' market share improved?

Back under item 2...

• Major-label CD and DVD sales in 2007 added up to $15.7 billion. According to the RIAA, in 2007, the total value of all physical products shipped (not necessarily sold) was slightly less than $7.5 billion. The greatest value the RIAA has ever claimed to have shipped was $14.6 billion -- in 1999. The $15.7 billion figure has no basis in reality. As this is an advertisement, this alone is enough to delete it and move on.

• Since 1997, CD Baby has sold over 5 million CDs. That's easily over 400,000 per year on average, and growing. People are still buying independent releases.

Take a good look at those numbers. AC/DC sold a million copies of their new release in the past 2 weeks, and you can only get it at Walmart or Sam's Club. In 2 weeks, one band has outsold CDBaby's entire sales for 2 and a half years. 400,000 units a year sounds good, until you divide it out among the tens of thousands of acts and realize that if you're selling 2 copies a month, that's better than average.

If you want CDs, do enough live shows to move them, or can find a retail location that will accept them, by all means, go ahead and have 1,000 pressed. But don't do it based on this ad.

gdZiemann
Date: November 8, 2008 @ 1:07 PM

Oh yeah, one other thing. If you use CD-Rs, eBay won't let anyone resell them.

independentm...
Date: November 9, 2008 @ 12:40 AM

"Many people say LPs sound better than CDs. That doesn't mean anyone is listening to them."

Change "anyone" to "anyone besides the true audiophiles and true fans" are listening to (buying) them...

...and we are in 100% agreement.
George, the MUSIC ITSELF isn't what is being sold, it is the "keepsake" DISC that is the product. You and me have always both agreed that the MUSIC itself is FREE... .mp3's and transferable files on the interweb are JUST ads!!! I say, "vinyl BEATS/TRUMPS a CD" (when it REALLY matters) as far as if you are REALLY gonna give your fans a physical good in return for their dollars.

-------------

No! Indies should NOT "give away CD's" at their shows. (Sure, they SHOULD put the music online for free!) But, the DISC (CD or vinyl or WHATEVER) is the "keepsake".

---------

Oh, nevermind me. I'm typing too fast and I only had 30 mins online tonight.

gdZiemann
Date: November 9, 2008 @ 2:43 PM

No! Indies should NOT "give away CD's" at their shows.

I didn't say that. The "give them to your friends for free" part was under "use them as gifts." I said if you have enough people at your shows, by all means, order CDs.

George, the MUSIC ITSELF isn't what is being sold, it is the "keepsake" DISC that is the product.

I've got every Led Zeppelin album ever made, except for the "Mothership" collection. I've never seen them live. They're not keepsakes. I just wanted the music. I've seen Emerson, Lake and Palmer, Pink Floyd and Yes several times each. Looking at their CD covers does not relive the experience. Playing the music does, though. My wife used to make Jerry Riopelle's shirts back in the 70s. Her Jerry Riopelle albums are keepsakes, but it really doesn't have as much to do with the music as much as her connection to Riopelle.

A CD is packaging. iTunes has sold 5 BILLION songs without it. Young people do not have the same reverence for it as previous generations. And rightly so. LPs used to come wrapped in works of art. CDs come with a 4-inch square of paper.

autodidact
Date: November 10, 2008 @ 12:39 AM

A lot of audio books are now on CD-Rs, so eBay does sell them. I am not totally sure about this, but I think some low-production CDs on the Deutsche Grammophon label are also CD-Rs.
These are reissues from Europe. Sorry. Wish I knew details. I guess what I'm saying is that they're hypocrites.

leflaw
Date: November 10, 2008 @ 10:55 AM

Cd's are like Business cards. Isn't a business card the stupidest thing you could imagine, yet everyone still uses them after 200 years. Like toilet paper.

gdZiemann
Date: November 10, 2008 @ 10:50 PM

Cd's are like Business cards.

You can get 500 CDs printed for $15? I'm really not against CDs. But I, personally, am not experiencing any great demand for them. The audience is happy with mp3s. My kid doesn't ask for CDs any more. She asks to buy songs. I'm just going with the flow on this.

shadeswv
Date: November 13, 2008 @ 10:56 PM

As a collector, the package is as important as the music itself (for me anyway).

But that same person, given the opportunity, might pick up your CD package based on the appeal of the artwork alone.

Believe it or not, I actually did that one time. Back when Blockbuster had a music division, I routinely visited one of their stores, always looking through their large import section. I saw a few copies of a CD where the front cover depicted four guys in colored squares and after looking at it and the song titles, I bought it and was not sorry I did. They (Michael Learns To Rock, aka MLTR) have remained my favorite band from Denmark. Here is that album, Colours (1993), which appears to have been curiously redone by EMI when they put out Blur's Best Of (2000) collection. Perhaps there are other similar covers by other bands, maybe even prior to 1993. Anyway, it got my attention for some reason. BTW, MLTR is now free of EMI and have control of their own material.

LPs used to come wrapped in works of art. CDs come with a 4-inch square of paper.

That's one of the reasons I collect from Japan, where CDs are also wrapped in works of art. When you get the reissued cardboard sleeves, they replicate the original textures, dyes and enclosures.
It is also why places like EIL.com, CDJapan don't appear to be going out of business anytime soon. One thing I found interesting on EIL's site is the way they described the now defunct 3" CD single from Japan:

3" CD SINGLE Snap-pack CD / Tanzaku sleeve

The tanzaku is a long strip of paper. The compact disc is 3 inches, or 8 centimetres in certain areas, mostly in diameter, and mostly in Japan. We don't know precisely when the first one arrived. It could have been around 1984 [when big brother wasn't watching]. The last one made its royal appearance in 2004 as Queen's I Was Born To Love You. Twenty years is very young indeed for a mountain but quite old for a compact disc. That's why you don't see many of them around anymore. We call them 'snap-packs' [who knows what they call us] but in their native Japan they're called tanzaku. They are rather special and rather hard to find. Their unique design means you can choose to keep them intact, or 'snap' the pack and display one like a picture, if you get the picture. You can even play the CD when your downloads have crashed. Collecting has never been such fun...

One of my favorite groups from the 70s/80s is the Alan Parsons Project. Some people may know the song "Sirius" as the theme song to the Chicago Bulls. However, if you have the Eye in the Sky album, you also know that that song does not completely end but fades out as the next song, "Eye In The Sky" begins. If you just purchase one of these songs via iTunes, you don't get the same effect. Granted, it may not be a big deal to most people, but certain albums are meant to tell a story and be heard as a whole.

Are CDs going out of style? Maybe. Still, the Japanese market keeps plugging away at "new and improved" versions of the format.
Now you have the Universal/JVC venture for the SHM-CD (Super High Material CD), Sony's Blu-Spec Disc (a Blu-Ray hybrid), and now it appears Victor has come out with the HQCD, for which I have not found info on yet, but I assume means High Quality CD. They may all be hype with no real difference detected by the average listener, but there you go.

As for indie artists, they are certainly free to offer their music to fans in any way they see fit. Music is music and if fans are interested in an artist, they will obtain it. I may be a small segment of the market, but that's okay with me. This is just my 2 cents.

RaidHHI
Date: November 21, 2008 @ 10:08 PM

My 2 cents... First, I like an actual cd, even if it is a cd-r, as long as it's cloned and/or mastered from the main disc the others would be made from. Second, if I can acquire a physical copy of your music, I will go that route before I ever consider paying to "download" it. If I like your music, or I want to hear it, offer up a cd for a reasonable price; 10-12 dollars or what have you, and I'll be happy to paypal it right to ya for the cd. MP3s are data after all, and we all know backups aren't always an everyday thing. So the cd is always nice to have in the event something "Terrible" happens.
Closed Bug 513032 Opened 12 years ago Closed 12 years ago

Remove empty conditionals in our makefiles

Categories: Firefox Build System :: General, defect
Tracking: (Not tracked)
People: (Reporter: benjamin, Assigned: benjamin)
References, Details, Attachments (1 file)

After removing REQUIRES there were a bunch of conditional blocks that were empty. So I wrote a script which detects and tries to remove empty makefile conditionals. It's not foolproof and required some manual intervention, so I'm going to post the patch and request review on it.

Attachment #397056 - Flags: review?(ted.mielczarek)

Comment on attachment 397056 [details] [diff] [review] Remove empty makefile conditionals, rev. 1

You've got a few more random pymake changes here. Did you mean to include those? (Also remember you landed a few random ones in the original REQUIRES cleanup.)

--- a/gfx/cairo/libpixman/src/Makefile.in
-ifdef HAVE_ARM_NEON
-# temporarily disabled to see if it fixes odd mobile build breakage
-#USE_ARM_NEON_GCC=1
-endif

Feels like we shouldn't just remove this, but I also hate having stuff commented out living on forever. Looks fine otherwise.

Attachment #397056 - Flags: review?(ted.mielczarek) → review+

Status: NEW → RESOLVED
Closed: 12 years ago
Resolution: --- → FIXED

Ben, do you have the script you used to purge the empty conditionals still? The only script I saw around in a bug is the "Remove REQUIRES" one.

No, I can't find it.

Product: Core → Firefox Build System
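The clean-up script itself was never recovered (see the last comment), so as a rough illustration only — not Mozilla's actual tool — here is how empty `ifdef`/`endif` blocks could be detected and stripped in Python:

```python
import re

# Match a conditional whose body contains only blank lines. This is a
# simplified sketch: it handles ifdef/ifndef/ifeq/ifneq without else
# branches, which covers the common case left behind after deleting
# variable assignments such as REQUIRES.
EMPTY_COND = re.compile(
    r'^[ \t]*if(?:def|ndef|eq|neq)\b[^\n]*\n'  # opening conditional
    r'(?:[ \t]*\n)*'                           # empty body
    r'[ \t]*endif[^\n]*\n',                    # matching endif
    re.MULTILINE,
)

def strip_empty_conditionals(makefile_text):
    """Remove empty conditional blocks, repeating until a fixed point
    is reached so blocks emptied by a previous pass are caught too."""
    prev = None
    while prev != makefile_text:
        prev = makefile_text
        makefile_text = EMPTY_COND.sub('', makefile_text)
    return makefile_text
```

Nested conditionals that become empty from the inside out are handled by the fixed-point loop; anything trickier (else branches, or conditionals guarding only comments, as in the libpixman snippet above) would need real Makefile parsing, which is presumably why the original script still "required some manual intervention".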
Generate sprites from SVG icons

© 2014 Thomas Khyn

Python package to generate sprites from SVG icons. This package takes its inspiration from Iconizr and grunt-iconizr by Joschi Kuphal. In Joschi's words, iconizr:

... takes a folder of SVG images and creates a CSS icon kit out of them. Depending on the client's capabilities, icons are served as SVG / PNG sprite or embedded data URIs. iconizr creates suitable CSS / Sass / LESS etc. resources and a JavaScript loader for easy integration into your HTML documents.

Not all the functionalities from the original Iconizr are implemented yet, but most of them are here. Works on Python 2.6 and 2.7.

As straightforward as it can be, using pip:

pip install pyconizr

Pyconizr will install all the required dependencies, except Cairo and librsvg and their python bindings, which are needed to generate PNG images. Please make sure they are installed in your environment if you want to use pyconizr's PNG functionalities. If you are on Windows, the quickest way to install them is to use either PyGTK (for python 2.6 and 2.7) or PyGI (for python 2.7).

From the command line:

pyconizr [options]

From python:

from pyconizr import iconize
iconize(option1=val1, option2=val2, ...)

This can be configured using the following options. All options should be prefixed by -- in the command line (if not using the shortcut), or should be provided as keyword arguments to the iconize function. All the options from scour are available, using the 'scour-' prefix. For example, 'strip-xml-prolog' becomes 'scour-strip-xml-prolog'. Defaults are the best possible optimisation parameters for sprite generation. There are 2 command-line options added for scour parameters:

scour-disable-comment-stripping: Pyconizr enables comment stripping by default. When using the iconize function, use enable_comment_stripping=False to disable this feature. From the command line you need to use --scour-disable-comment-stripping.

scour-verbose: Pyconizr runs scour in quiet mode by default.
If you need to see scour's non-error output, use quiet=False with the iconize function, or --scour-verbose from the command line.
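Note the naming convention at work here: on the scour CLI the option is `strip-xml-prolog`, as a pyconizr flag it becomes `--scour-strip-xml-prolog`, and as a Python keyword argument the dashes would have to become underscores. A small hypothetical helper — not part of pyconizr, just a sketch of that mapping — shows the kwargs-to-flags direction:

```python
def kwargs_to_flags(**options):
    """Translate Python keyword arguments into CLI-style flags.

    Underscores become dashes; a True value yields a bare flag,
    False or None values are dropped, and anything else becomes
    a --name=value pair.
    """
    flags = []
    for name, value in sorted(options.items()):
        flag = '--' + name.replace('_', '-')
        if value is True:
            flags.append(flag)
        elif value not in (False, None):
            flags.append('%s=%s' % (flag, value))
    return flags
```

For example, `kwargs_to_flags(scour_strip_xml_prolog=True)` yields `['--scour-strip-xml-prolog']`, matching the documented flag spelling.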
Siteframe

The Siteframe component provides a template for your app. It is meant to take up the whole page. It is comprised of a Header and Sidebar, both of which are optional. To include a Header and Sidebar, pass in headerProps and sidebarProps respectively. The sidebar is not meant to be scrollable. To make the content in the body scrollable, wrap it in a container that has overflow: auto (example below). To mark a sidebar link as active (highlighted with a green left border), set active: true on the link object given in primaryLinks or secondaryLinks.

Examples: With Sidebar and scrollable content; With Header and Sidebar; Header with custom content; Header only.

Props: Siteframe Props, Header Props, Sidebar Props.

Imports

Import React components (including CSS):

import {Siteframe} from 'pivotal-ui/react/siteframe';

Import CSS only:

import 'pivotal-ui/css/siteframe';
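As a rough sketch of how the documented props fit together — `headerProps`, `sidebarProps`, `primaryLinks`/`secondaryLinks` and `active` come from the description above, but the individual link fields are assumptions, so check the component source before relying on them:

```javascript
// Hypothetical configuration objects for a Siteframe. Only one link
// is marked active, which is the one that gets the green left border.
const headerProps = {
  logo: 'logo.png', // assumed field, for illustration only
};

const sidebarProps = {
  primaryLinks: [
    {text: 'Dashboard', href: '/dashboard', active: true},
    {text: 'Reports', href: '/reports'},
  ],
  secondaryLinks: [
    {text: 'Settings', href: '/settings'},
  ],
};

// In JSX this would be used roughly as:
//   <Siteframe headerProps={headerProps} sidebarProps={sidebarProps}>
//     <div style={{overflow: 'auto'}}>scrollable body content</div>
//   </Siteframe>

const activeLinks = sidebarProps.primaryLinks.filter(function (l) {
  return l.active;
});
```

The `overflow: 'auto'` wrapper in the JSX comment is what makes the body content scrollable, since the sidebar itself is not meant to scroll.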
Exercise 6 of chapter 7 is not so bad because the first two functions are given to us in the text. However, you do need to do some thinking as to how to reverse an array. There are a few ways to do this. One would be to implement an XOR type algorithm. Another would be to use reverse() from the std library, and then you can do it the way I have done it which is work through the array by starting on the ends. The way I have done it is one of the most common ways to solve this problem and is good to know as it is sometimes used as a skill assessment on programming interviews. See source below. Write a program that uses the following functions: Fill_array() takes as arguments the name of an array of double values and an array size. It prompts the user to enter double values to be entered in the array. It ceases taking input when the array is full or when the user enters non-numeric input, and it returns the actual number of entries. Show_array() takes as arguments the name of an array of double values and an array size and displays the contents of the array. Reverse_array() takes as arguments the name of an array of double values and an array size and reverses the order of the values stored in the array. The program should use these functions to fill an array, show the array, reverse the array, show the array, reverse all but the first and last elements of the array, and then show the array. 
#include <iostream> using namespace std; const int Max = 5; int fill_array(double ar[], int limit); void show_array(const double ar[], int n); void reverse_array(double ar[], int n); int main() { double properties[Max]; int size = fill_array(properties, Max); cout << endl; show_array(properties, size); cout << endl; reverse_array(properties, size); show_array(properties, size); cout << endl; reverse_array(properties + 1, size -2); show_array(properties, size); return 0; } int fill_array(double ar[], int limit) { double temp; int i; for(i = 0; i < limit; i++) { cout << "Enter value #" << (i + 1) << ": "; cin >> temp; if(!cin) { cin.clear(); while(cin.get() != '\n') continue; cout << "Bad input; input process terminated" << endl; break; } else if(temp < 0) break; ar[i] = temp; } return i; } void show_array(const double ar[], int n) { for (int i = 0; i < n; i++) { cout << "Property #" << (i + 1) << ": "; cout << ar[i] << endl; } } void reverse_array(double ar[], int n) { double temp; for(int i = 0; i < n/2; i++) { temp = ar[i]; ar[i] = ar[n - i - 1]; ar[n - i - 1] = temp; } } Note: for the sake of the solution. fill_array() could be stripped of the error handling code to make this easier to read.
https://rundata.wordpress.com/2013/08/28/c-primer-plus-chapter-7-exercise-6/
CC-MAIN-2017-26
refinedweb
458
61.09
This project is no longer under active development (as of Jan 2011) In recent years a lot of the large open source JavaScript libraries have come of age -- I can no longer justify the time it takes to develop and maintain a modern JavaScript library of comparable quality. It was good fun while it lasted, and educational. I intend to leave this project online as long as google permits. Feel free to plunder for code and ideas, a lot of good stuff in there. Jelly (JavaScript) provides a set of intuitively named utility functions and classes that you can build upon. It does not try to change JavaScript, but instead provides a more powerful and consistent toolset for working in the language. Jelly is designed around some core ideas: Read the QuickStart guide, for a short introduction on how the library works. Jelly has been tested to work in Internet Explorer 6/7/8, Firefox 3+, Opera 9+, Safari 3+ and Google Chrome Unobtrusiveness is a core goal of the Jelly library. There are no known namespace or functionality clashes with the main frameworks; jQuery, Mootools, YUI, Prototype js, ASP.NET Ajax etc. A growing number of plugins are now in the trunk, from date-pickers to drag and drop. Browse the source for latest updates. There is a small amount of documentation currently written inline; however, I plan to build on this over time. Any feedback, critiques or suggestions are welcome; send to pete (at) the-echoplex.net Unknown end tag for </td> Unknown end tag for </tr> Unknown end tag for </table>
http://code.google.com/p/jelly-javascript/
crawl-003
refinedweb
264
70.94
02 July 2009 13:11 [Source: ICIS news] LONDON (ICIS news)--Independent oil broker PVM Oil Futures Ltd suffered a loss of just under $10m due to unauthorised trading on 30 June, the company said in a statement on Thursday. The trades are believed to have caused a spike in ICE Brent futures in the early hours of trading in ?xml:namespace> The ICE Brent futures price on 30 June opened at $71.30/bbl and gained sharply to hit a high of $73.50/bbl, up $2.51/bbl from the previous close. By the time By midday in PVM said that due to a series of unauthorised trades, it held a substantial volume of futures contracts at the time, which were then closed. “When this was discovered, the positions were closed in an orderly fashion,” the company said. “PVM suffered a loss totalling a little under $10m.” PVM said it was conducting a full investigation into the unauthorised trading activity. PVM Oil Futures Ltd is one of the largest brokers on the Brent
http://www.icis.com/Articles/2009/07/02/9229705/brent-broker-pvm-oil-futures-suffers-10m-loss-on-rogue-trades.html
CC-MAIN-2015-11
refinedweb
175
73.07
Single Page App with Laravel and EmberJSBy Aleksander Koko REST App with Laravel and EmberJS - Build a New App with Laravel and EmberJS in Vagrant - Build a Database with Eloquent, Faker and Flysystem - Build REST Resources with Laravel - Single Page App with Laravel and EmberJS In this part, we will see how Ember works, how to use Ember Data and how to build something simple with it. Router, Route, Model, Template and Store are some of the concepts of Ember. I’m not going to explain every one of those, so if you feel stuck, use the documentation. As usual, you can download the code for this part here. Let’s code Note that while developing with Ember, it’s a good idea to download the Ember Inspector. They released Ember with a Chrome Extension and now that extension is also on Firefox. For this example, we are going to put every line of JS inside /public/static/app.js. In a real project, this is not a good idea. This simplifies our example but ask yourself – have you ever done some serious work with MVC architecture in just one big file? We saw how Laravel works: controllers are in one folder, each of them in one file, the configuration is in its own folder, the models too. I suggest you do the same thing with Ember when you dive into a proper project. The first thing you ever do when starting Ember is create the application. It is a global namespace for everything that you code with Ember. An Application can be created like this: App = Ember.Application.create(); I suggest activating a bit of debugging just by adding a line of code when you create the application. App = Ember.Application.create({ LOG_TRANSITIONS: true }); It doesn’t do much more than output your movement through the URLs and templates in the console. Also, we are going to use Ember Data which is a separate module of Ember and provides a nice integration with REST, translating everything from Store Object to request on the server. By default, Ember Data uses the Rest Adapter. 
You can also use the Fixture Adapter for testing and local development. Basically, Ember Data is a bridge between servers (Rest API) and local storage with the Store Object. As we saw earlier, our API uses a namespace. Ember’s Data comes with a Rest Adapter which accepts a namespace, a prefix like we saw on Laravel Route groups. Lets pass in our namespace as an argument. App.ApplicationAdapter = DS.RESTAdapter.extend({ namespace: 'api/v1' }); The adapter now requests all the data via example.com/api/v1/. Link the App Store with the Adapter and you are ready to start developing. App.Store = DS.Store.extend({ adapter: 'App.ApplicationAdapter' }); One of the main concepts of Ember is URL. Everything is built around that idea. The Router keeps the URLs and the templates synchronized. Inside the Router, you can define a resource and map that resource to a specific URL. In this example, we will work only with the photo resource and the user resource. Feel free to add the category resource and make some one to many relations with Ember. Don’t forget that earlier we created some relations (one-to-many and belongs-to) with Laravel, but we didn’t use them. Using one-to-many relations in Laravel is easy enough, but I don’t want to overwhelm you. If enough interest in generated in the comments, we’ll add this to our app in a followup post, along with pagination. The Router is the place where all the routes should be defined. Here, we defined two resources with their URLs. The URL is optional here. :photo_id is an argument. Let’s say that we navigate to example.com/photo/2. What would happen? We have a resource that passes our request to the model or controller, and there we grab some data from the Store. If the Store doesn’t find it, it looks on the server. :photo_id can be used to retrieve this data. In this case it looks for example.com/api/v1/photos/2. You see that photo is plural. Ember by itself looks for the plural of the resource. 
App.Router.map(function() { this.resource('photo', {path: "/photo/:photo_id"}); this.resource('user', {path: "/user/:user_id"}); }); A route begins with the first letter of the Resource capitalized and should be in the App namespace. Also, add the word “Route” after the resource’s name. So for the photo resource the route should be like this: App.PhotoRoute It should also extend the Route object. App.PhotoRoute = Ember.Route.extend({}); The Route Object can have different hooks for different things. Two of those hooks are used for defining the Controller name for that resource and defining the Model. Let’s stick with the model. App.PhotoRoute = Ember.Route.extend({ model: function(params){ return this.store.find('photo', params.photo_id); } }); Inside, we have specified the model hook and passed a parameter. Where does this parameter go? The photo resource has a url with a parameter: /photo/:photo_id. photo_id is stored in params and can be used inside the function. Don’t forget that every resource and every route has access to the Store. The Store object saves all the info inside it and uses Local Storage for better performance. That way, it cuts down the number of requests on the server. That’s why developing with Ember speeds up your application – in the end, the users are happier. By using store.find('resource') you can retrieve all the data for this resource from the store object. You can also retrieve only one row. For example, if you want to receive only a photo with a given id, use the store object and find the photo resource with the given id as a second parameter. return this.store.find('photo', params.photo_id); Ember searches for the data in example.com/api/v1/photo_id . By default, Ember works with the data by looking for ids. If you have inserted some relations for this resource, then you can also retrieve the data associated with it. 
That’s all the code for the routes, very similar for each case and straightforward: App.IndexRoute = Ember.Route.extend({ model: function(){ return this.store.find('photo'); } }); App.PhotoRoute = Ember.Route.extend({ model: function(params){ return this.store.find('photo', params.photo_id); } }); App.UserRoute = Ember.Route.extend({ model: function(params){ return this.store.find('user', params.user_id); } }); A quick note: the IndexRoute is a default Route, linked with the root URL. And by root I mean the example.com/ URL. There are other default Routes, like ApplicationRoute that executes as the application starts. The Model Object Inside Ember’s Model Object, you specify the data and its type of resource. A nice feature of Ember is that when the value of a resource is changed and another value depends on the changed value, it automatically gets updated via some observer magic. A model should start with a capitalized letter and should extend the Model Object. App.Photo = DS.Model.extend({}); Inside that Object you should specify all the fields and other values that depend on those core values. You can also add Relations inside the model. The photo model should look something like this: var attr = DS.attr; // This cuts my writting. Inside the model i use attr instead of DS.attr App.Photo = DS.Model.extend({ user_id: attr("number"), // The expected value is a number url: attr("string"), // The expected value is a string title: attr("string"), description: attr("string"), category: attr("number"), fullUrl: function(){ // Another value that depends on core values. return "/files/" + this.get("url"); }.property('url'), backgroundImage: function(){// This depends on another value but not on core ones return 'background: url("' + this.get("fullUrl") + '") no-repeat; '; }.property('fullUrl') }); With attr ( DS.attr) you specify how you want this data to arrive. For example, we want the user_id value to be a number. This way, we are secured from outside data. 
The User Model is similar. Remember, Ember Data will look for it in /api/v1/users. The naming convention is a bit tricky. For example, if you request a resource named user, Ember Data will look for example.com/prefix/users, and if you request a particular resource then it requests example.com/prefix/users/user_id. Knowing how Laravel exposes the data and how Ember wants its data can save you from headaches.

App.User = DS.Model.extend({
  name: attr("string"),
  lastname: attr("string"),
  username: attr("string"),
  fullname: function(){
    return this.get('name') + " " + this.get('lastname');
  }.property("name", "lastname")
});

Views

Before jumping into templates, I suggest using the Ember Inspector to view the state of your application. There you can find the Routes, Views and Controllers. You can also find the relations between the Controllers and Routes. Take some time to look around with the Inspector, it’ll be of great help later on when you develop your own Ember apps.

Do you remember the first template we wrote in the third part? That’s the application template. That template will be rendered when example.com is accessed in the browser. You can’t develop the application further if you don’t make a modification inside that template. Replace the <!-- The content will be here --> comment with: {{outlet}}. Why? All our resources are nested inside the application route. But if I look at my code I see no Index on the Router. Why is that? By default the example.com/ URL is assigned to IndexRoute unless you’ve assigned that URL to another route. Ember puts the application onto the top level by default and everything is nested inside it. If you request a URL inside that application route, then by using {{outlet}} as a placeholder, Ember takes that route’s template and puts it inside that placeholder. Let’s make another template and use it for the IndexRoute. This will be the first page. The first template is the app template.
The index template will be rendered inside the application’s {{outlet}}. data-template-name is the name of the template. All the code inside that script tag will be placed inside the {{outlet}}.

<script type="text/x-handlebars" data-
<ul class="small-block-grid-1 medium-block-grid-2 large-block-grid-3 custom-grid-ul">
  {{#each}}
  <li {{bind-attr
    {{#link-to 'photo' this}}<h5 class="custom-header">{{title}}</h5>{{/link-to}}
    <span>Author: {{user_id}}</span>
  </div>
  </li>
  {{/each}}
</ul>
</script>

{{#each}} is something like a loop. If the model of the template has an array and we want to query for all the data, then we use this special tag. This loop starts with {{#each}} and ends with {{/each}}. Inside this loop, we use all the values that are returned from the loop. Remember that inside the model we returned the resource photo. The model retrieves the data from the Store and returns it to the template. Look at the Photo model. We specified some fields there and those fields are being used inside the template, inside the {{#each}} loop.

Another special tag is the {{#link-to}} tag. This tag generates a link to the photo route and passes a parameter. The this parameter is the id of that object. In this case the photo id. Again, the {{#link-to}} tag ends with {{/link-to}}. {{title}} isn’t a special tag, it merely retrieves the title value for that object. Let’s add the photo template. This template is the template for the Photo Route. Again, I suggest looking at the naming conventions to see how this is mapped and how the naming is done.

<script type="text/x-handlebars" data-
<div style="text-align: center;">
  <h4>{{title}}</h4><br>
  <img {{bind-attr src="fullUrl" alt="title"}}><br>
  <span>Author: {{#link-to 'user' user_id}}{{author.name}}{{/link-to}}</span>
</div>
</script>

By using the {{attribute-here}} tag, the selected attributes will be generated inside this tag. We have used it inside an <img> tag. Using {{title}} inside a tag as an attribute causes problems.
Handlebars and Ember generate some extra objects inside the DOM. To solve this problem, we use {{bind-attr}} instead. When we make a link to the user route, we pass a parameter: the user_id. By clicking the link, the URL will be updated with example.com/user/the_id. But we don’t have a user template yet. Let’s create one.

<script type="text/x-handlebars" data-
<h2>Hello: {{fullname}} </h2>
</script>

This displays only the full name. fullname is a property of our App.User that extends DS.Model. Before wrapping it all up, I made a gif of how it looks:

Wrapping up

As you can see, this is not a completed project yet. A lot of work is still needed; go ahead and experiment with it, learn from it and change it. The full project will be hosted on my Github account and will be updated frequently. Any contribution is welcome, I’d love to work together.

In this series we learned a lot – I learned a lot too. We saw how to work with the cloud, and learned about its good sides and bad sides. We saw how we could develop an application in both environments and how to configure Laravel for different environments. We saw how to build a REST API with Laravel while staying on the same page of an application with Ember. I hope you all had as much fun as I have.

What do you think? Do you want to see more on Heroku, Laravel or Ember? Leave a comment below, it’s always good to hear feedback from the readers!
https://www.sitepoint.com/single-page-app-laravel-emberjs/
How to Extend the Axis2 Framework to Support JVM Based Scripting Languages

This article explains how to extend the Axis2 framework to support Java Virtual Machine (JVM) based scripting languages such as Jython, JRuby, etc. It provides a high level overview of the subject, covering some key concepts of Apache Axis2 and how it can be used to come up with an extension to a JVM based scripting language. After going through this article, a developer will be able to extend the Axis2 framework to support the JVM based scripting language of his or her choice. When the Axis2 framework is extended it is easy to:

- Deploy a script as a web service.
- Write a service client in the chosen scripting language.

Apache Axis2 is an open source Web services engine. It is a complete re-design and re-write of the widely used Apache Axis SOAP stack. Axis2 not only provides the capability to add Web Service interfaces to Web applications, but can also function as a standalone server application. Apache Axis2 supports SOAP as well as the widely popular REST style of Web services. You can expose the same business logic implementation as a WS-* style interface as well as a REST/POX style interface simultaneously using Axis2.

The JVM was initially designed to support only the Java programming language. However, as time has gone on, more and more languages, including many scripting languages, have been ported to the platform. The JVM now supports a wide spectrum of scripting languages such as Jython, JRuby, ColdFusion, etc. For simplicity this article uses Jython as its scripting language; however, the techniques described can equally be applied to other languages. Jython is an implementation of the Python programming language in Java. It is a programming hybrid, exhibiting the strengths of both Java and Python.
Since Jython is written 100% in Java, scripts written using Jython will run on top of any compliant JVM and can use existing Java libraries as if they were Python modules.

Web Service Implementation approaches

Web services are a collection of technologies that can be used to build a Service Oriented Architecture (SOA). Although there seems to be a general confusion about the relationship between SOA and Web services, it is important to know that Web services are an implementation methodology that adopts standard protocols to execute SOA. There are two widely used techniques in Web service development, code first and contract first:

With the code first approach, primary concern is given to the code; you start with the Java code, and the Web service contract (WSDL) is generated from it. In contrast, Contract First emphasises the service contract; you start with the WSDL contract, and use Java or a code generation tool to implement said contract.

The contract first approach has some advantages. It promotes:

- Loose coupling between applications
- Interoperability between multiple services
- The use of Abstraction to hide underlying implementation details
- Collaboration and agreement between all parties

When considering the Code First approach, some of the advantages are that it:

- Is simple and less time consuming
- Can be used to expose legacy systems as Web services
- Does not require an in depth knowledge of WSDL

That said, when designing a service contract, you can always choose to use either the code-first or contract-first techniques. Ultimately the decision relies on whether you care more about ensuring interoperability or improving productivity. This article will therefore show how to extend Apache Axis2 to support both techniques.

Extending Axis2 Framework to support Code First Approach

Axis2 contains a powerful XML based client API. This API can be used to develop Java service clients.
Now our requirement is to write the service client in a scripting language, and we’ve chosen Jython for illustration purposes. To allow Jython to work with the Axis2 Client library we need to develop a wrapper library around Axis2’s Client API. The wrapper library is developed to create a layer of abstraction on top of an existing body of functionality. At this point we are redefining the Axis2 Client API's interface to accept Jython scripts. The above figure shows the architecture of the API. When your Jython client script is executed, a mapping Java service client is created and executed. Then a Web service call is made and the result is passed back to your client script. More information on the Axis2 service client API can be found here.

When a SOAP message is being sent through the Client API, an Out Pipe activates. The Out Pipe will invoke the handlers and terminate with a Transport Sender that sends the SOAP message to the target endpoint. The SOAP message is received by a Transport Receiver, which reads the SOAP message and starts the In Pipe. The In Pipe consists of handlers and ends with the Jython Message Receiver, which consumes the SOAP message and hands it over to the application.

The following code snippet shows a Jython client calling a Web service.
from org.wso2.wsf.jython.client import WSClient
from org.wso2.wsf.jython.client import WSFault
from org.wso2.wsf.jython.client import WSMessage

req_payload_string = "<webSearch><appid>ApacheRestDemo</appid><query>Sri Lanka</query><form/></webSearch>"
LOG_FILE_NAME = "/home/heshan/IdeaProjects/MRclient/src/jython_yahoo.log"
END_POINT = ""

try:
    client = WSClient({"to": END_POINT, "http_method": "GET", "use_soap": "false"}, LOG_FILE_NAME)
    req_message = WSMessage(req_payload_string, {})
    print " Sending OM : ", req_payload_string
    res_message = client.request(req_message)
    print " Response Message: ", res_message
except WSFault, e:
    e.printStackTrace()

Extending the Apache Axis2 to support the Contract First Approach

Axis2 code generator

For code generation, Axis2 includes a code generation module which is called the Axis2 Code Generator. The code generator allows multiple data-binding frameworks to be incorporated and is easily extendable. Therefore the code generation tool can be extended to support a scripting language. Before diving into details on how to extend the tool let's have a look at the Axis2 Code Generator. When you consider a SOAP processing engine, one of the key value additions will be code generation based on WSDL. It:

- Offers User convenience - A code generation tool helps users to use the framework in an easy and efficient way.
- Makes use of the framework to its full potential.

Now let's have a look at the architecture of the Axis2 Code Generator. The tool's architecture is pretty straightforward. The core processes the WSDL and builds an object model. Then the built object model is parsed against the templates to build the source code.

Extending the Axis2 code generator to support scripting languages

The code generation engine calls the extensions one by one, finally calling a component known as the Emitter. The Emitter is the actual component that does the significant bit of work in the code generation process.
Emitters are usually language dependent and hence one language has one emitter associated with it. Therefore there should be an emitter to support Jython code generation. This simple yet powerful architecture is shown in the above illustration. The Emitter processes the WSDL and builds an object model. The object model is simply an XML file which contains the object model for the WSDL with respect to the Axis2 information model (ie. axis service, axis operation, axis message, etc). The template is an XSLT file which contains information on how the code should be generated. Finally the built object model is parsed against the template to build the Jython source code. In order to support the Contract First approach you need to generate a skeleton and a message receiver for your service. The generic message receiver that we have already written will not work because it works only on a limited schema structure. We can use the existing infrastructure in Axis2 to do this. Axis2 creates an intermediate XML structure representing a WSDL and we have to run 2 XSLTs on that to create the skeleton class and the message receiver. With the help of these XSLTs and the code generation tool we can support Contract First Web services in Jython. At the end, the message receiver and the skeleton can be used to write a service client in Jython. Server side This section will discuss how to expose your business logic as a Web service. The solution to the requirement of exposing a Jython Web service in Axis2 lies within the pluggable deployer concept of Axis2. In order to expose services written in Jython, we will be writing a custom deployer together with a Jython message receiver. The message receiver consumes SOAP messages and hands them over to applications. The message receiver is the last handler in the in-pipe. For more information on Message Receivers and Axis2 Architecture, please refer to the documentation. The deployer needs to map Jython data types to XML schema data types. 
This process is known as data binding. Thereafter, with the help of data binding and method annotations, an XML Schema is generated for the Jython service. Next, both the generated XML schema, and the meta-data pertaining to an AxisService, are given to the Axis2 engine. Axis2 engine will create the WSDL out of it and your Jython service will be exposed as a Web service. If you are interested in learning more on deployers, I suggest you take a look at the article on Axis2 deployment - Custom deployers. The figure above shows the architecture of this solution. The incoming SOAP message is received by the Transport Listener and it is passed through the handler chain. Then it is given to the Jython Message Receiver, which traverses through the Axis Object Model (AXIOM) structure and retrieves the relevant information. This retrieved information is passed in to the Jython service. Then the Jython service gets executed and the resulting object is passed back in to the Jython Message Receiver. In the Jython Message Receiver an AXIOM structure is created out of the returned Jython object. Then the response is sent through the handler chain to the Transport Sender. The Transport Sender sends the response to the client. The process explained above takes place for each and every SOAP message that is exchanged. How a Jython service is deployed At the deployment time the annotations of the Jython script are read. Then the mapping of the dynamic Jython types to static Java types is done. This process is called data binding. After corresponding matching types are mapped, an XML schema is created out of the service. The following steps describe how the XML schema is generated out of the Jython service: - Annotations of the Jython service are read. - An AxisService is created for the Jython service. - An AxisOperation is created for every Jython method. - An AxisMessage is added to the operation. It contains the types of the method parameters. 
- Each and every AxisOperation is added to the AxisService.
- Finally the XML Schema for the Jython message is generated.

The generated AxisService is given to the Axis2 engine. Finally the Axis2 engine generates the WSDL out of it.

Conclusion

Apache Axis2 can be extended in such a way that it will support JVM based scripting languages. After the extensions are made, a user can expose services and write service clients using the JVM scripting language extension.

About the author

Heshan Suriyaarachchi is a Software Engineer at WSO2 Inc and a member of the WSO2 Enterprise Service Bus (ESB) team. Heshan has experience in Web services, SOA, middleware and Distributed Systems. He enjoys playing Basketball and contributing towards open source projects in his spare time.

Jython in another project: RESTx (comment by Juergen Brendel)

RESTx is still a young project, but it already has a lot of functionality. Most importantly, it is really simple to get started and use it: Convention over configuration and it all works out of the box. Jython is a great tool and allows for the seamless creation of multi-language objects, which is actually really nice. We like it, because it gives RESTx component developers a choice to use the language that is right for the job.
https://www.infoq.com/articles/axis2_scripting
> -------- Original Message --------
> Subject: Re: [Webware-discuss] MakeAppWorkDir
> From: "Hannes Lilljequist" <hannes@...>
> Date: Thu, September 09, 2004 1:44 am
> To: "jose" <jose@...>
> Cc: "<webware-discuss@...>" <webware-discuss@...>, "'Eric Radman'" <theman@...>
>
> I have two questions related to this. (Hope you don't mind if I borrow
> your thread...)
>
> 1.
> Say you're serving a number of sites on one server and you need to add
> and remove sites as you go along. You've decided that you want one
> server instance for each webapp... Is there a way of centralising the
> starting and stopping of the appservers? Or do you have to have one
> system startup script (as in init.d) for each server instance?
>
> It seems like with this approach, there's a lot of stuff you need to
> keep track of for each new site: Apache virtual host, Appserver port
> number, Webkit workdir, startup script (rc/init.d)... Well, maybe
> that's just me being lazy - but the multiple-startup-script part seems
> awkward to me.

I agree totally, it is messy and can be difficult to keep track of. So far I have set up only two appservers, one is my main one and the second one is a dev one, but I could see how this can get out of hand quickly. Currently at work we are operating under the model where we have a single appserver with multiple contexts in it; I am in the process of making each context (or related group of contexts) its own appserver. At work I think I am going to end up with about 4 or 5 app servers running, so there is going to be a lot to keep track of.

> 2.
> If you, on the other hand, would place multiple sites under one server
> instance, the lib directory could easily become crowded, and there may
> also be namespace conflicts... Is there a way to have one lib directory
> for each context (other than making subdirectories of lib of course)?
> Should i NOT use lib under these circumstances?
>
> (For example, I wanted to use the wiki app from wiki.w4py.org. It's
> designed to be run as a separate webapp, but I thought maybe I could
> install it as a context side-by-side with my main site context. I
> looked in the lib directory of the wiki and thought: not a good idea.)

On a side note, have you gotten the wiki running? I tried to get it running last night and never got it to run. I am trying to get it running on a windoze box, but I don't think that should cause a problem. Anyway I would really love to hear your experience getting the wiki running

Jose

> Sorry for the long post. Quite obviously, I'm also a fresh new user of
> Webware and Python. Hope someone out there can give me a few hints!
>
> Regards,
> Hannes Lilljequist
>
>
> On 2004-09-09 at 07:05, jose wrote:
>
> > Dear Eric,
> >
> > Thanks for the confirmation, that's what I thought was going on, I just
> > wanted to be sure
> >
> > Jose
> >
> > -----Original Message-----
> > From: webware-discuss-admin@...
> > [mailto:webware-discuss-admin@...] On Behalf Of Eric
> > Radman
> > Sent: Wednesday, September 08, 2004 6:44 PM
> > To: jose@...
> > Cc: webware-discuss@...
> > Subject: Re: [Webware-discuss] MakeAppWorkDir
> >
> >
> > On 16:38 Wed 08 Sep , jose@... wrote:
> >>).
> >
> > This is also better from a security perspective.
> >
> >> My question is, in order for all the application servers to run on the
> >> same machine, I need to have each one listening on a different port
> >> correct?
> >
> > Correct.
> >
> >> And if so how does this work in a multiuser Linux environment where
> >> another user might be running webware on the same port that I just set
> >> mine to?
> >
> > This works because once a user's instance of WebKit (or any other app)
> > listening on a specific port it's locked, so other users only get an
> > error when they try to step on it. If another user knows that you
> > stopped your instance of WebKit then they can steal your port.
> >
> > --
> > Eric Radman |
http://sourceforge.net/p/webware/mailman/webware-discuss/thread/20040909183719.12833.qmail@webmail07.mesa1.secureserver.net/
When you’re working on OpenStack, you’ll probably hear a lot of references to ‘async I/O’ and how eventlet is the library we use for this in OpenStack. But, well … what exactly is this mysterious ‘asynchronous I/O’ thing?

The first thing to think about is what happens when a process calls a system call like write(). If there’s room in the write buffer, then the data gets copied into kernel space and the system call returns immediately. But if there isn’t room in the write buffer, what happens then? The default behaviour is that the kernel will put the process to sleep until there is room available. In the case of sockets and pipes, space in the buffer usually becomes available when the other side reads the data you’ve sent.

The trouble with this is that we usually would prefer the process to be doing something useful while waiting for space to become available, rather than just sleeping. Maybe this is an API server and there are new connections waiting to be accepted. How can we process those new connections rather than sleeping?

One answer is to use multiple threads or processes – maybe it doesn’t matter if a single thread or process is blocked on some I/O if you have lots of other threads or processes doing work in parallel. But, actually, the most common answer is to use non-blocking I/O operations. The idea is that rather than having the kernel put the process to sleep when no space is available in the write buffer, the kernel should just return a “try again later” error. We then use the select() system call to find out when space has become available and the file is writable again.

Below are a number of examples of how to implement a non-blocking write. For each example, you can run a simple socket server on a remote machine to test against:

$> ssh -L 1234:localhost:1234 some.remote.host 'ncat -l 1234 | dd of=/dev/null'

The way this works is that the client connects to port 1234 on the local machine, the connection is forwarded over SSH to port 1234 on some.remote.host where ncat reads the input and writes the output over a pipe to dd which, in turn, writes the output to /dev/null. I use dd to give us some information about how much data was received when the connection closes. Using a distant some.remote.host will help illustrate the blocking behaviour because data clearly can’t be transferred as quickly as the client can copy it into the kernel.
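Before the full examples, the “try again later” error itself is easy to provoke without any remote server at all. This little demo is my own addition, not one of the original examples: a socketpair gives us two connected sockets in one process, and filling one side’s kernel buffer (nobody is reading from the other end) triggers the error immediately on a non-blocking socket.

```python
import errno
import socket

# Two connected sockets in the same process; nobody reads from
# 'right', so left's kernel write buffer eventually fills up.
left, right = socket.socketpair()
left.setblocking(0)

sent = 0
try:
    while True:
        sent += left.send(b'x' * 4096)
except socket.error as e:
    # A blocking socket would sleep here; a non-blocking one
    # fails immediately with the "try again later" error.
    assert e.errno in (errno.EAGAIN, errno.EWOULDBLOCK)
    print("EAGAIN after sending %d bytes" % sent)
```

The number of bytes accepted before EAGAIN is simply the size of the kernel's socket buffer, which varies by system.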
For each example, you can run a simple socket server on a remote machine to test against: $> ssh -L 1234:localhost:1234 some.remote.host 'ncat -l 1234 | dd of=/dev/null' The way this works is that the client connects to port 1234 on the local machine, the connection is forwarded over SSH to port 1234 on some.remote.host where ncat reads the input, writes the output over a pipe to dd which, in turn, writes the output to /dev/null. I use dd to give us some information about how much data was received when the connection closes. Using a distant some.remote.host will help illustrate the blocking behaviour because data clearly can’t be transferred as quickly as the client can copy it into the kernel. Blocking I/O To start with, let’s look at the example of using straightforward blocking I/O: import socket sock = socket.socket() sock.connect(('localhost', 1234)) sock.send('foo\n' * 10 * 1024 * 1024) This is really nice and straightforward, but the point is that this process will spend a tonne of time sleeping while the send() method completes transferring all of the data. Non-Blocking I/O In order to avoid this blocking behaviour, we can set the socket to non-blocking and use select() to find out when the socket is writable: import errno import select import socket sock = socket.socket() sock.connect(('localhost', 1234)) sock.setblocking(0) buf = buffer('foo\n' * 10 * 1024 * 1024) print "starting" while len(buf): try: buf = buf[sock.send(buf):] except socket.error, e: if e.errno != errno.EAGAIN: raise e print "blocking with", len(buf), "remaining" select.select([], [sock], []) print "unblocked" print "finished" As you can see, when send() returns an EAGAIN error, we call select() and will sleep until the socket is writable. This is a basic example of an event loop. It’s obviously a loop, but the “event” part refers to our waiting on the “socket is writable” event. 
This example doesn’t look terribly useful because we’re still spending the same amount of time sleeping, but we could in fact be doing useful work rather than sleeping in select(). For example, if we had a listening socket, we could also pass it to select() and select() would tell us when a new connection is available. That way we could easily alternate between handling new connections and writing data to our socket. To prove this “do something useful while we’re waiting” idea, how about we add a little busy loop to the I/O loop:

        if e.errno != errno.EAGAIN:
            raise e
        i = 0
        while i < 5000000:
            i += 1
        print "blocking with", len(buf), "remaining"
        select.select([], [sock], [], 0)
        print "unblocked"

The difference is we’ve passed a timeout of zero to select() – this means select() never actually blocks – and any time send() would have blocked, we do a bunch of computation in user-space. If we run this using the ‘time’ command you’ll see something like:

$> time python ./test-nonblocking-write.py
starting
blocking with 8028160 remaining
unblocked
blocking with 5259264 remaining
unblocked
blocking with 4456448 remaining
unblocked
blocking with 3915776 remaining
unblocked
blocking with 3768320 remaining
unblocked
blocking with 3768320 remaining
unblocked
blocking with 3670016 remaining
unblocked
blocking with 3670016 remaining
...

real 0m10.901s
user 0m10.465s
sys 0m0.016s

The fact that there’s very little difference between the ‘real’ and ‘user’ times means we spent very little time sleeping. We can also see that sometimes we get to run the busy loop multiple times while waiting for the socket to become writable.
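To make the “listening socket” point above concrete, here is a self-contained sketch of my own (not part of the original series of examples): a single select() call watches a listening socket for readability and the accepted connection for writability, so one loop alternates between accepting connections and writing data. The “remote” client is simulated in-process so the snippet runs on its own.

```python
import select
import socket

listener = socket.socket()
listener.bind(('127.0.0.1', 0))   # pick any free port
listener.listen(5)
listener.setblocking(0)

# Simulate a remote client connecting to us.
client = socket.socket()
client.connect(listener.getsockname())

conn = None
accepted = 0
pending = b'foo\n' * 256          # data we still owe the peer

while pending:
    writers = [conn] if conn else []
    readable, writable, _ = select.select([listener], writers, [])
    if listener in readable:
        conn, addr = listener.accept()   # a new connection is ready
        conn.setblocking(0)
        accepted += 1
    for sock in writable:
        pending = pending[sock.send(pending):]   # write what we can

print("accepted %d connection(s) and wrote everything" % accepted)
```

The same loop shape scales to many sockets at once: everything waiting to be read goes in the first list, everything with queued output goes in the second, and select() wakes us for whichever event fires first.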
What has happened here is that by creating the socket using eventlet.green.socket.socket() we have put the socket into non-blocking mode and when the write to the socket blocks, eventlet will schedule any other work that might be pending. Hitting Ctrl-C while this is running is actually pretty instructive: $> python test-eventlet-write.py ^CTraceback (most recent call last): File "test-eventlet-write.py", line 6, in sock.send('foo\n' * 10 * 1024 * 1024) File ".../eventlet/greenio.py", line 289, in send timeout_exc=socket.timeout("timed out")) File ".../eventlet/hubs/__init__.py", line 121, in trampoline return hub.switch() File ".../eventlet/hubs/hub.py", line 187, in switch return self.greenlet.switch() File ".../eventlet/hubs/hub.py", line 236, in run self.wait(sleep_time) File ".../eventlet/hubs/poll.py", line 84, in wait presult = self.do_poll(seconds) File ".../eventlet/hubs/epolls.py", line 61, in do_poll return self.poll.poll(seconds) KeyboardInterrupt Yes, indeed, there’s a whole lot going on behind that innocuous looking send() call. You see mention of a ‘hub’ which is eventlet’s name for an event loop. You also see this trampoline() call which means “put the current code to sleep until the socket is writable”. And, there at the very end, we’re still sleeping in a call to poll() which is basically the same thing as select(). To show the example of doing some “useful” work rather than sleeping all the time we run a busy loop greenthread: import eventlet from eventlet.green import socket def busy_loop(): while True: i = 0 while i < 5000000: i += 1 print "yielding" eventlet.sleep() eventlet.spawn(busy_loop) sock = socket.socket() sock.connect(('localhost', 1234)) sock.send('foo\n' * 10 * 1024 * 1024) Now every time the socket isn’t writable, we switch to the busy_loop() greenthread and do some work. Greenthreads must cooperatively yield to one another so we call eventlet.sleep() in busy_loop() to once again poll the socket to see if its writable. 
Again, if we use the ‘time’ command to run this:

$> time python ./test-eventlet-write.py
yielding
yielding
yielding
...

real 0m5.386s
user 0m5.081s
sys 0m0.088s

you can see we’re spending very little time sleeping.

(As an aside, I was going to take a look at gevent, but it doesn’t seem fundamentally different from eventlet. Am I wrong?)

Twisted

Long, long ago, in times of old, Nova switched from twisted to eventlet so it makes sense to take a quick look at twisted:

from twisted.internet import protocol
from twisted.internet import reactor

class Test(protocol.Protocol):
    def connectionMade(self):
        self.transport.write('foo\n' * 2 * 1024 * 1024)

class TestClientFactory(protocol.ClientFactory):
    def buildProtocol(self, addr):
        return Test()

reactor.connectTCP('localhost', 1234, TestClientFactory())
reactor.run()

What complicates the example most is twisted’s protocol abstraction, which we need to use simply to write to the socket. The ‘reactor’ abstraction is simply twisted’s name for an event loop. So, we create a non-blocking socket, block in the event loop (using e.g. select()) until the connection completes and then write to the socket. The transport.write() call will actually queue a writer in the reactor, return immediately and, whenever the socket is writable, the writer will continue its work. To show how you can run something in parallel, here’s how to run some code in a deferred callback:

def busy_loop():
    i = 0
    while i < 5000000:
        i += 1
    reactor.callLater(0, busy_loop)

reactor.connectTCP(...)
reactor.callLater(0, busy_loop)
reactor.run()

I’m using a timeout of zero here and it shows up a weakness in both twisted and eventlet – we want this busy_loop() code to only run when the socket isn’t writable. In other words, we want the task to have a lower priority than the writer task. In both twisted and eventlet, the timed tasks are run before the I/O tasks and there is no way to add a task which is only run if there are no runnable I/O tasks.
GLib My introduction to async I/O was back when I was working on GNOME (beginning with GNOME’s CORBA ORB, called ORBit) so I can’t help comparing the above abstractions to GLib’s main loop. Here’s some equivalent code: /* build with gcc -g -O0 -Wall $(pkg-config --libs --cflags glib-2.0) test-glib-write.c -o test-glib-write */ #include <errno.h> #include <fcntl.h> #include <stdio.h> #include <string.h> #include <unistd.h> #include <sys/types.h> #include <sys/socket.h> #include <netinet/in.h> #include <glib.h> GMainLoop *main_loop = NULL; static gchar *strv[10 * 1024 * 1024]; static gchar *data = NULL; int remaining = -1; static gboolean socket_writable(GIOChannel *source, GIOCondition condition, gpointer user_data) { int fd, sent; fd = g_io_channel_unix_get_fd(source); do { sent = write(fd, data, remaining); if (sent == -1) { if (errno != EAGAIN) { fprintf(stderr, "Write error: %s\n", strerror(errno)); goto finished; } return TRUE; } data = &data[sent]; remaining -= sent; } while (sent > 0 && remaining > 0); if (remaining <= 0) goto finished; return TRUE; finished: g_main_loop_quit(main_loop); return FALSE; } static gboolean busy_loop(gpointer data) { int i = 0; while (i < 5000000) i += 1; return TRUE; } int main(int argc, char **argv) { GIOChannel *io_channel; guint io_watch; int fd; struct sockaddr_in addr; int i; gchar *to_free; for (i = 0; i < G_N_ELEMENTS(strv)-1; i++) strv[i] = "foo\n"; strv[G_N_ELEMENTS(strv)-1] = NULL; data = to_free = g_strjoinv(NULL, strv); remaining = strlen(data); fd = socket(AF_INET, SOCK_STREAM, 0); memset(&addr, 0, sizeof(struct sockaddr_in)); addr.sin_family = AF_INET; addr.sin_port = htons(1234); addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK); if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == -1) { fprintf(stderr, "Error connecting to server: %s\n", strerror(errno)); return 1; } fcntl(fd, F_SETFL, O_NONBLOCK); io_channel = g_io_channel_unix_new(fd); io_watch = g_io_add_watch(io_channel, G_IO_OUT, (GIOFunc)socket_writable, 
                           GINT_TO_POINTER(fd));

    g_idle_add(busy_loop, NULL);

    main_loop = g_main_loop_new(NULL, FALSE);
    g_main_loop_run(main_loop);
    g_main_loop_unref(main_loop);

    g_source_remove(io_watch);
    g_io_channel_unref(io_channel);
    close(fd);
    g_free(to_free);

    return 0;
}

Here I create a non-blocking socket, set up an ‘I/O watch’ to tell me when the socket is writable and, when it is, I keep blasting data into the socket until I get an EAGAIN. This is the point at which write() would block if it was a blocking socket and I return TRUE from the callback to say “call me again when the socket is writable”. Only when I’ve finished writing all of the data do I return FALSE and quit the main loop causing the g_main_loop_run() call to return.

The point about task priorities is illustrated nicely here. GLib does have the concept of priorities and has an “idle callback” facility you can use to run some code when no higher priority task is waiting to run. In this case, the busy_loop() function will *only* run when the socket is not writable.

Tulip

There’s a lot of talk lately about Guido’s Asynchronous IO Support Rebooted (PEP3156) efforts so, of course, we’ve got to have a look at that. One interesting aspect of this effort is that it aims to support both the coroutine and callback-style programming models. We’ll try out both models below.

Tulip, of course, has an event loop, time-based callbacks, I/O callbacks and I/O helper functions.
We can build a simple variant of our non-blocking I/O example above using tulip’s event loop and I/O callback:

import errno
import select
import socket

import tulip

sock = socket.socket()
sock.connect(('localhost', 1234))
sock.setblocking(0)

buf = memoryview(str.encode('foo\n' * 2 * 1024 * 1024))

def do_write():
    global buf
    while len(buf):
        try:
            buf = buf[sock.send(buf):]
        except socket.error as e:
            if e.errno != errno.EAGAIN:
                raise e
            return

def busy_loop():
    i = 0
    while i < 5000000:
        i += 1
    event_loop.call_soon(busy_loop)

event_loop = tulip.get_event_loop()
event_loop.add_writer(sock, do_write)
event_loop.call_soon(busy_loop)
event_loop.run_forever()

We can go a step further and use tulip’s Protocol abstraction and connection helper:

import errno
import select
import socket

import tulip

class Protocol(tulip.Protocol):
    buf = b'foo\n' * 10 * 1024 * 1024

    def connection_made(self, transport):
        event_loop.call_soon(busy_loop)
        transport.write(self.buf)
        transport.close()

    def connection_lost(self, exc):
        event_loop.stop()

def busy_loop():
    i = 0
    while i < 5000000:
        i += 1
    event_loop.call_soon(busy_loop)

event_loop = tulip.get_event_loop()
tulip.Task(event_loop.create_connection(Protocol, 'localhost', 1234))
event_loop.run_forever()

This is pretty similar to the twisted example and shows up yet another case where the lack of task prioritization is an issue. If we added the busy loop to the event loop before the connection completed, the scheduler would run the busy loop every time the connection task yields.

Coroutines, Generators and Subgenerators

Under the hood, tulip depends heavily on generators to implement coroutines. It’s worth digging into that concept a bit to understand what’s going on. Firstly, remind yourself how a generator works:

def gen():
    i = 0
    while i < 2:
        print(i)
        yield
        i += 1

i = gen()
print("yo!")
next(i)
print("hello!")
next(i)
print("bye!")
try:
    next(i)
except StopIteration:
    print("stopped")

This will print:

yo!
0
hello!
1
bye!
stopped

Now imagine a generator function which writes to a non-blocking socket and calls yield every time the write would block. You have the beginnings of coroutine-based async I/O. To flesh out the idea, here’s our familiar example with some generator based infrastructure around it:

import collections
import errno
import select
import socket

sock = socket.socket()
sock.connect(('localhost', 1234))
sock.setblocking(0)

def busy_loop():
    while True:
        i = 0
        while i < 5000000:
            i += 1
        yield

def write():
    buf = memoryview(b'foo\n' * 2 * 1024 * 1024)
    while len(buf):
        try:
            buf = buf[sock.send(buf):]
        except socket.error as e:
            if e.errno != errno.EAGAIN:
                raise e
        yield
    quit()

Task = collections.namedtuple('Task', ['generator', 'wfd', 'idle'])

tasks = [
    Task(busy_loop(), wfd=None, idle=True),
    Task(write(), wfd=sock, idle=False)
]

running = True

def quit():
    global running
    running = False

while running:
    finished = []
    for n, t in enumerate(tasks):
        try:
            next(t.generator)
        except StopIteration:
            finished.append(n)
    for n in reversed(finished):
        tasks.pop(n)

    wfds = [t.wfd for t in tasks if t.wfd]
    timeout = 0 if [t for t in tasks if t.idle] else None

    select.select([], wfds, [], timeout)

You can see how the generator-based write() and busy_loop() coroutines are cooperatively yielding to one another just like greenthreads in eventlet would do.

But, there’s a pretty fundamental flaw here – if we wanted to refactor the code above to re-use that write() method to e.g. call it multiple times with different input, we’d need to do something like:

def write_stuff():
    for i in write(b'foo' * 10 * 1024 * 1024):
        yield
    for i in write(b'bar' * 10 * 1024 * 1024):
        yield

but that’s pretty darn nasty! Well, that’s the whole idea behind Syntax for Delegating to a Subgenerator (PEP380). Since Python 3.3, a generator can now yield to another generator using the ‘yield from’ syntax. This allows us to do:

...
def write(data):
    buf = memoryview(data)
    while len(buf):
        try:
            buf = buf[sock.send(buf):]
        except socket.error as e:
            if e.errno != errno.EAGAIN:
                raise e
        yield

def write_stuff():
    yield from write(b'foo\n' * 2 * 1024 * 1024)
    yield from write(b'bar\n' * 2 * 1024 * 1024)
    quit()

Task = collections.namedtuple('Task', ['generator', 'wfd', 'idle'])

tasks = [
    Task(busy_loop(), wfd=None, idle=True),
    Task(write_stuff(), wfd=sock, idle=False)
]

...

Conclusions?

Yeah, this is the point where I’ve figured out what we should do in OpenStack. Or not.

I really like the explicit nature of Tulip’s model – for each async task, you explicitly decide whether to block the current coroutine on its completion (or, put another way, yield to another coroutine until the task has completed) or you register a callback to be notified of the task’s completion. I’d much prefer this to the rather cavalier “don’t worry your little head” approach of hiding the async nature of what’s going on.

However, the prospect of porting something like Nova to this model is more than a little daunting. If you think about the call stack of a REST API request being handled and ultimately doing an rpc.cast(), and that the entire call stack would need to be ported to ‘yield from’ in order for us to yield and handle another API request while waiting for the result of rpc.cast()… as I said, daunting.

What I’m most interested in is how to design our new messaging API to be able to support any and all of these models in future. I haven’t quite figured that out either, but it feels pretty doable.

Your GLib example is unnecessarily ugly; since you worked on GLib, it now has much higher level APIs for both asynchronous operations and sockets.
Here’s some GJS (JavaScript binding) code I had for connecting to a Unix domain socket:

// Set up a unix domain socket
let address = Gio.UnixSocketAddress.new_with_type("/path/to/socket",
                                                  Gio.UnixSocketAddressType.PATH);
let socketClient = new Gio.SocketClient();
let conn = socketClient.connect(address, null);
let output = conn.get_output_stream();

// Start an async write
output.write_bytes_async(new GLib.Bytes("some UTF-8 data"), GLib.PRIORITY_DEFAULT, null,
                         function(result, error) {
                             // Called when write completes
                         });

This is also accessible via pygobject of course.

Thanks Colin, sounds interesting … it’s been a while for me

in tulip you can use event loop’s sock_sendall() instead of:

yield from write(b'foo\n' * 2 * 1024 * 1024)

you can use:

yield from tulip.get_event_loop().sock_sendall(sock, b'foo\n' * 2 * 1024 * 1024)

Yep, thanks. That example is actually trying to show some of the mechanics of coroutines so it purposefully doesn’t use tulip itself

Small nit but the generator example prints out:

yo!
0
hello!
1
bye!
stopped

Very nice article otherwise!

Ah, thanks – last minute change to the example code and I forgot to update the output

You might be interested to know that we have moved to using the co-routine model in Heat (including a hack to allow us to have something like “yield from” in Python 2) to orchestrate the creation of resources in parallel. We haven’t attempted to use it for non-blocking I/O, however. We’re still using eventlet to multiplex between different requests for now.

Personally I would like to eventually move to just using multiprocessing for this; the time it takes to fork is negligible in the context of spinning up an entire stack.

Interesting stuff Zane – got a pointer to what part of Heat you mean? At a glance, I’m not totally sure I see it. But yeah, on your other point – it’s definitely not a one-size-fits-all thing.
The thing I’d be worried about with multiprocessing is duplicating DB and message broker connections – won’t you need to re-open those for each request?

The main code for driving it is here:

It’s pretty similar to the Task stuff in Tulip. Once the code is a bit more mature (i.e. we adapt it for more complex use cases and fix some bugs) I can imagine it ending up in Oslo if there are other potential uses for it.

It’s true that using multiprocessing would mean reopening the DB connection (I don’t think we actually need the message broker again during these operations), but stack create/update/delete operations are so long running and require so many other connections that I’m not sure it would matter much. I haven’t looked into it though, and if it does turn out to be feasible it’s probably not widely applicable.

Interesting post, thanks. I wonder if you have a concrete example where task prioritization can be useful though. In particular, I wonder if Python would be capable of handling something like that properly anyway, considering that when your lowest priority task runs, none of the higher priority tasks are running.

Thomas – an example might be a task to clear out a cache. While handling an API request, you might decide the cache has grown too large and needs clearing out, so you schedule an idle task to do that and you only want it to run when no API requests are being processed. The idle task might yield to the scheduler after a short amount of time if it didn’t finish the task.

Do these support having multiple threads? If so, throw your long-running/non-blocking processes into another thread.

My experience with async I/O is using boost::asio. When combined with boost::shared_ptr you can get some really nice looking code where actors go away automatically when all references to them are gone. With boost::exception, exceptions can be sent to callback functions that run in another thread.
It has a concept of strands for multi-threading support that’s an interesting way of doing mutual-exclusion without blocking mutexes.

[…] Async I/O and Python […]

For anyone interested, here’s some slides for a talk based on this blog:

Thanks, an interesting and informative read and a big, worthy effort to put all this together!

Thanks Jesse – good luck with your talk
http://blogs.gnome.org/markmc/2013/06/04/async-io-and-python/
Sizeof-Ex12: Size & Arrays in C

The main objective of this exercise is to learn about sizeof and how it relates to arrays. C strings are just arrays of bytes. sizeof is a unary operator, and C developers are obsessed with the size of everything. In C, memory size is key and you have to be keen on learning how much space is being taken up by stuff. See the following code snippet:

#include <stdio.h>

int main(int argc, char *argv[])
{
    int areas[] = {10, 12, 13, 14, 20};
    char name[] = "Zed";
    char full_name[] = {'Z','e','d',' ','A','.','S','h','a','w','\0'};

    //Warning: On some systems you may have to change the %ld to %u

In the above code I define the arrays which I am going to use. I mainly work with the int and char data types. Note the '\0' byte at the end of the char full_name array. It's quite important. Next I start checking sizes using sizeof.

    printf("The size of an int: %ld bytes\n", sizeof(int));
    printf("The size of areas (int[]): %ld bytes\n", sizeof(areas));
    printf("The number of int in areas: %ld \n", sizeof(areas)/sizeof(int));
    printf("The first area is %d and the last area is %d \n", areas[0], areas[4]);

    return 0;

After making the above, you will get a warning about uninitialised/unused variables, which is okay; just proceed to run it. This is the result.

The size of an int: 4 bytes
The size of areas (int[]): 20 bytes
The number of int in areas: 5 
The first area is 10 and the last area is 20

The above result is straight to the point. Note that for the last statement, I started reading from 0 to 4 since in C arrays start from 0, NOT 1. Proceed…

In this next part I investigate the size of characters…

    printf("The size of a char: %ld bytes\n", sizeof(char));
    printf("The size of name(char[]): %ld bytes\n", sizeof(name));
    printf("The number of char: %ld \n", sizeof(name)/sizeof(char));

The above code is pretty straightforward and this is what I got.
The size of a char: 1 bytes
The size of name(char[]): 4 bytes
The number of char: 4 

The last part of the code is to print out the size and content of name and full_name.

    printf("The size of full name(char[]):%ld\n", sizeof(full_name));
    printf("The number of chars: %ld\n", sizeof(full_name)/sizeof(char));
    printf("name=\"%s\" and full name=\"%s\"\n", name, full_name);

And the result shall be…

The size of full name(char[]):11
The number of chars: 11
name="Zed" and full name="Zed A.Shaw"
https://wilfred.githuka.com/post/sizeof/
Pandas DataFrames are my favorite way to manipulate data in Python. In fact, the end product of many of my small analytics projects is just a data frame containing my results.

I used to dump my dataframes to CSV files and save them to Github. But recently, I've been using Beneath, a data sharing service I'm building, to save my dataframes and simultaneously turn them into a full-blown API with a website. It's great when I need to hand off a dataset to clients or integrate the data into a frontend. In this post, I'll show you how that works!

I'm going to fetch GitHub commits, analyze them, and use Beneath to turn the result into an API.

Setup Beneath

To get started, you need to install the Beneath pip module and login with a free Beneath account. It's pretty easy and the docs already cover it. Just follow these steps. Make sure to remember your username as you'll need it in a minute!

Let's analyze some data

I think Github activity is a fascinating, underexplored data source. Let's scratch the surface and look at commits to... Pandas!

Here's a quick script to fetch the pandas source code and aggregate some daily stats on the number of commits and contributors:

import io
import pandas as pd
import subprocess

# Get all Pandas commit timestamps
repo = "pandas-dev/pandas"
cmd = f"""
if [ -d "repo" ]; then rm -Rf "repo"; fi;
git clone https://github.com/{repo}.git repo;
cd repo;
echo "timestamp,contributor";
git log --pretty=format:"%ad,%ae" --date=iso
"""
res = subprocess.run(cmd, capture_output=True, shell=True).stdout.decode()

# Group by day and count number of commits and contributors
df = (
    pd.read_csv(
        io.StringIO(res),
        parse_dates=["timestamp"],
        date_parser=lambda col: pd.to_datetime(col, utc=True),
    )
    .resample(rule="d", on="timestamp")["contributor"]
    .agg(commits="count", contributors="nunique")
    .rename_axis("day")
    .reset_index()
)

Now, the df variable contains our insights. If you're following along, you can change the repo variable to scrape another Github project.
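To see what that resample/agg step produces, here's the same pipeline run on a couple of made-up commit rows (the emails are placeholders, not real contributors):

```python
import pandas as pd

# Made-up commit log standing in for real `git log` output
raw = pd.DataFrame({
    "timestamp": pd.to_datetime([
        "2021-05-01 09:00", "2021-05-01 17:30",   # two commits by one author
        "2021-05-02 11:15",                       # one commit by another
    ], utc=True),
    "contributor": ["alice@example.com", "alice@example.com", "bob@example.com"],
})

# Same resample/agg pipeline as the script above
daily = (
    raw.resample(rule="d", on="timestamp")["contributor"]
       .agg(commits="count", contributors="nunique")
       .rename_axis("day")
       .reset_index()
)
print(daily)
```

The first day collapses to commits=2, contributors=1; the second to commits=1, contributors=1, which is exactly the shape of table we're about to publish.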
Just beware that some major repos can take a long time to analyze (I'm looking at you, torvalds/linux).

Save the DataFrame to Beneath

First, we'll create a new project to store our results. I'll do that from the command-line, but you can also use the web console:

beneath project create USERNAME/github-fun

Just replace USERNAME with your own username. Now, we're ready to publish the dataframe. We do it with a simple one-liner directly in Python (well, I split it over multiple lines, but it's still just one call):

import beneath

await beneath.write_full(
    table_path="USERNAME/github-fun/pandas-commits",
    records=df,
    key=["day"],
    description="Daily commits to github.com/pandas-dev/pandas",
)

There are a few things going on here. Let's go through them:

- The table_path gives the full path for the output table, including our username and project.
- We use the records parameter to pass our DataFrame.
- We provide a key for the data. The auto-generated API uses the key to index the data so we can quickly filter records. By default, Beneath will use our DataFrame's index as the key, but I prefer setting it manually.
- The description parameter adds some documentation to the dataset that will be shown at the top of the table's page.

And that's it! Now let's explore the results.

Explore your data

You can now head over to the web console and browse the data and its API docs. Mine's at (if you used the same project and table names, you can just replace my username epg for your own).

You can also share or publish the data. Permissions are managed at the project layer, so just head over to the project page and add members or flip the project settings to public.
For example, we can load the dataframe back into Python: import beneath df = await beneath.load_full("USERNAME/github-fun/pandas-commits") Or we can query the REST API and get the commit info every day in May 2021: curl \ -d type=index \ -d filter='{"day":{"_gte":"2021-05-01","_lt":"2021-06-01"}}' \ -G Or use the React hook to read data directly into the frontend: import { useRecords } from "beneath-react"; const App = () => { const { records, loading, error } = useRecords({ table: "USERNAME/github-fun/pandas-commits", query: { type: "index", filter: '{"day":{"_gte":"2021-05-01","_lt":"2021-06-01"}}' } }) ... } Check out the API tab of my dataframe in the Beneath console to see all the ways to use the data. That's it That's it! We used Beneath to turn a Pandas DataFrame into an API. If you have any questions, I'm online most of the time in Beneath's Discord (I love to chat about data science, so you're also welcome to just say hi 👋). And let me know if you publish a cool dataset that I can spotlight in the featured projects! Discussion (7) Hi Eric...(First, Sorry for my English.) I follow the example that I find on the git repo(github.com/beneath-hq/beneath/blob...), when I run it , i see this error : SyntaxError: 'await' outside function Can you tell me if i do something wrong? Thanks in advance. Hey Emanuel, Sorry about that! We'll add some info to the docs today to make this more clear. I know you figured it out, but I'll leave this comment here for others to find. The keyword "await" declares an asyncio coroutine (described in depth here: realpython.com/async-io-python/), and there are a few options for running these: In a standard Python script, as you mentioned, you have to wrap each coroutine in an async function, and start the execution with asyncio.run(). This is a good resource: docs.python.org/3/library/asyncio-... In a Jupyter notebook, which I was using for the blog post and the quick start, it's actually much easier. 
An asyncio context is automatically included, and you can directly run any await ...code block. You can spawn an asyncio-enabled Python REPL by running python -m asyncioin a command line. In this case too, you can directly run any await ...code block. Let me know if anything else comes up and I'd be happy to help! I guess, you need to update the doc...all beneath function must be called with asyncio.run( for example, to read a stream and load in pandas dataframe the code is the next: df = asyncio.run(beneath.load_full("USERNAME/financial-reference-data/s-and-p-500-constituents")) print(df) At last, this work for me. Regards Eric is a great solution Beneath, I will try to use it with streamlit. Hey Emanuel (and anyone else following along here!), I know await/async (asyncio) can be pretty confusing if you haven't used it before – it's perfectly normal to use Python without it, but we think it's such an awesome feature that we wanted to use it for Beneath. To help explain it better, I added a new docs page to explain the what/how/why of it: python.docs.beneath.dev/misc/async... We want to keep the example snippets concise, but we'll try to link to this helper from the examples to avoid confusion :) Thanks for the input! I resolve called beneath in this way : import beneath import asyncio asyncio.run( beneath.write_full( stream_path="USERNAME/financial-reference-data/s-and-p-500-constituents", records=df, key=["day"], description="Daily commits to github.com/pandas-dev/pandas", )) Beneath is a great initiative, really appreciate it. 🥰🥰 Thanks Adwaith!
https://practicaldev-herokuapp-com.global.ssl.fastly.net/ericpgreen/turn-a-pandas-dataframe-into-an-api-57pk
CC-MAIN-2021-31
refinedweb
1,303
64.61
NAME
       ptrace - process trace

SYNOPSIS
       #include <sys/ptrace.h>

       long ptrace(enum __ptrace_request request, pid_t pid,
                   void *addr, void *data);

DESCRIPTION
       struct ptrace_peeksiginfo_args {
           u64 off;    /* Ordinal position in queue at which
                          to start copying signals */
           u32 flags;  /* PTRACE_PEEKSIGINFO_SHARED or 0 */
           s32 nr;     /* Number of signals to copy */
       };

       Currently, there is only one flag, PTRACE_PEEKSIGINFO_SHARED, for dumping signals from the process-wide signal queue. If this flag is not set, signals are read from the per-thread queue of the specified thread.

       status>>8 == (SIGTRAP | (PTRACE_EVENT_EXEC<<8))

       If the execing thread is not a thread group leader, the thread ID is reset to the thread group leader's ID before this stop. Since Linux 3.0, the former thread ID can be retrieved with PTRACE_GETEVENTMSG.

       Restart the stopped tracee process. If data is nonzero, it is interpreted as the number of a signal to be delivered to the tracee; otherwise, no signal is delivered. Thus, for example, the tracer can control whether a signal sent to the tracee is delivered or not. (addr is ignored.)

   Stopped states
       There are many kinds of states when the tracee is stopped, and in ptrace discussions they are often conflated. Therefore, it is important,

   PTRACE_EVENT stops
       When a (possibly multithreaded) process receives any signal except SIGKILL, the kernel selects an arbitrary thread which handles the signal. (If the signal is generated with signal injection in this manual page. Note that if the signal is blocked, signal-delivery-stop doesn't happen until the signal is unblocked, with the usual exception that SIGSTOP can't be blocked. Signal-delivery-stop is observed by the tracer as waitpid(2) returning with WIFSTOPPED(status) true, with the signal returned by WSTOPSIG(status). If the signal is SIGTRAP, this may be a different kind of ptrace-stop; see the "Syscall-stops" and "execve" sections below for details.
       If WSTOPSIG(status) returns a stopping signal, this may be a group-stop; see below.

   Signal injection and suppression
       In other words, SIGCONT may be not the first signal observed by the tracee after it was sent.

       Stopping signals cause (all threads of) a process to enter group-stop. This side effect happens after signal injection, and therefore can be suppressed by the tracer.

       In Linux 2.4 and earlier, the SIGSTOP signal can't be injected.

       PTRACE_GETSIGINFO can be used to retrieve a siginfo_t structure which corresponds to the delivered signal. PTRACE_SETSIGINFO may be used to modify it. If PTRACE_SETSIGINFO has been used to alter siginfo_t, the si_signo field and the sig parameter in the restarting command must match, otherwise the result is undefined.

       all tracees within the multithreaded process. As usual, every tracee reports its group-stop separately to the corresponding tracer. Group-stop is observed by the tracer as waitpid(2) returning with WIFSTOPPED(status) true, with the stopping signal available via WSTOPSIG(status).

       (SIGTRAP | PTRACE_EVENT_foo << 8). The following events exist:

       PTRACE_EVENT_FORK

       tracer, then WSTOPSIG(status) will give the value (SIGTRAP | 0x80). Syscall-stops can be distinguished from signal-delivery-stop with SIGTRAP by querying PTRACE_GETSIGINFO for the following cases:

       [Details of these kinds of stops are yet to be documented.]

   Informational and restarting ptrace commands
       ptrace(PTRACE_SETSIGINFO, pid, 0, &siginfo);
       ptrace(PTRACE_GETEVENTMSG, pid, 0, &long_var);
       ptrace(PTRACE_SETOPTIONS, pid, 0, PTRACE_O_flags);

       Note that some errors are not reported. For example, setting signal information (siginfo) may have no effect in some ptrace-stops, yet the call may succeed (return 0 and not set errno); querying PTRACE_GETEVENTMSG may succeed and return some random value if current ptrace-stop is not documented as returning a meaningful event message.
The callA suppression" raise(SIGSTOP); and allow the parent (which is our tracer now) to observe our signal-delivery-stop. If the PTRACE_O_TRACEFORK, PTRACE_O_TRACEVFORK, or PTRACE_O_TRACECLONE options are in effect, then children created by, respectively, vfork(2) or clone(2) with the CLONE_VFORK flag, fork(2) or clone(2) with the exit signal set to SIGCHLD, and other kinds of clone(2), are automatically attached to the same tracer which traced their parent. SIGSTOP is delivered to the children, causing them to enter signal-delivery ptraceWhen, and if the tracee was PTRACE_ATTACHed rather that PTRACE_SEIZEd,)) the PTRACE_O_TRACEEXEC option or using PTRACE_SEIZE and thus suppressing this extra SIGTRAP is the recommended approach. Real parentSVr4, 4.3BSD. NOTES-system- and architecture-specific. The offset supplied, and the data returned, might not entirely match with the definition of struct user. The size of a "word" is determined by the operating-system variant (e.g., for 32-bit Linux it is 32 bits). This page documents the way the ptrace() call works currently in Linux. Its behavior differs significantly on other flavors of UNIX. In any case, use of ptrace() is highly specific to the operating system and architecture. Ptrace access mode checking: - The real, effective, and saved-set user IDs of the target match the caller's user ID, and the real, effective, and saved-set group IDs of the target match the caller's group ID. -: - The caller and the target process are in the same user namespace, and the caller's capabilities are a proper superset of the target process's permitted capabilities. -. BUGSOn.) Contrary to the normal rules, the glibc wrapper for ptrace() can set errno to zero.
https://jlk.fjfi.cvut.cz/arch/manpages/man/ptrace.2.en
CC-MAIN-2019-30
refinedweb
872
54.32
mobile crushing plant hs code ,mobile concrete crushing plant ,crawler mobile crushing plant customs hs code crusher - lemon-grass.be tariff heading for screening plant | Mobile Crushers ... stone crushing at site, stone crusher China HS code import ... crusher and concrete and idem and title v and ...Get Price Crush Plant Mobile Crushers In China Bejing | Crusher ... Mobile Concrete Crusher Plants ... Find Details about Mobile Concrete Crusher Plants, Mobile … China; HS Code: ... about mobile crusher,mobile crushing plant …Get Price diesel operated ball mill and jaw crusher - saisi.co.za Movable small Stone crushing plant jaw crusher, ... Hs Code . Normally CloseOpen ... 6mm for oil and gas mobile crushing plant hs code ,mobile concrete crushing plant ...Get Price mobile and stationary crushing plant for sale - lenins.co.za stationary crushing plant Mobile ... crushing plants. Mobile crusher as concrete crushers ... Plant Crawler Crushing Plant Features of Mobile Cone ...Get Price Mobile crushing plant, Mobile crushing plant direct … Mobile crushing plant from Zhengzhou Huahong Machinery Equipment Co., Ltd.. Search High Quality Mobile crushing plant Manufacturing and …Get Price China Mobile Concrete Mixing Plant Portable Concrete … ... Find details about China Portable Concrete Plant, Mobile Concrete Mix Plant from Mobile ... HS Code: 84743100 ... crushing plant. stone crushing line ; cone ...Get Price mobile crushing plant - Tanzania Crusher mobile crushing plant - Tanzania Crusher. ... The mobile concrete crusher can be a basic crushing system. mobile concrete crusher ... crawler type mobile crusher and ...Get Price mobile stone crusher jabodetabek - mansco.org mobile stone crusher jabodetabek kingswoodpreschoolorg,crushers crawler type mobile crusher plant ... stone crusher portable » jual concrete crushing ... hs code …Get Price Mobile crusher Hire and sale - smart-university.eu ... please enter the postal code ... 
In the first part of this tutorial, I concentrated on React views. In fact, I didn’t do anything else. Except for a little bit of in-component interactivity, everything that resulted from those views was static DOM. I could have written a little bit of JavaScript inside the HTML page to get the same effect. Smaller applications like these seem to make the frameworks cumbersome, but they demonstrate core features in an isolated setting. React and Flux (and pretty much any other architecture, framework or library) really come into their own in larger applications. The same is true of Flux.

Flux is the architecture that goes along with React. It does the same job as MVC – an interactive page – but in a very different way. Flux is definitely not MVC, as you can see from the architecture diagram:

In a Flux architecture, the Dispatcher is a central co-ordination point. It receives actions and dispatches them to stores. Stores react to those actions, adjusting their data, before informing any dependent views of a change. The actions can come from anywhere – external events like AJAX loads as well as user events like clicks or changes.

Normally, I would go to a library to implement such an architecture. If I were in an MVC world, I would be getting AngularJS or EmberJS to assist. There are lots of libraries out there (Fluxxor, Reflux and Alt to name a few). They all implement the full Flux architecture, have great tutorials in written and video form and are full featured. However, I found them difficult to wrap my head around. That’s mostly because I was wrapping my head around Flux – the design pattern – and their library at the same time. I’m more interested in the design pattern. So I wrote my own.

I’m calling this “Flux Light”. It doesn’t implement some of the more cumbersome features of Flux. It’s written entirely in ES6 and it’s opinionated. It expects you to write your code in ES6 as well. It also expects no overlapping stores (i.e.
one store does not depend on another). It expects to be bundled with browserify or similar. I make no apologies for this. It’s a learning exercise and nothing more. (Of course, if folks feel that it’s useful to them, let me know and I will publish it on npm).

The code for the library is located in Client/lib in the repository. I’m going to show how to use it.

The Dispatcher

The Dispatcher is the center of the Flux architecture. It has a couple of requirements:

- Components should be able to use dispatch() with an Action
- Stores should be able to register for actions

All actions flow through the Dispatcher, so its code surface should be small and simple. This code is located in lib/Dispatcher.js. That is only the class though. You will want to initialize the dispatcher. Do this in Client/dispatcher.js like this:

import Dispatcher from './lib/Dispatcher';

var dispatcher = new Dispatcher({ logLevel: 'ALL' });

export default dispatcher;

The only options available are logging ones right now. I use the logLevel to set the minimal log level you want to see in the JavaScript console. The most common log levels will be ‘ALL’ or ‘OFF’. If you don’t specify anything, you will get errors only.

Actions

Actions are just calls into the dispatcher dispatch() method. I’ve got a class full of static methods to implement actions, located in Client/actions.js, like so:

import dispatcher from './dispatcher';

export default class Actions {
  static navigate(newRoute) {
    dispatcher.dispatch('NAVIGATE', { location: newRoute });
  }
}

I can use this class to generate actions. For example, I want to have the NAVIGATE action happen when I click on one of the navigation links.
I can adjust the NavLinks.jsx file like this:

import React from 'react';
import Actions from '../actions';

class NavLinks extends React.Component {
  onClick(route) {
    Actions.navigate(route);
  }

  render() {
    let visibleLinks = this.props.pages.filter(page => {
      return (page.nav === true && page.auth === false);
    });
    let linkComponents = visibleLinks.map(page => {
      let cssClass = (page.name === this.props.route) ? 'link active' : 'link';
      let handler = event => {
        return this.onClick(page.name, event);
      };
      return (<li className={cssClass} key={page.name} onClick={handler}>{page.title}</li>);
    });
    return (
      <div className="_navlinks">
        <ul>{linkComponents}</ul>
      </div>
    );
  }
}

Here I tie the onClick method via an event handler to the click event on the link. That, in turn, issues a navigate action to the dispatcher via the Actions class.

Somewhere, I have to bootstrap the dispatcher. That’s done in the app.jsx file:

import React from 'react';
import dispatcher from './dispatcher';
import AppView from './views/AppView.jsx';

dispatcher.dispatch('APPINIT');
React.render(<AppView/>, document.getElementById('root'));

The AppStore

In my prior example, I was passing the pages and route down from the bootstrap app.jsx file. I’m now going to store that information in a Store – a flux repository for data. I’ve got a nice API for this. Firstly, there is a class you can extend for most of the functionality – lib/Store. You have to implement a constructor to set the initial state of the store and an onAction() method to handle incoming actions from the dispatcher.

The API for the Store contains the following:

- Store.initialize(key, value) initializes a key-value pair in the store.
- Store.set(key, value, [squashEvents=false]) sets a key-value pair in the store. Normally, a store changed event will be triggered. If you set squashEvents=true, then the store changed event is squashed, allowing you to set multiple things before issuing a store changed event.
- Store.get(key) returns the value of the key in the store.
- Store.storeChanged() issues a store changed event.
- Store.registerView(callback) calls the callback whenever the store is changed. Returns an ID that you can use to de-register the view.
- Store.deregisterView(id) de-registers a prior view registration.

Creating a store is simple now. You don’t have to worry about a lot of the details. For instance, here is my stores/AppStore.jsx file:

import Store from '../lib/Store';
import find from 'lodash/collection/find';
import dispatcher from '../dispatcher';)));
  }

  onAction(actionType, data) {
    this.logger.debug(`Received Action ${actionType} with data`, data);
    switch (actionType) {
      case 'NAVIGATE':
        let newRoute = this.getNavigationRoute(data.location);
        if (newRoute !== this.get('route')) {
          this.set('route', newRoute);
          window.location.hash = `#${newRoute}`;
        }
        break;
      default:
        this.logger.debug('Unknown actionType for this store - ignoring');
        break;
    }
  }

  getNavigationRoute(route) {
    let newRoute = find(this.get('pages'), path => {
      return path.name === route.toLowerCase();
    });
    if (!newRoute) {
      newRoute = find(this.get('pages'), path => {
        return path.default && path.default === true;
      });
    }
    return newRoute.name || '';
  }
}

var appStore = new AppStore();
dispatcher.registerStore(appStore);
export default appStore;

The constructor uses initialize() to initialize two data blocks – the pages and the route from the original app.jsx. The onAction() method listens for NAVIGATE actions and acts accordingly. At the end, I create a singleton version of the AppStore and register it with the dispatcher – this causes the dispatcher to send actions to this store.

The Controller-View

In the Flux architecture, there are two types of React components. Controller-Views are linked to one or more stores and use state to maintain and update their children. Other React components are not linked to stores and DO NOT USE STATE – they only use props.
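Before getting into a concrete Controller-View, the Store contract listed above (initialize, set/get, storeChanged and view registration) can be sketched in a few lines of plain ES6. This is a hypothetical stand-in for illustration only, not the post's actual lib/Store code:

```javascript
// Minimal sketch of the Store contract (illustrative, not the real lib/Store).
class Store {
  constructor() {
    this.data = {};
    this.views = {};
    this.nextId = 0;
  }
  // initialize() seeds a value without firing a change event.
  initialize(key, value) { this.data[key] = value; }
  get(key) { return this.data[key]; }
  // set() fires a change event unless squashEvents is true.
  set(key, value, squashEvents = false) {
    this.data[key] = value;
    if (!squashEvents) this.storeChanged();
  }
  storeChanged() {
    Object.keys(this.views).forEach(id => this.views[id]());
  }
  registerView(callback) {
    const id = ++this.nextId;
    this.views[id] = callback;
    return id;   // the caller keeps this to deregister later
  }
  deregisterView(id) { delete this.views[id]; }
}

// Exercise it: a view counts how many change events it sees.
const store = new Store();
store.initialize('route', 'welcome');
let notifications = 0;
const viewId = store.registerView(() => { notifications += 1; });
store.set('pages', ['welcome', 'flickr'], true); // squashed: no event
store.set('route', 'flickr');                    // fires one event
store.deregisterView(viewId);
store.set('route', 'spells');                    // no listener any more
```

The squashEvents flag is what lets a constructor set several keys without spamming every registered view with intermediate change events.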
Controller-Views need to do the following:

- Call registerView on each store they are associated with when the component is mounted (use the componentWillMount() lifecycle method)
- Call deregisterView on each store they are associated with when the component is about to be unmounted (use the componentWillUnmount() lifecycle method)

In my previous post, I created an AppView.jsx to use state with the start of a router. I can alter that to become a Controller-View:

import React from 'react';
import appStore from '../stores/AppStore';
import NavBar from '../views/NavBar';
import Welcome from '../views/Welcome';
import Flickr from '../views/Flickr';
import Spells from '../views/Spells';

class AppView extends React.Component {
  constructor(props) {
    super(props);
    this.state = { pages: [], route: 'welcome' };
  }

  componentWillMount() {
    this.appStoreId = appStore.registerView(() => {
      this.updateState();
    });
    this.updateState();
  }

  componentWillUnmount() {
    appStore.deregisterView(this.appStoreId);
  }

  updateState() {
    this.setState({
      route: appStore.get('route'),
      pages: appStore.get('pages')
    });
  }

  render() {
    let Route;
    switch (this.state.route) {
      case 'welcome':
        Route = Welcome;
        break;
      case 'flickr':
        Route = Flickr;
        break;
      case 'spells':
        Route = Spells;
        break;
      default:
        Route = Welcome;
    }
    return (
      <div id="pagehost">
        <NavBar pages={this.state.pages} route={this.state.route}/>
        <Route/>
      </div>
    );
  }
}

export default AppView;

The constructor just sets some empty state variables. When the component is mounted, the view is registered with the store and the state is updated from the store. When the component is unmounted (or just before), the view is deregistered again. The function updateState() is the callback that the store uses to inform the view of an updated state.

In the render() function, I’ve expanded the number of routes to the full complement. This also matches the list defined in the AppStore.
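Stripped of React specifics, the mount/unmount handshake that AppView performs can be exercised directly. The store and component below are simplified hypothetical stand-ins, not the post's real AppStore or AppView:

```javascript
// A tiny stand-in store with just the pieces the handshake needs.
const appStore = {
  data: { route: 'welcome' },
  views: new Map(),
  nextId: 0,
  get(key) { return this.data[key]; },
  set(key, value) {
    this.data[key] = value;
    this.views.forEach(cb => cb());          // notify registered views
  },
  registerView(cb) { this.views.set(++this.nextId, cb); return this.nextId; },
  deregisterView(id) { this.views.delete(id); },
};

// A fake controller-view: same lifecycle shape as AppView, no React.
class FakeControllerView {
  constructor() { this.state = { route: null }; }
  componentWillMount() {
    this.appStoreId = appStore.registerView(() => this.updateState());
    this.updateState();                      // pull the initial state
  }
  componentWillUnmount() { appStore.deregisterView(this.appStoreId); }
  updateState() { this.state.route = appStore.get('route'); }
}

const view = new FakeControllerView();
view.componentWillMount();
const initialRoute = view.state.route;       // picked up from the store
appStore.set('route', 'flickr');             // store pushes the change
const routeAfterChange = view.state.route;
view.componentWillUnmount();
appStore.set('route', 'spells');             // view no longer listens
const routeAfterUnmount = view.state.route;  // unchanged after deregister
```

The important detail is the symmetry: every registerView in componentWillMount must have a matching deregisterView in componentWillUnmount, or the store will keep calling into a dead component.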
If you wanted to keep the definition of the pages in one place, you could issue an action from this component to set the pages, then let the state update them once they’ve gone through the system. You would not want to just set the pages within the store. That would most definitely not be flux-like. To round out this sequence, I’ve added a couple of pages – Flickr.jsx and Spells.jsx – as simple static React components.

You can get this code drop from my GitHub Repository.

Handling AJAX in a Flux Architecture

Let’s say I wanted to fill in the details of the Flickr page. This is designed to bring the first 20 images for a specific tag back to the page. To do that, I need to make an AJAX JSONP request to the Flickr API.

In a React world, I will do two actions. The first will be when the Flickr page comes into focus. I want to issue a “REQUEST-FLICKR-DATA” action at that point. This will cause the store to kick off the AJAX request. When the request comes back, the store will issue a “PROCESS-FLICKR-DATA” action with the data that came back. This ensures that all stores get notified of the AJAX request and response.

Why the store? The store is the central source of truth for all data within a Flux architecture. The views should not be requesting the data.

Why the second action? Well, in this simple application, we could use just one action – the request. However, let’s say you had another store that counted the number of in-flight AJAX requests – or one that did nothing but do AJAX requests (for example, to allow the inclusion of authentication tokens). You may have one store handling the request and one store handling the response.
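The request/process round trip can be sketched end to end with the network faked by an immediate callback. The Dispatcher, Actions and store below are illustrative stand-ins for the real thing, and a real AJAX response would of course arrive asynchronously:

```javascript
// Illustrative sketch of the two-action AJAX flow (not the post's real code).
class Dispatcher {
  constructor() { this.stores = []; }
  registerStore(store) { this.stores.push(store); }
  dispatch(actionType, data) {
    this.stores.forEach(store => store.onAction(actionType, data));
  }
}
const dispatcher = new Dispatcher();

const Actions = {
  requestFlickrData(tag) { dispatcher.dispatch('REQUEST-FLICKR-DATA', { tag }); },
  processFlickrData(data) { dispatcher.dispatch('PROCESS-FLICKR-DATA', data); },
};

// Stand-in for the network: calls back immediately with canned data.
function fakeAjax(params, done) {
  done({ items: [{ title: `photo of ${params.tag}` }] });
}

const imageStore = {
  images: [],
  onAction(actionType, data) {
    switch (actionType) {
      case 'REQUEST-FLICKR-DATA':
        // Action one: the store owns the request...
        fakeAjax({ tag: data.tag }, response => {
          // ...and the response re-enters the system as action two,
          // so every store gets to see it.
          Actions.processFlickrData(response);
        });
        break;
      case 'PROCESS-FLICKR-DATA':
        this.images = data.items;
        break;
    }
  },
};
dispatcher.registerStore(imageStore);

Actions.requestFlickrData('seattle');
```

Because the response is itself dispatched as an action, a second store (an in-flight counter, say) could react to either leg of the trip without the image store knowing about it.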
To implement this, I first added the actions to my actions.js file:

import dispatcher from './dispatcher';

export default class Actions {
  static navigate(newRoute) {
    dispatcher.dispatch('NAVIGATE', { location: newRoute });
  }

  static requestFlickrData(tag) {
    dispatcher.dispatch('REQUEST-FLICKR-DATA', { tag: tag });
  }

  static processFlickrData(data) {
    dispatcher.dispatch('PROCESS-FLICKR-DATA', data);
  }
}

Then I altered the onAction() method within the stores/AppStore.js file:

case 'REQUEST-FLICKR-DATA':
  let lastRequest = this.get('lastFlickrRequest');
  let currentTime = Date.now();
  let fiveMinutes = 5 * 60 * 1000;
  if ((currentTime - lastRequest) < fiveMinutes) {
    // Less than five minutes since the last request - keep the cached images
    return;
  }
  this.set('lastFlickrRequest', currentTime, true);
  $.ajax({
    url: '',
    data: {
      tags: data.tag,
      tagmode: 'any',
      format: 'json'
    },
    dataType: 'jsonp',
    jsonp: 'jsoncallback'
  }).done(response => {
    Actions.processFlickrData(response);
  });
  break;
case 'PROCESS-FLICKR-DATA':
  this.set('images', data.items);
  break;

In this case, I only request the Flickr data if five minutes have elapsed. When the response comes back, I trigger another action. This is, in my case, processed just below the request by the PROCESS-FLICKR-DATA block. If five minutes have not elapsed, then no request is made and no changes to the page are made. You can flick back and forth between the welcome and flickr page all you want – it won’t change.

Of course, there was some setup required for this code to work:

import Store from '../lib/Store';
import find from 'lodash/collection/find';
import dispatcher from '../dispatcher';
import Actions from '../actions';
import $ from 'jquery';

Don’t forget to add jquery to the list of dependencies in package.json or via npm install --save jquery.

Now that I have the actions and store sorted for the new data source, I can convert Flickr.jsx to a Controller-View and render the images that get loaded. I’m going to kick off the request in the constructor. Since I have a rudimentary cache going, it won’t hit the Flickr API badly.
Here is the code in views/Flickr.jsx:

import React from 'react';
import Actions from '../actions';
import appStore from '../stores/AppStore';

class Flickr extends React.Component {
  constructor(props) {
    super(props);
    this.state = { images: [], tag: 'seattle' };
    Actions.requestFlickrData(this.state.tag);
  }

  componentWillMount() {
    this.appStoreId = appStore.registerView(() => {
      this.updateState();
    });
    this.updateState();
  }

  componentWillUnmount() {
    appStore.deregisterView(this.appStoreId);
  }

  updateState() {
    this.setState({ images: appStore.get('images') });
  }

  render() {
    let images = this.state.images.map(image => {
      let s = image.media.m.split('/');
      let fn = s[s.length - 1].split('.')[0];
      return (
        <div className="col-sm-6 col-md-3" key={fn}>
          <a className="thumbnail"><img src={image.media.m}/></a>
        </div>
      );
    });
    return (
      <section id="flickr">
        <h2>Flickr</h2>
        <div className="row">{images}</div>
      </section>
    );
  }
}

export default Flickr;

There are a couple of things I could have done differently here. Firstly, I could have requested the flickr data only when the component was mounted. This version asks for the data when the component is created. I wanted to minimize those round trips to the Flickr API during testing and this seemed a reasonable way of doing it. Given that I have the cache functionality in the store, I could reasonably see moving the request to the componentWillMount() method.

I could have made the Flickr component just a regular component. This would have required me to change AppView so that it made the request and passed that down when the page was instantiated. I didn’t think this was a good idea. As the application expanded, I would have all sorts of extra code in the AppView. Making each “page” have state and be connected to a store seems much more reasonable to me.

Finally, I could have made the Flickr data its own store. As this application expands, you naturally want to have stores handle specific data.
For instance, you might have a store that deals with “books” or “friends” or “images”. Inevitably, you will have a simple store that deals with navigation and authentication. So, yes, I see merit in creating another store called ImageStore in a larger application. It seemed overkill for this application.

You can get the code from my GitHub Repository.

Wrap Up

This is the end of the React/Flux version of the Aurelia tutorial, but not the end of my tutorial. I like to add authentication and monitoring to my applications as well, so I’ll be taking a look at those next.

This is a good time to compare the four frameworks I have worked with. I’ve worked with Angular, Aurelia, Polymer and React/Flux now. Let’s discuss each one.

I felt Angular was heavy on the conventional use. It didn’t allow me to be free with my JavaScript. I felt that there was the Angular way and everything else was really a bad idea. Yes, it did everything I wanted, but the forklift upgrade (coming for v2.0) and the difficulty in doing basic things, not to mention the catalog of directives one must know for simple pages, meant it was heavier than I wanted to use.

Aurelia was a breath of fresh air in that respect. It worked with ES6, not against it. Everything is a class. However, the lack of data management right now, plus a relatively heavy-weight library on the download makes this a poor choice for simple applications. I suppose I could have done the same as I did here and used browserify to bundle everything together, but it isn’t easy.

Polymer is a third of a framework, in much the same way that React is a partial framework. It can’t stand alone. You need page routing and data management. These are, in the Polymer world, extra web components that you just download. Since there is no “import” system that allows for importing from standardized locations, you end up with a lot of hacking.
Polymer will find a place, particularly when you consider the upcoming ES6-compatible MVC architectures like Angular-2 and Aurelia.

React/Flux is definitely focussed on large applications. Even my modest tutorial application (and not including the library code I wrote) was a significant amount of code. However, I can appreciate the flow of data and I understand that flow of data. That allows me to short-circuit the debugging process and go straight to the source of the problem. As the application grows, the heaviness of the framework coding becomes less of an issue. I can see areas for improvement in the React/Flux code I wrote – some boilerplate that can be kicked out into classes all of their own. I like what I have here.

Mobile is another area that I am increasingly becoming interested in. With React, there is React Native – a method of using React code within an iOS application that compiles to a native iOS app. React is also much more usable in an Apache Cordova app – allowing Android and Windows Phone coverage as well. Angular works well with Apache Cordova (see the Ionic Framework). Aurelia does not work well with mobile apps yet. Polymer is “just another component”, but the polyfills that are necessary to make Polymer work on Apache Cordova are heavier than I expected.

Overall, I’ll continue watching React with great interest. In my next post, I’ll cover authentication with my favorite authentication service – Auth0.
I created a VS2017 solution with 2 projects, Client and Manager, following the Request-Reply pattern, and the 2 projects work well just until another project is added, called Services. All 3 of them are executable (.exe). The Services project is created for another Request-Reply communication with the Manager.

As soon as Services.hpp is written with:

#include "ndds_cpp.h"

the following build error appears:

C2440 'initializing': cannot convert from 'initializer list' to 'DDS_LoggingQosPolicy' Services CorePolicyAdapter.hpp 165

It looks like you're including both the traditional C++ API (ndds_cpp.h) and the modern C++ API (CorePolicyAdapter.hpp). These two cannot be included in the same translation unit. More info:
https://community.rti.com/forum-topic/c2440-error-corepolicyadapterhpp
CC-MAIN-2022-33
refinedweb
135
61.73
Lucene Change Log

For more information on past and future Lucene versions, please see:

======================= Lucene 3.3.0 =======================

Changes in backwards compatibility policy

* LUCENE-3140: IndexOutput.copyBytes now takes a DataInput (superclass of IndexInput) as its first argument. (Robert Muir, Dawid Weiss, Mike McCandless)

* LUCENE-3191: FieldComparator.value now returns an Object not Comparable; FieldDoc.fields also changed from Comparable[] to Object[] (Uwe Schindler, Mike McCandless)

* LUCENE-3208: Made deprecated methods Query.weight(Searcher) and Searcher.createWeight() final to prevent override. If you have overridden one of these methods, cut over to the non-deprecated implementation. (Uwe Schindler, Robert Muir, Yonik Seeley)

* LUCENE-3238: Made MultiTermQuery.rewrite() final, to prevent problems (such as not properly setting rewrite methods, or not working correctly with things like SpanMultiTermQueryWrapper). To rewrite to a simpler form, instead return a simpler enum from getEnum(IndexReader). For example, to rewrite to a single term, return a SingleTermEnum. (ludovic Boutros, Uwe Schindler, Robert Muir)

Changes in runtime behavior

* LUCENE-2834: the hash used to compute the lock file name when the lock file is not stored in the index has changed. This means you will see a different lucene-XXX-write.lock in your lock directory. (Robert Muir, Uwe Schindler, Mike McCandless)

* LUCENE-3146: IndexReader.setNorm throws IllegalStateException if the field does not store norms. (Shai Erera, Mike McCandless)

* LUCENE-3198: On Linux, if the JRE is 64 bit and supports unmapping, FSDirectory.open now defaults to MMapDirectory instead of NIOFSDirectory since MMapDirectory gives better performance.
(Mike McCandless)

* LUCENE-3200: MMapDirectory now uses chunk sizes that are powers of 2. When setting the chunk size, it is rounded down to the next possible value. The new default value for 64 bit platforms is 2^30 (1 GiB), for 32 bit platforms it stays unchanged at 2^28 (256 MiB). Internally, MMapDirectory now only uses one dedicated final IndexInput implementation supporting multiple chunks, which makes Hotspot's life easier. (Uwe Schindler, Robert Muir, Mike McCandless)

Bug fixes

* LUCENE-3147, LUCENE-3152: Fixed open file handles leaks in many places in the code. Now MockDirectoryWrapper (in test-framework) tracks all open files, including locks, and fails if the test fails to release all of them. (Mike McCandless, Robert Muir, Shai Erera, Simon Willnauer)

* LUCENE-3102: CachingCollector.replay was failing to call setScorer per-segment (Martijn van Groningen via Mike McCandless)

* LUCENE-3183: Fix rare corner case where seeking to empty term (field="", term="") with terms index interval 1 could hit ArrayIndexOutOfBoundsException (selckin, Robert Muir, Mike McCandless)

* LUCENE-3208: IndexSearcher had its own private similarity field and corresponding get/setter overriding Searcher's implementation. If you set a different Similarity instance on IndexSearcher, methods implemented in the superclass Searcher were not using it, leading to strange bugs. (Uwe Schindler, Robert Muir)

* LUCENE-3197: Fix core merge policies to not over-merge during background optimize when documents are still being deleted concurrently with the optimize (Mike McCandless)

* LUCENE-3222: The RAM accounting for buffered delete terms was failing to measure the space required to hold the term's field and text character data. (Mike McCandless)

* LUCENE-3238: Fixed bug where using WildcardQuery("prefix*") inside of a SpanMultiTermQueryWrapper rewrote incorrectly and returned an error instead.
(ludovic Boutros, Uwe Schindler, Robert Muir)

API Changes

* LUCENE-3208: Renamed protected IndexSearcher.createWeight() to expert public method IndexSearcher.createNormalizedWeight() as this better describes what this method does. The old method is still there for backwards compatibility. Query.weight() was deprecated and simply delegates to IndexSearcher. Both deprecated methods will be removed in Lucene 4.0. (Uwe Schindler, Robert Muir, Yonik Seeley)

* LUCENE-3197: MergePolicy.findMergesForOptimize now takes Map<SegmentInfo,Boolean> instead of Set as the second argument, so the merge policy knows which segments were originally present vs produced by an optimizing merge (Mike McCandless)

Optimizations

* LUCENE-1736: DateTools.java general improvements. (David Smiley via Steve Rowe)

New Features

* LUCENE-3140: Added experimental FST implementation to Lucene. (Robert Muir, Dawid Weiss, Mike McCandless)

* LUCENE-3193: A new TwoPhaseCommitTool allows running a 2-phase commit algorithm over objects that implement the new TwoPhaseCommit interface (such as IndexWriter). (Shai Erera)

* LUCENE-3191: Added TopDocs.merge, to facilitate merging results from different shards (Uwe Schindler, Mike McCandless)

* LUCENE-3179: Added OpenBitSet.prevSetBit (Paul Elschot via Mike McCandless)

* LUCENE-3210: Made TieredMergePolicy more aggressive in reclaiming segments with deletions; added new methods set/getReclaimDeletesWeight to control this. (Mike McCandless)

Build

* LUCENE-1344: Create OSGi bundle using dev-tools/maven. (Nicolas Lalevée, Luca Stancapiano via ryan)

======================= Lucene 3.2.0 =======================

Changes in backwards compatibility policy

* LUCENE-2953: PriorityQueue's internal heap was made private, as subclassing with generics can lead to ClassCastException. For advanced use (e.g. in Solr) a method getHeapArray() was added to retrieve the internal heap array as a non-generic Object[].
(Uwe Schindler, Yonik Seeley)

* LUCENE-1076: IndexWriter.setInfoStream now throws IOException (Mike McCandless, Shai Erera)

* LUCENE-3084: MergePolicy.OneMerge.segments was changed from SegmentInfos to a List<SegmentInfo>. SegmentInfos itsself was changed to no longer extend Vector<SegmentInfo> (to update code that is using Vector-API, use the new asList() and asSet() methods returning unmodifiable collections; modifying SegmentInfos is now only possible through the explicitely declared methods). IndexWriter.segString() now takes Iterable<SegmentInfo> instead of List. A simple recompile should fix this. MergePolicy and SegmentInfos are internal/experimental APIs not covered by the strict backwards compatibility policy. (Uwe Schindler, Mike McCandless)

Changes in runtime behavior

* LUCENE-3065: When a NumericField is retrieved from a Document loaded from IndexReader (or IndexSearcher), it will now come back as NumericField not as a Field with a string-ified version of the numeric value you had indexed. Note that this only applies for newly-indexed Documents; older indices will still return Field with the string-ified numeric value. If you call Document.get(), the value comes still back as String, but Document.getFieldable() returns NumericField instances. (Uwe Schindler, Ryan McKinley, Mike McCandless)

* LUCENE-1076: Changed the default merge policy from LogByteSizeMergePolicy to TieredMergePolicy, as of Version.LUCENE_32 (passed to IndexWriterConfig), which is able to merge non-contiguous segments. This means docIDs no longer necessarily stay "in order" during indexing. If this is a problem then you can use either of the LogMergePolicy impls. (Mike McCandless)

New features

* LUCENE-3082: Added index upgrade tool oal.index.IndexUpgrader that allows to upgrade all segments to last recent supported index format without fully optimizing.
(Uwe Schindler, Mike McCandless)

* LUCENE-1076: Added TieredMergePolicy which is able to merge non-contiguous segments, which means docIDs no longer necessarily stay "in order". (Mike McCandless, Shai Erera)

* LUCENE-3071: Adding ReversePathHierarchyTokenizer, added skip parameter to PathHierarchyTokenizer (Olivier Favre via ryan)

* LUCENE-1421, LUCENE-3102: added CachingCollector which allow you to cache document IDs and scores encountered during the search, and "replay" them to another Collector. (Mike McCandless, Shai Erera)

* LUCENE-3112: Added experimental IndexWriter.add/updateDocuments, enabling a block of documents to be indexed, atomically, with guaranteed sequential docIDs. (Mike McCandless)

API Changes

* LUCENE-3061: IndexWriter's getNextMerge() and merge(OneMerge) are now public (though @lucene.experimental), allowing for custom MergeScheduler implementations. (Shai Erera)

* LUCENE-3065: Document.getField() was deprecated, as it throws ClassCastException when loading lazy fields or NumericFields. (Uwe Schindler, Ryan McKinley, Mike McCandless)

* LUCENE-2027: Directory.touchFile is deprecated and will be removed in 4.0. (Mike McCandless)

Optimizations

* LUCENE-2990: ArrayUtil/CollectionUtil.*Sort() methods now exit early on empty or one-element lists/arrays. (Uwe Schindler)

* LUCENE-2897: Apply deleted terms while flushing a segment. We still buffer deleted terms to later apply to past segments. (Mike McCandless)

* LUCENE-3126: IndexWriter.addIndexes copies incoming segments into CFS if they aren't already and MergePolicy allows that. (Shai Erera)

Bug fixes

* LUCENE-2996: addIndexes(IndexReader) did not flush before adding the new indexes, causing existing deletions to be applied on the incoming indexes as well.
(Shai Erera, Mike McCandless)

* LUCENE-3024: Index with more than 2.1B terms was hitting AIOOBE when seeking TermEnum (eg used by Solr's faceting) (Tom Burton-West, Mike McCandless)

* LUCENE-3042: When a filter or consumer added Attributes to a TokenStream chain after it was already (partly) consumed [or clearAttributes(), captureState(), cloneAttributes(),... was called by the Tokenizer], the Tokenizer calling clearAttributes() or capturing state after addition may not do this on the newly added Attribute. This bug affected only very special use cases of the TokenStream-API, most users would not have recognized it. (Uwe Schindler, Robert Muir)

* LUCENE-3054: PhraseQuery can in some cases stack overflow in SorterTemplate.quickSort(). This fix also adds an optimization to PhraseQuery as term with lower doc freq will also have less positions. (Uwe Schindler, Robert Muir, Otis Gospodnetic)

* LUCENE-3068: sloppy phrase query failed to match valid documents when multiple query terms had same position in the query. (Doron Cohen)

* LUCENE-3012: Lucene writes the header now for separate norm files (*.sNNN) (Robert Muir)

Build

* LUCENE-3006: Building javadocs will fail on warnings by default. Override with -Dfailonjavadocwarning=false (sarowe, gsingers)

* LUCENE-3128: "ant eclipse" creates a .project file for easier Eclipse integration (unless one already exists). (Daniel Serodio via Shai Erera)

Test Cases

* LUCENE-3002: added 'tests.iter.min' to control 'tests.iter' by allowing to stop iterating if at least 'tests.iter.min' ran and a failure occured. (Shai Erera, Chris Hostetter)

======================= Lucene 3.1.0 =======================

Changes in backwards compatibility policy

* LUCENE-2719: Changed API of internal utility class org.apache.lucene.util.SorterTemplate to support faster quickSort using pivot values and also merge sort and insertion sort. If you have used this class, you have to implement two more methods for handling pivots.
(Uwe Schindler, Robert Muir, Mike McCandless) * LUCENE-1923: Renamed SegmentInfo & SegmentInfos segString method to toString. These are advanced APIs and subject to change suddenly. (Tim Smith via Mike McCandless) * LUCENE-2190: Removed deprecated customScore() and customExplain() methods from experimental CustomScoreQuery. (Uwe Schindler) * LUCENE-2286: Enabled DefaultSimilarity.setDiscountOverlaps by default. This means that terms with a position increment gap of zero do not affect the norms calculation by default. (Robert Muir) * LUCENE-2320: MergePolicy.writer is now of type SetOnce, which allows setting the IndexWriter for a MergePolicy exactly once. You can change references to 'writer' from writer.doXYZ() to writer.get().doXYZ() (it is also advisable to add an assert writer != null; before you access the wrapped IndexWriter.) In addition, MergePolicy only exposes a default constructor, and the one that took IndexWriter as argument has been removed from all MergePolicy extensions. (Shai Erera via Mike McCandless) * LUCENE-2328: SimpleFSDirectory.SimpleFSIndexInput was moved to FSDirectory.FSIndexInput. Anyone extending this class will have to fix their code on upgrading. (Earwin Burrfoot via Mike McCandless) * LUCENE-2302: The new interface for term attributes, CharTermAttribute, now implements CharSequence. This requires the toString() methods of CharTermAttribute, deprecated TermAttribute, and Token to return only the term text and no other attribute contents. LUCENE-2374 implements an attribute reflection API to no longer rely on toString() for attribute inspection. (Uwe Schindler, Robert Muir) * LUCENE-2372, LUCENE-2389: StandardAnalyzer, KeywordAnalyzer, PerFieldAnalyzerWrapper, WhitespaceTokenizer are now final. Also removed the now obsolete and deprecated Analyzer.setOverridesTokenStreamMethod().
Analyzer and TokenStream base classes now have an assertion in their ctor that checks that subclasses are final or at least have final implementations of incrementToken(), tokenStream(), and reusableTokenStream(). (Uwe Schindler, Robert Muir) * LUCENE-2316: Directory.fileLength contract was clarified - it returns the actual file's length if the file exists, and throws FileNotFoundException otherwise. Returning length=0 for a non-existent file is no longer allowed. If you relied on that, make sure to catch the exception. (Shai Erera) * LUCENE-2386: IndexWriter no longer performs an empty commit upon new index creation. Previously, if you passed an empty Directory and set OpenMode to CREATE*, IndexWriter would make a first empty commit. If you need that behavior you can call writer.commit()/close() immediately after you create it. (Shai Erera, Mike McCandless) * LUCENE-2733: Removed public constructors of utility classes with only static methods to prevent instantiation. (Uwe Schindler) * LUCENE-2602: The default (LogByteSizeMergePolicy) merge policy now takes deletions into account by default. You can disable this by calling setCalibrateSizeByDeletes(false) on the merge policy. (Mike McCandless) * LUCENE-2529, LUCENE-2668: Position increment gap and offset gap of empty values in multi-valued fields has been changed for some cases in the index. If you index empty fields and use positions/offsets information on those fields, reindexing is recommended. (David Smiley, Koji Sekiguchi) * LUCENE-2804: Directory.setLockFactory now declares throwing an IOException. (Shai Erera, Robert Muir) * LUCENE-2837: Added deprecations noting that in 4.0, Searcher and Searchable are collapsed into IndexSearcher; contrib/remote and MultiSearcher have been removed.
(Mike McCandless) * LUCENE-2854: Deprecated SimilarityDelegator and Similarity.lengthNorm; the latter is now final, forcing any custom Similarity impls to cut over to the more general computeNorm (Robert Muir, Mike McCandless) * LUCENE-2869: Deprecated Query.getSimilarity: instead of using "runtime" subclassing/delegation, subclass the Weight instead. (Robert Muir) * LUCENE-2674: A new idfExplain method was added to Similarity that accepts an incoming docFreq. If you subclass Similarity, make sure you also override this method on upgrade. (Robert Muir, Mike McCandless) Changes in runtime behavior * LUCENE-1923: Made IndexReader.toString() produce something meaningful (Tim Smith via Mike McCandless) * LUCENE-2179: CharArraySet.clear() is now functional. (Robert Muir, Uwe Schindler) * LUCENE-2455: IndexWriter.addIndexes no longer optimizes the target index before it adds the new ones. Also, the existing segments are not merged and so the index will not end up with a single segment (unless it was empty before). In addition, addIndexesNoOptimize was renamed to addIndexes and no longer invokes optimize(). * LUCENE-2663: IndexWriter no longer forcefully clears any existing locks when create=true. This was a holdover from when SimpleFSLockFactory was the default locking implementation, and, even then it was dangerous since it could mask bugs in IndexWriter's usage, allowing applications to accidentally open two writers on the same directory. (Mike McCandless) * LUCENE-2701: maxMergeMBForOptimize and maxMergeDocs constraints set on LogMergePolicy now affect optimize() as well (as opposed to only regular merges). This means that you can run optimize() and segments that are too large won't be merged. (Shai Erera) * LUCENE-2753: IndexReader and DirectoryReader .listCommits() now return a List, guaranteeing the commits are sorted from oldest to latest.
(Shai Erera) * LUCENE-2785: TopScoreDocCollector, TopFieldCollector and the IndexSearcher search methods that take an int nDocs will now throw IllegalArgumentException if nDocs is 0. Instead, you should use the newly added TotalHitCountCollector. (Mike McCandless) * LUCENE-2790: LogMergePolicy.useCompoundFile's logic now factors in noCFSRatio to determine whether the passed in segment should be compound. (Shai Erera, Earwin Burrfoot) * LUCENE-2805: IndexWriter now increments the index version on every change to the index instead of for every commit. Committing or closing the IndexWriter without any changes to the index will not cause any index version increment. (Simon Willnauer, Mike McCandless) * LUCENE-2650, LUCENE-2825: The behavior of FSDirectory.open has changed. On 64-bit Windows and Solaris systems that support unmapping, FSDirectory.open returns MMapDirectory. Additionally the behavior of MMapDirectory has been changed to enable unmapping by default if supported by the JRE. (Mike McCandless, Uwe Schindler, Robert Muir) * LUCENE-2829: Improve the performance of "primary key" lookup use case (running a TermQuery that matches one document) on a multi-segment index. (Robert Muir, Mike McCandless) * LUCENE-2010: Segments with 100% deleted documents are now removed on IndexReader or IndexWriter commit. (Uwe Schindler, Mike McCandless) * LUCENE-2960: Allow some changes to IndexWriterConfig to take effect "live" (after an IW is instantiated), via IndexWriter.getConfig().setXXX(...) (Shay Banon, Mike McCandless) API Changes * LUCENE-2076: Rename FSDirectory.getFile -> getDirectory. (George Aroush via Mike McCandless) * LUCENE-1260: Change norm encode (float->byte) and decode (byte->float) to be instance methods not static methods. 
This way a custom Similarity can alter how norms are encoded, though they must still be encoded as a single byte (Johan Kindgren via Mike McCandless) * LUCENE-2103: NoLockFactory should have a private constructor; until Lucene 4.0 the default one will be deprecated. (Shai Erera via Uwe Schindler) * LUCENE-2177: Deprecate the Field ctors that take byte[] and Store. Since the removal of compressed fields, Store can only be YES, so it's not necessary to specify. (Erik Hatcher via Mike McCandless) * LUCENE-2200: Several final classes had non-overriding protected members. These were converted to private, and unused protected constructors were removed. (Steven Rowe via Robert Muir) * LUCENE-2240: SimpleAnalyzer and WhitespaceAnalyzer now have Version ctors. (Simon Willnauer via Uwe Schindler) * LUCENE-2259: Add IndexWriter.deleteUnusedFiles, to attempt removing unused files. This is only useful on Windows, which prevents deletion of open files. IndexWriter will eventually remove these files itself; this method just lets you do so when you know the files are no longer open by IndexReaders. (luocanrao via Mike McCandless) * LUCENE-2282: IndexFileNames is exposed as a public class allowing for easier use by external code. In addition it offers a matchExtension method which callers can use to query whether a certain file matches a certain extension. (Shai Erera via Mike McCandless) * LUCENE-2015: Add a static method foldToASCII to ASCIIFoldingFilter to expose its folding logic. (Cédrik Lime via Robert Muir) * LUCENE-2294: IndexWriter constructors have been deprecated in favor of a single ctor which accepts IndexWriterConfig and a Directory. You can set all the parameters related to IndexWriter on IndexWriterConfig. The different setter/getter methods were deprecated as well. One should call writer.getConfig().getXYZ() to query for a parameter XYZ. Additionally, the setters/getters related to MergePolicy were deprecated as well.
One should interact with the MergePolicy directly. (Shai Erera via Mike McCandless) * LUCENE-2320: IndexWriter's MergePolicy configuration was moved to IndexWriterConfig and the respective methods on IndexWriter were deprecated. (Shai Erera via Mike McCandless) * LUCENE-2328: Directory now itself keeps track of the files that are written but not yet fsynced. The old Directory.sync(String file) method is deprecated and replaced with Directory.sync(Collection<String> files). Take a look at FSDirectory to see a sample of how such tracking might look, if needed in your custom Directories. (Earwin Burrfoot via Mike McCandless) * LUCENE-2302: Deprecated TermAttribute and replaced it with a new CharTermAttribute. The change is backwards compatible, so mixed new/old TokenStreams all work on the same char[] buffer independent of which interface they use. CharTermAttribute has shorter method names and implements CharSequence and Appendable. This allows usage like Java's StringBuilder in addition to direct char[] access. Also terms can directly be used in places where CharSequence is allowed (e.g. regular expressions). (Uwe Schindler, Robert Muir) * LUCENE-2402: IndexWriter.deleteUnusedFiles now deletes unreferenced commit points too. If you use an IndexDeletionPolicy which holds onto index commits (such as SnapshotDeletionPolicy), you can call this method to remove those commit points when they are not needed anymore (instead of waiting for the next commit). (Shai Erera) * LUCENE-2481: SnapshotDeletionPolicy.snapshot() and release() were replaced with equivalent ones that take a String (id) as argument. You can pass whatever ID you want, as long as you use the same one when calling both. (Shai Erera) * LUCENE-2356: Add IndexWriterConfig.set/getReaderTermIndexDivisor, to set what IndexWriter passes for termsIndexDivisor to the readers it opens internally when applying deletions or creating a near-real-time reader.
(Earwin Burrfoot via Mike McCandless) * LUCENE-2167,LUCENE-2699,LUCENE-2763,LUCENE-2847: StandardTokenizer/Analyzer in common/standard/ now implement the Word Break rules from the Unicode 6.0.0 Text Segmentation algorithm (UAX#29), covering the full range of Unicode code points, including values from U+FFFF to U+10FFFF. ClassicTokenizer/Analyzer retains the old (pre-Lucene 3.1) StandardTokenizer/Analyzer implementation and behavior. Only the Unicode Basic Multilingual Plane (code points from U+0000 to U+FFFF) is covered. UAX29URLEmailTokenizer tokenizes URLs and E-mail addresses according to the relevant RFCs, in addition to implementing the UAX#29 Word Break rules. (Steven Rowe, Robert Muir, Uwe Schindler) * LUCENE-2778: RAMDirectory now exposes newRAMFile(), which can be overridden to return a different RAMFile implementation. (Shai Erera) * LUCENE-2785: Added TotalHitCountCollector whose sole purpose is to count the number of hits matching the query. (Mike McCandless) * LUCENE-2846: Deprecated IndexReader.setNorm(int, String, float). This method is only syntactic sugar for setNorm(int, String, byte), but using the global Similarity.getDefault().encodeNormValue(). Use the byte-based method instead to ensure that the norm is encoded with your Similarity. (Robert Muir, Mike McCandless) * LUCENE-2374: Added Attribute reflection API: It's now possible to inspect the contents of AttributeImpl and AttributeSource using a well-defined API. This is e.g. used by Solr's AnalysisRequestHandlers to display all attributes in a structured way. There are also some backwards incompatible changes in toString() output, as LUCENE-2302 introduced the CharSequence interface to CharTermAttribute, leading to changed toString() return values. The new API allows getting a string representation in a well-defined way using a new method, reflectAsString().
For backwards compatibility reasons, when toString() was implemented by implementation subclasses, the default implementation of AttributeImpl.reflectWith() uses toString()'s output instead to report the Attribute's properties. Otherwise, reflectWith() uses Java's reflection (like toString() did before) to get the attribute properties. In addition, the previously mandatory equals() and hashCode() are no longer required for AttributeImpls, but can still be provided (if needed). (Uwe Schindler) * LUCENE-2691: Deprecate IndexWriter.getReader in favor of IndexReader.open(IndexWriter) (Grant Ingersoll, Mike McCandless) * LUCENE-2876: Deprecated Scorer.getSimilarity(). If your Scorer uses a Similarity, it should keep it itself. Fixed Scorers to pass their parent Weight, so that Scorer.visitSubScorers (LUCENE-2590) will work correctly. (Robert Muir, Doron Cohen) * LUCENE-2900: When opening a near-real-time (NRT) reader (IndexReader.re/open(IndexWriter)) you can now specify whether deletes should be applied. Applying deletes can be costly, and some expert use cases can handle seeing deleted documents returned. The deletes remain buffered so that the next time you open an NRT reader and pass true, all deletes will be applied. (Mike McCandless) * LUCENE-1253: LengthFilter (and Solr's KeepWordTokenFilter) now require up-front specification of enablePositionIncrement. Together with StopFilter they have a common base class (FilteringTokenFilter) that handles the position increments automatically. Implementors only need to override an accept() method that filters tokens. (Uwe Schindler, Robert Muir) Bug fixes * LUCENE-2249: ParallelMultiSearcher should shut down thread pool on close. (Martin Traverso via Uwe Schindler) * LUCENE-2273: FieldCacheImpl.getCacheEntries() used WeakHashMap incorrectly and led to ConcurrentModificationException. (Uwe Schindler, Robert Muir) * LUCENE-2328: Index files fsync tracking moved from IndexWriter/IndexReader to Directory, and it no longer leaks memory.
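The per-Directory bookkeeping behind that move can be pictured with a small self-contained sketch. All names here are hypothetical and this is not Lucene's actual FSDirectory code; it only illustrates the pattern of recording written files and clearing them once synced:

```java
import java.util.Collection;
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch of per-Directory fsync bookkeeping, not Lucene's
// actual code: filenames are recorded when written and removed once
// sync(Collection) has fsynced them, so the pending set never grows
// without bound (no leak).
class SyncTrackingSketch {
    private final Set<String> staleFiles = new HashSet<String>();

    synchronized void onFileWritten(String name) {
        staleFiles.add(name);        // written, but not yet fsynced
    }

    synchronized void sync(Collection<String> names) {
        // A real Directory would issue an OS-level fsync for each file here
        // before dropping it from the pending set.
        staleFiles.removeAll(names);
    }

    synchronized Set<String> pending() {
        return new HashSet<String>(staleFiles);
    }
}
```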
(Earwin Burrfoot via Mike McCandless) * LUCENE-2074: Reduce buffer size of lexer back to default on reset. (Ruben Laguna, Shai Erera via Uwe Schindler) * LUCENE-2496: Don't throw NPE if IndexWriter is opened with CREATE on a prior (corrupt) index missing its segments_N file. (Mike McCandless) * LUCENE-2458: QueryParser no longer automatically forms phrase queries, assuming whitespace tokenization. Previously all CJK queries, for example, would be turned into phrase queries. The old behavior is preserved with the matchVersion parameter for previous versions. Additionally, you can explicitly enable the old behavior with setAutoGeneratePhraseQueries(true) (Robert Muir) * LUCENE-2537: FSDirectory.copy() implementation was unsafe and could result in OOM if a large file was copied. (Shai Erera) * LUCENE-2580: MultiPhraseQuery throws AIOOBE if the number of positions exceeds the number of terms at one position (Jayendra Patil via Mike McCandless) * LUCENE-2617: Optional clauses of a BooleanQuery were not factored into coord if the scorer for that segment returned null. This can cause the same document to score differently depending on what segment it resides in. (yonik) * LUCENE-2272: Fix explain in PayloadNearQuery and also fix scoring issue (Peter Keegan via Grant Ingersoll) * LUCENE-2732: Fix charset problems in XML loading in HyphenationCompoundWordTokenFilter. (Uwe Schindler) * LUCENE-2802: NRT DirectoryReader returned incorrect values from getVersion, isOptimized, getCommitUserData, getIndexCommit and isCurrent due to a mutable reference to the IndexWriter's SegmentInfos. (Simon Willnauer, Earwin Burrfoot) * LUCENE-2852: Fixed corner case in RAMInputStream that would hit a false EOF after seeking to EOF, then seeking back to the same block you were just in, and then calling readBytes (Robert Muir, Mike McCandless) * LUCENE-2860: Fixed SegmentInfo.sizeInBytes to factor in includeDocStores when it decides whether to return the cached computed size or not.
(Shai Erera) * LUCENE-2584: SegmentInfo.files() could hit ConcurrentModificationException if called by multiple threads. (Alexander Kanarsky via Shai Erera) * LUCENE-2809: Fixed IndexWriter.numDocs to take into account applied but not yet flushed deletes. (Mike McCandless) * LUCENE-2879: MultiPhraseQuery previously calculated its phrase IDF by summing internally; it now calls Similarity.idfExplain(Collection, IndexSearcher). (Robert Muir) * LUCENE-2693: RAM used by IndexWriter was slightly incorrectly computed. (Jason Rutherglen via Shai Erera) * LUCENE-1846: DateTools now uses the US locale everywhere, so DateTools.round() is safe also in strange locales. (Uwe Schindler) * LUCENE-2891: IndexWriterConfig did not accept -1 in setReaderTermIndexDivisor, which can be used to prevent loading the terms index into memory. (Shai Erera) * LUCENE-2937: Encoding a float into a byte (e.g. encoding field norms during indexing) had an underflow detection bug that caused floatToByte(f)==0 where f was greater than 0, but slightly less than byteToFloat(1). This meant that certain very small field norms (index_boost * length_norm) could have been rounded down to 0 instead of being rounded up to the smallest positive number. (yonik) * LUCENE-2936: PhraseQuery score explanations were not correctly identifying matches vs non-matches. (hossman) * LUCENE-2975: A hotspot bug corrupts IndexInput#readVInt()/readVLong() if the underlying readByte() is inlined (which happens e.g. in MMapDirectory). The loop was unrolled, which makes the hotspot bug disappear. (Uwe Schindler, Robert Muir, Mike McCandless) New features * LUCENE-2128: Parallelized fetching document frequencies during weight creation. (Israel Tsadok, Simon Willnauer via Uwe Schindler) * LUCENE-2069: Added Unicode 4 support to CharArraySet. Due to the switch to Java 5, supplementary characters are now lowercased correctly if the set is created as case insensitive.
CharArraySet now requires a Version argument to preserve backwards compatibility. If Version < 3.1 is passed to the constructor, CharArraySet yields the old behavior. (Simon Willnauer) * LUCENE-2069: Added Unicode 4 support to LowerCaseFilter. Due to the switch to Java 5, supplementary characters are now lowercased correctly. LowerCaseFilter now requires a Version argument to preserve backwards compatibility. If Version < 3.1 is passed to the constructor, LowerCaseFilter yields the old behavior. (Simon Willnauer, Robert Muir) * LUCENE-2034: Added ReusableAnalyzerBase, an abstract subclass of Analyzer that makes it easier to reuse TokenStreams correctly. This issue also added StopwordAnalyzerBase, which improves consistency of all Analyzers that use stopwords, and implemented many analyzers in contrib with it. (Simon Willnauer via Robert Muir) * LUCENE-2198, LUCENE-2901: Support protected words in stemming TokenFilters using a new KeywordAttribute. (Simon Willnauer, Drew Farris via Uwe Schindler) * LUCENE-2183, LUCENE-2240, LUCENE-2241: Added Unicode 4 support to CharTokenizer and its subclasses. CharTokenizer now has a new int-API which is conditionally preferred to the old char-API depending on the provided Version. Version < 3.1 will use the char-API. (Simon Willnauer via Uwe Schindler) * LUCENE-2247: Added a CharArrayMap<V> for performance improvements in some stemmers and synonym filters. (Uwe Schindler) * LUCENE-2320: Added SetOnce which wraps an object and allows it to be set exactly once. (Shai Erera via Mike McCandless) * LUCENE-2314: Added AttributeSource.copyTo(AttributeSource), which allows using cloneAttributes() and this method as a replacement for captureState()/restoreState(), if the state itself needs to be inspected/modified. (Uwe Schindler) * LUCENE-2293: Expose control over the max number of threads that IndexWriter will allow to run concurrently while indexing documents (previously this was hardwired to 5), using IndexWriterConfig.setMaxThreadStates.
(Mike McCandless) * LUCENE-2297: Enable turning on reader pooling inside IndexWriter even when getReader (near-real-time reader) is not in use, through IndexWriterConfig.enable/disableReaderPooling. (Mike McCandless) * LUCENE-2331: Add NoMergePolicy which never returns any merges to execute. In addition, add NoMergeScheduler which never executes any merges. These two are convenient classes in case you want to disable segment merges by IndexWriter without tweaking particular MergePolicy parameters, such as mergeFactor. MergeScheduler's methods are now public. (Shai Erera via Mike McCandless) * LUCENE-2339: Deprecate static method Directory.copy in favor of Directory.copyTo, and use nio's FileChannel.transferTo when copying files between FSDirectory instances. (Earwin Burrfoot via Mike McCandless). * LUCENE-2074: Make StandardTokenizer fit for Unicode 4.0, if the matchVersion parameter is Version.LUCENE_31. (Uwe Schindler) * LUCENE-2385: Moved NoDeletionPolicy from benchmark to core. NoDeletionPolicy can be used to prevent commits from ever getting deleted from the index. (Shai Erera) * LUCENE-1585: IndexWriter now accepts a PayloadProcessorProvider which can return a DirPayloadProcessor for a given Directory, which returns a PayloadProcessor for a given Term. The PayloadProcessor will be used to process the payloads of the segments as they are merged (e.g. if one wants to rewrite payloads of external indexes as they are added, or of local ones). (Shai Erera, Michael Busch, Mike McCandless) * LUCENE-2440: Add support for custom ExecutorService in ParallelMultiSearcher (Edward Drapkin via Mike McCandless) * LUCENE-2295: Added a LimitTokenCountAnalyzer / LimitTokenCountFilter to wrap any other Analyzer and provide the same functionality as MaxFieldLength provided on IndexWriter. This patch also fixes a bug in the offset calculation in CharTokenizer. (Uwe Schindler, Shai Erera) * LUCENE-2526: Don't throw NPE from MultiPhraseQuery.toString when it's empty.
(Ross Woolf via Mike McCandless) * LUCENE-2559: Added SegmentReader.reopen methods (John Wang via Mike McCandless) * LUCENE-2590: Added Scorer.visitSubScorers, and Scorer.freq. Along with a custom Collector these experimental methods make it possible to gather the hit-count per sub-clause and per document while a search is running. (Simon Willnauer, Mike McCandless) * LUCENE-2636: Added MultiCollector which allows running the search with several Collectors. (Shai Erera) * LUCENE-2754, LUCENE-2757: Added a wrapper around MultiTermQueries to add span support: SpanMultiTermQueryWrapper<Q extends MultiTermQuery>. Using this wrapper it's easy to add fuzzy/wildcard support to e.g. a SpanNearQuery. (Robert Muir, Uwe Schindler) * LUCENE-2838: ConstantScoreQuery now directly supports wrapping a Query instance for stripping off scores. The use of a QueryWrapperFilter is no longer needed and discouraged for that use case. Directly wrapping Query improves performance, as out-of-order collection is now supported. (Uwe Schindler) * LUCENE-2864: Add getMaxTermFrequency (maximum within-document TF) to FieldInvertState so that it can be used in Similarity.computeNorm. (Robert Muir) * LUCENE-2720: Segments now record the code version which created them. (Shai Erera, Mike McCandless, Uwe Schindler) * LUCENE-2474: Added expert ReaderFinishedListener API to IndexReader, to allow apps that maintain external per-segment caches to evict entries when a segment is finished. (Shay Banon, Yonik Seeley, Mike McCandless) * LUCENE-2911: The new StandardTokenizer, UAX29URLEmailTokenizer, and the ICUTokenizer in contrib now all tag types with a consistent set of token types (defined in StandardTokenizer). Tokens in the major CJK types are explicitly marked to allow for custom downstream handling: <IDEOGRAPHIC>, <HIRAGANA>, <KATAKANA>, and <HANGUL>. (Robert Muir, Steven Rowe) * LUCENE-2913: Add missing getters to Numeric* classes.
(Uwe Schindler) * LUCENE-1810: Added FieldSelectorResult.LATENT to not cache lazy loaded fields (Tim Smith, Grant Ingersoll) * LUCENE-2692: Added several new SpanQuery classes for positional checking (match is in a range, payload is a specific value) (Grant Ingersoll) Optimizations * LUCENE-2494: Use CompletionService in ParallelMultiSearcher instead of simple polling for results. (Edward Drapkin, Simon Willnauer) * LUCENE-2075: Terms dict cache is now shared across threads instead of being stored separately in thread local storage. Also fixed terms dict so that the cache is used when seeking the thread local term enum, which will be important for MultiTermQuery impls that do lots of seeking (Mike McCandless, Uwe Schindler, Robert Muir, Yonik Seeley) * LUCENE-2136: If the multi reader (DirectoryReader or MultiReader) only has a single sub-reader, delegate all enum requests to it. This avoids the overhead of using a PQ unnecessarily. (Mike McCandless) * LUCENE-2137: Switch to AtomicInteger for some ref counting (Earwin Burrfoot via Mike McCandless) * LUCENE-2123, LUCENE-2261: Move FuzzyQuery rewrite to separate RewriteMode into MultiTermQuery. The number of fuzzy expansions can be specified with the maxExpansions parameter to FuzzyQuery. (Uwe Schindler, Robert Muir, Mike McCandless) * LUCENE-2164: ConcurrentMergeScheduler has more control over merge threads. First, it gives smaller merges higher thread priority than large ones. Second, a new set/getMaxMergeCount setting will pause the larger merges to allow smaller ones to finish. The defaults for these settings are now dynamic, depending on the number of CPU cores as reported by Runtime.getRuntime().availableProcessors() (Mike McCandless) * LUCENE-2169: Improved CharArraySet.copy(), if the source set is also a CharArraySet.
(Simon Willnauer via Uwe Schindler) * LUCENE-2084: Change IndexableBinaryStringTools to work on byte[] and char[] directly, instead of Byte/CharBuffers, and modify CollationKeyFilter to take advantage of this for faster performance. (Steven Rowe, Uwe Schindler, Robert Muir) * LUCENE-2188: Add a utility class for tracking deprecated overridden methods in non-final subclasses. (Uwe Schindler, Robert Muir) * LUCENE-2195: Speedup CharArraySet if set is empty. (Simon Willnauer via Robert Muir) * LUCENE-2285: Code cleanup. (Shai Erera via Uwe Schindler) * LUCENE-2303: Remove code duplication in Token class by subclassing TermAttributeImpl, move DEFAULT_TYPE constant to TypeInterface, improve null-handling for TypeAttribute. (Uwe Schindler) * LUCENE-2329: Switch TermsHash* from using a PostingList object per unique term to parallel arrays, indexed by termID. This reduces garbage collection overhead significantly, which results in great indexing performance wins when the available JVM heap space is low. This will become even more important when the DocumentsWriter RAM buffer is searchable in the future, because then it will make sense to make the RAM buffers as large as possible. (Mike McCandless, Michael Busch) * LUCENE-2380: The terms field cache methods (getTerms, getTermsIndex), which replace the older String equivalents (getStrings, getStringIndex), consume quite a bit less RAM in most cases. (Mike McCandless) * LUCENE-2410: ~20% speedup on exact (slop=0) PhraseQuery matching. (Mike McCandless) * LUCENE-2531: Fix issue when sorting by a String field that was causing too many fallbacks to compare-by-value (instead of by-ord). (Mike McCandless) * LUCENE-2574: IndexInput exposes copyBytes(IndexOutput, long) to allow for efficient copying by sub-classes. Optimized copy is implemented for RAM and FS streams. (Shai Erera) * LUCENE-2719: Improved TermsHashPerField's sorting to use a better quick sort algorithm that does not dereference the pivot element on every compare call.
Also replaced lots of sorting code in Lucene by the improved SorterTemplate class. (Uwe Schindler, Robert Muir, Mike McCandless) * LUCENE-2760: Optimize SpanFirstQuery and SpanPositionRangeQuery. (Robert Muir) * LUCENE-2770: Make SegmentMerger always work on atomic subreaders, even when IndexWriter.addIndexes(IndexReader...) is used with DirectoryReaders or other MultiReaders. This saves lots of memory during merge of norms. (Uwe Schindler, Mike McCandless) * LUCENE-2824: Optimize BufferedIndexInput to do fewer bounds checks. (Robert Muir) * LUCENE-2010: Segments with 100% deleted documents are now removed on IndexReader or IndexWriter commit. (Uwe Schindler, Mike McCandless) * LUCENE-1472: Removed synchronization from static DateTools methods by using a ThreadLocal. Also converted DateTools.Resolution to a Java 5 enum (this should not break backwards compatibility). (Uwe Schindler) Build * LUCENE-2124: Moved the JDK-based collation support from contrib/collation into core, and moved the ICU-based collation support into contrib/icu. (Robert Muir) * LUCENE-2326: Removed SVN checkouts for backwards tests. The backwards branch is now included in the svn repository using "svn copy" after release. (Uwe Schindler) * LUCENE-2074: Regenerating StandardTokenizerImpl files now needs JFlex 1.5 (currently only available on SVN). (Uwe Schindler) * LUCENE-1709: Tests are now parallelized by default (except for benchmark). You can force them to run sequentially by passing -Drunsequential=1 on the command line. The number of threads that are spawned per CPU defaults to '1'. If you wish to change that, you can run the tests with -DthreadsPerProcessor=[num]. (Robert Muir, Shai Erera, Peter Kofler) * LUCENE-2516: Backwards tests are now compiled against released lucene-core.jar from tarball of previous version. Backwards tests are now packaged together with src distribution. (Uwe Schindler) * LUCENE-2611: Added Ant target to install IntelliJ IDEA configuration: "ant idea".
See (Steven Rowe) * LUCENE-2657: Switch from using Maven POM templates to full POMs when generating Maven artifacts (Steven Rowe) * LUCENE-2609: Added jar-test-framework Ant target which packages Lucene's tests' framework classes. (Drew Farris, Grant Ingersoll, Shai Erera, Steven Rowe) Test Cases * LUCENE-2037 Allow Junit4 tests in our environment (Erick Erickson via Mike McCandless) * LUCENE-1844: Speed up the unit tests (Mark Miller, Erick Erickson, Mike McCandless) * LUCENE-2065: Use Java 5 generics throughout our unit tests. (Kay Kay via Mike McCandless) * LUCENE-2155: Fix time and zone dependent localization test failures in queryparser tests. (Uwe Schindler, Chris Male, Robert Muir) * LUCENE-2170: Fix thread starvation problems. (Uwe Schindler) * LUCENE-2248, LUCENE-2251, LUCENE-2285: Refactor tests to not use Version.LUCENE_CURRENT, but instead use a global static value from LuceneTestCase(J4) that contains the release version. (Uwe Schindler, Simon Willnauer, Shai Erera) * LUCENE-2313, LUCENE-2322: Add VERBOSE to LuceneTestCase(J4) to control verbosity of tests. If VERBOSE==false (default) tests should not print anything other than errors to System.(out|err). The setting can be changed with -Dtests.verbose=true on test invocation. (Shai Erera, Paul Elschot, Uwe Schindler) * LUCENE-2318: Remove inconsistent system property code for retrieving temp and data directories inside test cases. It is now centralized in LuceneTestCase(J4). Also changed lots of tests to use getClass().getResourceAsStream() to retrieve test data. Tests needing access to "real" files from the test folder itself can use LuceneTestCase(J4).getDataFile(). (Uwe Schindler) * LUCENE-2398, LUCENE-2611: Improve tests to work better from IDEs such as Eclipse and IntelliJ. (Paolo Castagna, Steven Rowe via Robert Muir) * LUCENE-2804: add newFSDirectory to LuceneTestCase to create a FSDirectory at random.
(Shai Erera, Robert Muir) Documentation * LUCENE-2579: Fix oal.search's package.html description of abstract methods. (Santiago M. Mola via Mike McCandless) * LUCENE-2625: Add a note to IndexReader.termDocs() with additional verbiage that the TermEnum must be seeked since it is unpositioned. (Adriano Crestani via Robert Muir) * LUCENE-2894: Use google-code-prettify for syntax highlighting in javadoc. (Shinichiro Abe, Koji Sekiguchi) ================== Release 2.9.4 / 3.0.3 ==================== Changes in runtime behavior * LUCENE-2689: NativeFSLockFactory no longer attempts to acquire a test lock just before the real lock is acquired. (Surinder Pal Singh Bindra via) Bug fixes * LUCENE-2142 (correct fix): FieldCacheImpl.getStringIndex no longer throws an exception when term count exceeds doc count. (Mike McCandless, Uwe Schindler) * LUCENE-2513: when opening writable IndexReader on a not-current commit, do not overwrite "future" commits. (Mike McCandless) * LUCENE-2536: IndexWriter.rollback was failing to properly rollback buffered deletions against segments that were flushed (Mark Harwood via Mike McCandless) * LUCENE-2541: Fixed NumericRangeQuery that returned incorrect results with endpoints near Long.MIN_VALUE and Long.MAX_VALUE: NumericUtils.splitRange() overflowed, if - the range contained a LOWER bound that was greater than (Long.MAX_VALUE - (1L << precisionStep)) - the range contained an UPPER bound that was less than (Long.MIN_VALUE + (1L << precisionStep)) With standard precision steps around 4, this had no effect on most queries, only those that met the above conditions. Queries with large precision steps failed more easily. Queries with precision step >=64 were not affected. Also 32 bit data types int and float were not affected.
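The overflow conditions above can be reproduced with plain long arithmetic. This is a standalone sketch of the failure mode, not Lucene code; the class and method names are illustrative only:

```java
// Sketch of the LUCENE-2541 failure mode: adding (1L << precisionStep) to a
// lower bound near Long.MAX_VALUE wraps around. Illustrative names, not
// Lucene internals.
public class SplitRangeOverflowDemo {

    /** true if lower + (1L << precisionStep) would wrap past Long.MAX_VALUE. */
    public static boolean lowerBoundOverflows(long lower, int precisionStep) {
        long next = lower + (1L << precisionStep);
        return next < lower; // signed overflow wrapped to a smaller value
    }

    public static void main(String[] args) {
        int precisionStep = 4; // the common default
        long threshold = Long.MAX_VALUE - (1L << precisionStep);
        System.out.println(lowerBoundOverflows(threshold + 1, precisionStep)); // true
        System.out.println(lowerBoundOverflows(0L, precisionStep));            // false
    }
}
```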
(Yonik Seeley, Uwe Schindler) * LUCENE-2593: Fixed certain rare cases where a disk full could lead to a corrupted index (Robert Muir, Mike McCandless) * LUCENE-2620: Fixed a bug in WildcardQuery where too many asterisks would result in unbearably slow performance. (Nick Barkas via Robert Muir) * LUCENE-2627: Fixed bug in MMapDirectory chunking when a file is an exact multiple of the chunk size. (Robert Muir) * LUCENE-2634: isCurrent on an NRT reader was failing to return false if the writer had just committed (Nikolay Zamosenchuk via Mike McCandless) * LUCENE-2650: Added extra safety to MMapIndexInput clones to prevent accessing an unmapped buffer if the input is closed (Mike McCandless, Uwe Schindler, Robert Muir) * LUCENE-2384: Reset zzBuffer in StandardTokenizerImpl when lexer is reset. (Ruben Laguna via Uwe Schindler, sub-issue of LUCENE-2074) * LUCENE-2658: Exceptions while processing term vectors enabled for multiple fields could lead to invalid ArrayIndexOutOfBoundsExceptions. (Robert Muir, Mike McCandless) * LUCENE-2235: Implement missing PerFieldAnalyzerWrapper.getOffsetGap(). (Javier Godoy via Uwe Schindler) * LUCENE-2328: Fixed memory leak in how IndexWriter/Reader tracked already sync'd files. (Earwin Burrfoot via Mike McCandless) * LUCENE-2549: Fix TimeLimitingCollector#TimeExceededException to record the absolute docid. (Uwe Schindler) * LUCENE-2533: fix FileSwitchDirectory.listAll to not return dups when primary & secondary dirs share the same underlying directory. (Michael McCandless) * LUCENE-2365: IndexWriter.newestSegment (used normally for testing) is fixed to return null if there are no segments. (Karthick Sankarachary via Mike McCandless) * LUCENE-2730: Fix two rare deadlock cases in IndexWriter (Mike McCandless) * LUCENE-2744: CheckIndex was stating total number of fields, not the number that have norms enabled, on the "test: field norms..." output. 
(Mark Kristensson via Mike McCandless) * LUCENE-2759: Fixed two near-real-time cases where doc store files may be opened for read even though they are still open for write. (Mike McCandless) * LUCENE-2618: Fix rare thread safety issue whereby IndexWriter.optimize could sometimes return even though the index wasn't fully optimized (Mike McCandless) * LUCENE-2767: Fix thread safety issue in addIndexes(IndexReader[]) that could potentially result in index corruption. * LUCENE-2216: OpenBitSet.hashCode returned different hash codes for sets that only differed by trailing zeros. (Dawid Weiss, yonik) * LUCENE-2782: Fix rare potential thread hazard with IndexWriter.commit (Mike McCandless) API Changes Optimizations * LUCENE-2556: Improve memory usage after cloning TermAttribute. (Adriano Crestani via Uwe Schindler) * LUCENE-2098: Improve the performance of BaseCharFilter, especially for large documents. (Robin Wojciki, Koji Sekiguchi, Robert Muir) New features * LUCENE-2675 (2.9.4 only): Add support for Lucene 3.0 stored field files also in 2.9. The file format did not change, only the version number was upgraded to mark segments that have no compression. FieldsWriter still only writes 2.9 segments as they could contain compressed fields. This cross-version index format compatibility is provided here solely because Lucene 2.9 and 3.0 have the same bugfix level, features, and the same index format with this slight compression difference. In general, Lucene does not support reading newer indexes with older library versions. (Uwe Schindler) Documentation * LUCENE-2239: Documented limitations in NIOFSDirectory and MMapDirectory due to Java NIO behavior when a Thread is interrupted while blocking on IO. (Simon Willnauer, Robert Muir) ================== Release 2.9.3 / 3.0.2 ==================== Changes in backwards compatibility policy * LUCENE-2135: Added FieldCache.purge(IndexReader) method to the interface.
Anyone implementing FieldCache externally will need to fix their code to implement this, on upgrading. (Mike McCandless) Changes in runtime behavior * LUCENE-2421: NativeFSLockFactory does not throw LockReleaseFailedException if it cannot delete the lock file, since obtaining the lock does not fail if the file is there. (Shai Erera) * LUCENE-2060 (2.9.3 only): Changed ConcurrentMergeScheduler's default for maxNumThreads from 3 to 1, because in practice we get the most gains from running a single merge in the background. More than one concurrent merge causes a lot of thrashing (though it's possible on SSD storage that there would be net gains). (Jason Rutherglen, Mike McCandless) Bug fixes * LUCENE-2046 (2.9.3 only): IndexReader should not see the index as changed, after IndexWriter.prepareCommit has been called but before IndexWriter.commit is called. (Peter Keegan via Mike McCandless) * LUCENE-2119: Don't throw NegativeArraySizeException if you pass Integer.MAX_VALUE as nDocs to IndexSearcher search methods. (Paul Taylor via Mike McCandless) * LUCENE-2142: FieldCacheImpl.getStringIndex no longer throws an exception when term count exceeds doc count. (Mike McCandless) * LUCENE-2104: NativeFSLock.release() would silently fail if the lock is held by another thread/process. (Shai Erera via Uwe Schindler) * LUCENE-2283: Use shared memory pool for term vector and stored fields buffers. This memory will be reclaimed if needed according to the configured RAM Buffer Size for the IndexWriter. This also fixes potentially excessive memory usage when many threads are indexing a mix of small and large documents. (Tim Smith via Mike McCandless) * LUCENE-2300: If IndexWriter is pooling readers (because an NRT reader has been obtained), and addIndexes* is run, do not pool the readers from the external directory. This is harmless (NRT reader is correct), but a waste of resources.
(Mike McCandless) * LUCENE-2422: Don't reuse byte[] in IndexInput/Output -- it gains little performance, and ties up possibly large amounts of memory for apps that index large docs. (Ross Woolf via Mike McCandless) * LUCENE-2387: Don't hang onto Fieldables from the last doc indexed, in IndexWriter, nor the Reader in Tokenizer after close is called. (Ruben Laguna, Uwe Schindler, Mike McCandless) * LUCENE-2417: IndexCommit did not implement hashCode() and equals() consistently. Now they both take Directory and version into consideration. In addition, all of IndexCommit's methods that threw UnsupportedOperationException are now abstract. (Shai Erera) * LUCENE-2467: Fixed memory leaks in IndexWriter when large documents are indexed. (Mike McCandless) * LUCENE-2473: Clicking on the "More Results" link in the luceneweb.war demo resulted in ArrayIndexOutOfBoundsException. (Sami Siren via Robert Muir) * LUCENE-2476: If any exception is hit init'ing IW, release the write lock (previously we only released on IOException). (Tamas Cservenak via Mike McCandless) * LUCENE-2478: Fix CachingWrapperFilter to not throw NPE when Filter.getDocIdSet() returns null. (Uwe Schindler, Daniel Noll) * LUCENE-2468: Allow specifying how new deletions should be handled in CachingWrapperFilter and CachingSpanFilter. By default, new deletions are ignored in CachingWrapperFilter, since typically this filter is AND'd with a query that correctly takes new deletions into account. This should be a performance gain (higher cache hit rate) in apps that reopen readers, or use near-real-time reader (IndexWriter.getReader()), but may introduce invalid search results (allowing deleted docs to be returned) for certain cases, so a new expert ctor was added to CachingWrapperFilter to enforce deletions at a performance cost.
CachingSpanFilter by default recaches if there are new deletions (Shay Banon via Mike McCandless) * LUCENE-2299: If you open an NRT reader while addIndexes* is running, it may miss some segments (Earwin Burrfoot via Mike McCandless) * LUCENE-2397: Don't throw NPE from SnapshotDeletionPolicy.snapshot if there are no commits yet (Shai Erera) * LUCENE-2424: Fix FieldDoc.toString to actually return its fields (Stephen Green via Mike McCandless) * LUCENE-2311: Always pass a "fully loaded" (terms index & doc stores) SegmentsReader to IndexWriter's mergedSegmentWarmer (if set), so that warming is free to do whatever it needs to. (Earwin Burrfoot via Mike McCandless) * LUCENE-3029: Fix corner case when MultiPhraseQuery is used with zero position-increment tokens that would sometimes assign different scores to identical docs. (Mike McCandless) * LUCENE-2486: Fixed intermittent FileNotFoundException on doc store files when a mergedSegmentWarmer is set on IndexWriter. (Mike McCandless) * LUCENE-2130: Fix performance issue when FuzzyQuery runs on a multi-segment index (Michael McCandless) API Changes * LUCENE-2281: added doBeforeFlush to IndexWriter to allow extensions to perform operations before flush starts. Also exposed doAfterFlush as protected instead of package-private. (Shai Erera via Mike McCandless) * LUCENE-2356: Add IndexWriter.set/getReaderTermsIndexDivisor, to set what IndexWriter passes for termsIndexDivisor to the readers it opens internally when applying deletions or creating a near-real-time reader. (Earwin Burrfoot via Mike McCandless) Optimizations * LUCENE-2494 (3.0.2 only): Use CompletionService in ParallelMultiSearcher instead of simple polling for results. 
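The polling-to-CompletionService switch above can be sketched with plain java.util.concurrent. The class and method names below are illustrative, not the actual ParallelMultiSearcher code:

```java
// Sketch of the LUCENE-2494 idea: instead of polling Futures in a fixed
// order, an ExecutorCompletionService hands back each result as soon as its
// task finishes. Illustrative names only.
import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class CompletionServiceDemo {

    /** Squares each input in parallel and sums results in completion order. */
    public static int sumAsCompleted(int... inputs) {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        try {
            CompletionService<Integer> cs = new ExecutorCompletionService<>(pool);
            for (int n : inputs) {
                final int v = n;
                cs.submit(() -> v * v); // stand-in for a per-searcher search task
            }
            int sum = 0;
            for (int i = 0; i < inputs.length; i++) {
                sum += cs.take().get(); // blocks only until the NEXT finished task
            }
            return sum;
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println(sumAsCompleted(1, 2, 3)); // 1 + 4 + 9 = 14
    }
}
```

cs.take() returns results in completion order, so a slow task no longer delays collection of results that finished earlier, which is the point of dropping the polling loop.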
(Edward Drapkin, Simon Willnauer) * LUCENE-2135: On IndexReader.close, forcefully evict any entries from the FieldCache rather than waiting for the WeakHashMap to release the reference (Mike McCandless) * LUCENE-2161: Improve concurrency of IndexReader, especially in the context of near real-time readers. (Mike McCandless) * LUCENE-2360: Small speedup to recycling of reused per-doc RAM in IndexWriter (Robert Muir, Mike McCandless) Build * LUCENE-2488 (2.9.3 only): Support build with JDK 1.4 and exclude Java 1.5 contrib modules on request (pass '-Dforce.jdk14.build=true') when compiling/testing/packaging. This marks the benchmark contrib also as Java 1.5, as it depends on fast-vector-highlighter. (Uwe Schindler) ================== Release 2.9.2 / 3.0.1 ==================== Changes in backwards compatibility policy * LUCENE-2123 (3.0.1 only): Removed the protected inner class ScoreTerm from FuzzyQuery. The change was needed because the comparator of this class had to be changed in an incompatible way. The class was never intended to be public. (Uwe Schindler, Mike McCandless) Bug fixes * LUCENE-2132 (3.0.1 only): Fix the demo result.jsp to use QueryParser with a Version argument. (Brian Li via Robert Muir) API Changes * LUCENE-1609 (3.0.1 only): Restore IndexReader.getTermInfosIndexDivisor (it was accidentally removed in 3.0.0) (Mike McCandless) * LUCENE-1972 (3.0.1 only): Restore SortField.getComparatorSource (it was accidentally removed in 3.0.0) (John Wang via Uwe Schindler) * LUCENE-2123 (partly, 3.0.1 only): Fixes a slowdown / memory issue added by LUCENE-504. (Uwe Schindler, Robert Muir) ======================= Release 3.0.0 ======================= Changes in backwards compatibility policy * LUCENE-1979: Change return type of SnapshotDeletionPolicy#snapshot() from IndexCommitPoint to IndexCommit. Code that uses this method needs to be recompiled against Lucene 3.0 in order to work. The previously deprecated IndexCommitPoint is also removed.
(Michael Busch) * o.a.l.Lock.isLocked() is now allowed to throw an IOException. (Mike McCandless) * LUCENE-2030: CachingWrapperFilter and CachingSpanFilter now hide the internal cache implementation for thread safety, before it was declared protected. (Peter Lenahan, Uwe Schindler, Simon Willnauer) * LUCENE-2053: If you call Thread.interrupt() on a thread inside Lucene, Lucene will do its best to interrupt the thread. However, instead of throwing InterruptedException (which is a checked exception), you'll get an oal.util.ThreadInterruptedException (an unchecked exception, subclassing RuntimeException). The interrupt status on the thread is cleared when this exception is thrown. (Mike McCandless) * LUCENE-2052: Some methods in Lucene core were changed to accept Java 5 varargs. This is not a backwards compatibility problem as long as you not try to override such a method. We left common overridden methods unchanged and added varargs to constructors, static, or final methods (MultiSearcher,...). (Uwe Schindler) * LUCENE-1558: IndexReader.open(Directory) now opens a readOnly=true reader, and new IndexSearcher(Directory) does the same. Note that this is a change in the default from 2.9, when these methods were previously deprecated. (Mike McCandless) * LUCENE-1753: Make not yet final TokenStreams final to enforce decorator pattern. (Uwe Schindler) Changes in runtime behavior * LUCENE-1677: Remove the system property to set SegmentReader class implementation. (Uwe Schindler) * LUCENE-1960: As a consequence of the removal of Field.Store.COMPRESS, support for this type of fields was removed. Lucene 3.0 is still able to read indexes with compressed fields, but as soon as merges occur or the index is optimized, all compressed fields are decompressed and converted to Field.Store.YES. Because of this, indexes with compressed fields can suddenly get larger. Also the first merge with decompression cannot be done in raw mode, it is therefore slower. 
This change has no effect for code that uses such old indexes, they behave as before (fields are automatically decompressed during read). Indexes converted to Lucene 3.0 format cannot be read anymore with previous versions. It is recommended to optimize your indexes after upgrading to convert to the new format and decompress all fields. If you want compressed fields, you can use CompressionTools, that creates compressed byte[] to be added as binary stored field. This cannot be done automatically, as you also have to decompress such fields when reading. You have to reindex to do that. (Michael Busch, Uwe Schindler) * LUCENE-2060: Changed ConcurrentMergeScheduler's default for maxNumThreads from 3 to 1, because in practice we get the most gains from running a single merge in the background. More than one concurrent merge causes a lot of thrashing (though it's possible on SSD storage that there would be net gains). (Jason Rutherglen, Mike McCandless) API Changes * LUCENE-1257, LUCENE-1984, LUCENE-1985, LUCENE-2057, LUCENE-1833, LUCENE-2012, LUCENE-1998: Port to Java 1.5: - Add generics to public and internal APIs (see below). - Replace new Integer(int), new Double(double),... by static valueOf() calls. - Replace for-loops with Iterator by foreach loops. - Replace StringBuffer with StringBuilder. - Replace o.a.l.util.Parameter by Java 5 enums (see below). - Add @Override annotations. (Uwe Schindler, Robert Muir, Karl Wettin, Paul Elschot, Kay Kay, Shai Erera, DM Smith) * Generify Lucene API: - TokenStream/AttributeSource: Now addAttribute()/getAttribute() return an instance of the requested attribute interface and no cast needed anymore (LUCENE-1855). - NumericRangeQuery, NumericRangeFilter, and FieldCacheRangeFilter now have Integer, Long, Float, Double as type param (LUCENE-1857). - Document.getFields() returns List<Fieldable>. 
- Query.extractTerms(Set<Term>) - CharArraySet and stop word sets in core/contrib - PriorityQueue (LUCENE-1935) - TopDocCollector - DisjunctionMaxQuery (LUCENE-1984) - MultiTermQueryWrapperFilter - CloseableThreadLocal - MapOfSets - o.a.l.util.cache package - lots of internal APIs of IndexWriter (Uwe Schindler, Michael Busch, Kay Kay, Robert Muir, Adriano Crestani) * LUCENE-1944, LUCENE-1856, LUCENE-1957, LUCENE-1960, LUCENE-1961, LUCENE-1968, LUCENE-1970, LUCENE-1946, LUCENE-1971, LUCENE-1975, LUCENE-1972, LUCENE-1978, LUCENE-944, LUCENE-1979, LUCENE-1973, LUCENE-2011: Remove deprecated methods/constructors/classes: - Remove all String/File directory paths in IndexReader / IndexSearcher / IndexWriter. - Remove FSDirectory.getDirectory() - Make FSDirectory abstract. - Remove Field.Store.COMPRESS (see above). - Remove Filter.bits(IndexReader) method and make Filter.getDocIdSet(IndexReader) abstract. - Remove old DocIdSetIterator methods and make the new ones abstract. - Remove some methods in PriorityQueue. - Remove old TokenStream API and backwards compatibility layer. - Remove RangeQuery, RangeFilter and ConstantScoreRangeQuery. - Remove SpanQuery.getTerms(). - Remove ExtendedFieldCache, custom and auto caches, SortField.AUTO. - Remove old-style custom sort. - Remove legacy search setting in SortField. - Remove Hits and all references from core and contrib. - Remove HitCollector and its TopDocs support implementations. - Remove term field and accessors in MultiTermQuery (and fix Highlighter). - Remove deprecated methods in BooleanQuery. - Remove deprecated methods in Similarity. - Remove BoostingTermQuery. - Remove MultiValueSource. - Remove Scorer.explain(int). ...and some other minor ones (Uwe Schindler, Michael Busch, Mark Miller) * LUCENE-1925: Make IndexSearcher's subReaders and docStarts members protected; add expert ctor to directly specify reader, subReaders and docStarts.
(John Wang, Tim Smith via Mike McCandless) * LUCENE-1945: All public classes that have a close() method now also implement java.io.Closeable (IndexReader, IndexWriter, Directory,...). (Uwe Schindler) * LUCENE-1998: Change all Parameter instances to Java 5 enums. This is no backwards-break, only a change of the super class. Parameter was deprecated and will be removed in a later version. (DM Smith, Uwe Schindler) Bug fixes * LUCENE-1951: When the text provided to WildcardQuery has no wildcard characters (ie matches a single term), don't lose the boost and rewrite method settings. Also, rewrite to PrefixQuery if the wildcard is form "foo*", for slightly faster performance. (Robert Muir via Mike McCandless) * LUCENE-2013: SpanRegexQuery does not work with QueryScorer. (Benjamin Keil via Mark Miller) * LUCENE-2088: addAttribute() should only accept interfaces that extend Attribute. (Shai Erera, Uwe Schindler) * LUCENE-2045: Fix silly FileNotFoundException hit if you enable infoStream on IndexWriter and then add an empty document and commit (Shai Erera via Mike McCandless) * LUCENE-2046: IndexReader should not see the index as changed, after IndexWriter.prepareCommit has been called but before IndexWriter.commit is called. (Peter Keegan via Mike McCandless) New features * LUCENE-1933: Provide a convenience AttributeFactory that creates a Token instance for all basic attributes. (Uwe Schindler) * LUCENE-2041: Parallelize the rest of ParallelMultiSearcher. Lots of code refactoring and Java 5 concurrent support in MultiSearcher. (Joey Surls, Simon Willnauer via Uwe Schindler) * LUCENE-2051: Add CharArraySet.copy() as a simple method to copy any Set<?> to a CharArraySet that is optimized, if the Set<?> is already a CharArraySet. (Simon Willnauer) Optimizations * LUCENE-1183: Optimize Levenshtein Distance computation in FuzzyQuery. (Cédrik Lime via Mike McCandless) * LUCENE-2006: Optimization of FieldDocSortedHitQueue to always use Comparable<?> interface.
(Uwe Schindler, Mark Miller) * LUCENE-2087: Remove recursion in NumericRangeTermEnum. (Uwe Schindler) Build * LUCENE-486: Remove test->demo dependencies. (Michael Busch) * LUCENE-2024: Raise build requirements to Java 1.5 and ANT 1.7.0 (Uwe Schindler, Mike McCandless) ======================= Release 2.9.1 ======================= * LUCENE-2043: Make IndexReader.commit(Map<String,String>) public. ======================= Release 2.9.0 ======================= Changes in backwards compatibility policy * LUCENE-1575: Searchable.search(Weight, Filter, int, Sort) no longer computes a document score for each hit by default. If document score tracking is still needed, you can call IndexSearcher.setDefaultFieldSortScoring(true, true) to enable both per-hit and maxScore tracking; however, this is deprecated and will be removed in 3.0. Alternatively, use Searchable.search(Weight, Filter, Collector) and pass in a TopFieldCollector instance, using the following code sample: <code> TopFieldCollector tfc = TopFieldCollector.create(sort, numHits, fillFields, true /* trackDocScores */, true /* trackMaxScore */, false /* docsInOrder */); searcher.search(query, tfc); TopDocs results = tfc.topDocs(); </code> Note that your Sort object cannot use SortField.AUTO when you directly instantiate TopFieldCollector. Also, the method search(Weight, Filter, Collector) was added to the Searchable interface and the Searcher abstract class to replace the deprecated HitCollector versions. If you either implement Searchable or extend Searcher, you should change your code to implement this method. If you rely on this functionality you can use PositiveScoresOnlyCollector like this: <code> TopDocsCollector tdc = new TopScoreDocCollector(10); Collector c = new PositiveScoresOnlyCollector(tdc); searcher.search(query, c); TopDocs hits = tdc.topDocs(); ... </code> * LUCENE-1604: IndexReader.norms(String field) is now allowed to return null if the field has no norms, as long as you've previously called IndexReader.setDisableFakeNorms(true).
This setting now defaults to false (to preserve the fake norms back compatible behavior) but in 3.0 will be hardwired to true. (Shon Vella via Mike McCandless). * LUCENE-1624: If you open IndexWriter with create=true and autoCommit=false on an existing index, IndexWriter no longer writes an empty commit when it's created. (Paul Taylor via Mike McCandless) * LUCENE-1593: When you call Sort() or Sort.setSort(String field, boolean reverse), the resulting SortField array no longer ends with SortField.FIELD_DOC (it was unnecessary as Lucene breaks ties internally by docID). (Shai Erera via Michael) * LUCENE-1715: Finalizers have been removed from the 4 core classes that still had them, since they will cause GC to take longer, thus tying up memory for longer, and at best they mask buggy app code. DirectoryReader (returned from IndexReader.open) & IndexWriter previously released the write lock during finalize. SimpleFSDirectory.FSIndexInput closed the descriptor in its finalizer, and NativeFSLock released the lock. It's possible applications will be affected by this, but only if the application is failing to close reader/writers. (Brian Groose) * LUCENE-1468: Deprecate Directory.list(), which sometimes (in FSDirectory) filters out files that don't look like index files, in favor of new Directory.listAll(), which does no filtering. Also, listAll() will never return null; instead, it throws an IOException (or subclass). Specifically, FSDirectory.listAll() will throw the newly added NoSuchDirectoryException if the directory does not exist. (Marcel Reutegger, Mike McCandless) * LUCENE-1546: Add IndexReader.flush(Map commitUserData), allowing you to record an opaque commitUserData (maps String -> String) into the commit written by IndexReader. This matches IndexWriter's commit methods.
(Jason Rutherglen via Mike McCandless) *, Mark Miller, Mike McCandless) * LUCENE-1592: The method TermsEnum.skipTo() was deprecated, because it is used nowhere in core/contrib and there is only a very ineffective default implementation available. If you want to position a TermEnum to another Term, create a new one using IndexReader.terms(Term). (Uwe Schindler) * LUCENE-1614: DocIdSetIterator's next() and skipTo() were deprecated in favor of the new nextDoc() and advance(). The new methods return the doc Id they landed on, saving an extra call to doc() in most cases. For easy migration of the code, you can change the calls to next() to nextDoc() != DocIdSetIterator.NO_MORE_DOCS and similarly for skipTo(). However it is advised that you take advantage of the returned doc ID and not call doc() following those two. Also, doc() was deprecated in favor of docID(). docID() should return -1 or NO_MORE_DOCS if nextDoc/advance were not called yet, or NO_MORE_DOCS if the iterator has exhausted. Otherwise it should return the current doc ID. (Shai Erera via Mike McCandless) * LUCENE-1672: All ctors/opens and other methods using String/File to specify the directory in IndexReader, IndexWriter, and IndexSearcher were deprecated. You should instantiate the Directory manually before and pass it to these classes (LUCENE-1451, LUCENE-1658). (Uwe Schindler) * LUCENE-1677: The global property org.apache.lucene.SegmentReader.class, and ReadOnlySegmentReader.class are now deprecated, to be removed in 3.0. src/gcj/* has been removed. (Earwin Burrfoot) Some Scorers (like BooleanScorer) are much more efficient if out-of-order document scoring is allowed by a Collector. Collector must now implement acceptsDocsOutOfOrder. If you write a Collector which does not care about doc ID orderness, it is recommended that you return true. * LUCENE-1573: Do not ignore InterruptedException (caused by Thread.interrupt()) nor enter deadlock/spin loop. Now, an interrupt will cause a RuntimeException to be thrown.
In 3.0 we will change public APIs to throw InterruptedException. (Jeremy Volkman via Mike McCandless) * LUCENE-1590: Fixed stored-only Field instances do not change the value of omitNorms, omitTermFreqAndPositions in FieldInfo; when you retrieve such fields they will now have omitNorms=true and omitTermFreqAndPositions=false (though these values are unused). (Uwe Schindler via Mike McCandless) *11: Added expert API to open an IndexWriter on a prior commit, obtained from IndexReader.listCommits. This makes it possible to rollback changes to an index even after you've closed the IndexWriter that made the changes, assuming you are using an IndexDeletionPolicy that keeps past commits around. This is useful when building transactional support on top of Lucene. (Mike McCandless) * LUCENE-1382: Add an optional arbitrary Map (String -> String) "commitUserData" to IndexWriter.commit(), which is stored in the segments file and is then retrievable via IndexReader.getCommitUserData instance and static methods. (Shalin Shekhar Mangar via Mike McCandless) * LUCENE-1420: Similarity now has a computeNorm method that allows custom Similarity classes to override how norm is computed. It's provided a FieldInvertState instance that contains details from inverting the field. The default impl is boost * lengthNorm(numTerms), to be backwards compatible. Also added {set/get}DiscountOverlaps to DefaultSimilarity, to control whether overlapping tokens (tokens with 0 position increment) should be counted in lengthNorm. (Andrzej Bialecki via Mike McCandless) * LUCENE-1461: Added FieldCacheRangeFilter, a RangeFilter for single-term fields that uses FieldCache to compute the filter. If your documents all have a single term for a given field, and you need to create many RangeFilters with varying lower/upper bounds, then this is likely a much faster way to create the filters than RangeFilter.
FieldCacheRangeFilter allows ranges on all data types FieldCache supports (term ranges, byte, short, int, long, float, double). However, it comes at the expense of added RAM consumption and slower first-time usage due to populating the FieldCache. It also does not support collation (Tim Sturge, Matt Ericson via Mike McCandless and Uwe Schindler) * LUCENE-1296: add protected method CachingWrapperFilter.docIdSetToCache to allow subclasses to choose which DocIdSet implementation to use (Paul Elschot via Mike McCandless) * LUCENE-1390: Added ASCIIFoldingFilter, a Filter that converts alphabetic, numeric, and symbolic Unicode characters which are not in the first 127 ASCII characters (the "Basic Latin" Unicode block) into their ASCII equivalents, if one exists. ISOLatin1AccentFilter, which handles a subset of this filter, has been deprecated. (Andi Vajda, Steven Rowe via Mark Miller) * LUCENE-1487: Added FieldCacheTermsFilter, to filter by multiple terms on single-valued fields. The filter loads the FieldCache for the field the first time it's called, and subsequent usage of that field, even with different Terms in the filter, are fast. (Tim Sturge, Shalin Shekhar Mangar via Mike McCandless). * LUCENE-1314: Add clone(), clone(boolean readOnly) and reopen(boolean readOnly) to IndexReader. Cloning an IndexReader gives you a new reader which you can make changes to (deletions, norms) without affecting the original reader. Now, with clone or reopen you can change the readOnly of the original reader. (Jason Rutherglen, Mike McCandless) * LUCENE-1434: Added org.apache.lucene.util.IndexableBinaryStringTools, to encode byte[] as String values that are valid terms, and maintain sort order of the original byte[] when the bytes are interpreted as unsigned. (Steven Rowe) * LUCENE-1516: Added "near real-time search" to IndexWriter, via a new expert getReader() method. This method returns a reader that searches the full index, including any uncommitted changes in the current IndexWriter session.
This should result in a faster turnaround than the normal approach of committing the changes and then reopening a reader. (Jason Rutherglen via Mike McCandless) * LUCENE-1603: Added new MultiTermQueryWrapperFilter, to wrap any MultiTermQuery as a Filter. Also made some improvements to MultiTermQuery: return DocIdSet.EMPTY_DOCIDSET if there are no terms in the enum; track the total number of terms it visited during rewrite (getTotalNumberOfTerms). FilteredTermEnum is also more friendly to subclassing. (Uwe Schindler via Mike McCandless) * LUCENE-1605: Added BitVector.subset(). (Jeremy Volkman via Mike McCandless) * LUCENE-1618: Added FileSwitchDirectory that enables files with specified extensions to be stored in a primary directory and the rest of the files to be stored in the secondary directory. For example, this can be useful for the large doc-store (stored fields, term vectors) files in FSDirectory and the rest of the index files in a RAMDirectory. (Jason Rutherglen) * Added NumericRangeQuery and NumericRangeFilter, a fast alternative to RangeQuery/RangeFilter for numeric searches. They depend on a specific structure of terms in the index that can be created by indexing using the new NumericField or NumericTokenStream classes. NumericField can only be used for indexing and optionally stores the values as string representation in the doc store. Documents returned from IndexReader/IndexSearcher will return only the String value using the standard Fieldable interface. NumericFields can be sorted on and loaded into the FieldCache. (Uwe Schindler, Yonik Seeley, Mike McCandless) * LUCENE-1405: Added support for Ant resource collections in contrib/ant <index> task. (Przemyslaw Sztoch via Erik Hatcher) * LUCENE-1699: Allow setting a TokenStream on Field/Fieldable for indexing in conjunction with any other ways to specify stored field values, currently binary or string values.
(yonik)

* LUCENE-1701: Made the standard FieldCache.Parsers public and added parsers for fields generated using NumericField/NumericTokenStream. All standard parsers now also implement Serializable and enforce their singleton status. (Uwe Schindler, Mike McCandless)

* LUCENE-1741: User configurable maximum chunk size in MMapDirectory. On 32 bit platforms, the address space can be very fragmented, so one big ByteBuffer for the whole file may not fit into address space. (Eks Dev via Uwe Schindler)

* LUCENE-1644: Enable 4 rewrite modes for queries deriving from MultiTermQuery (WildcardQuery, PrefixQuery, TermRangeQuery, NumericRangeQuery): CONSTANT_SCORE_FILTER_REWRITE first creates a filter and then assigns constant score (boost) to docs; CONSTANT_SCORE_BOOLEAN_QUERY_REWRITE creates a BooleanQuery but uses a constant score (boost); SCORING_BOOLEAN_QUERY_REWRITE also creates a BooleanQuery but keeps the BooleanQuery's scores; CONSTANT_SCORE_AUTO_REWRITE tries to pick the most performant constant-score rewrite method. (Mike McCandless)

* LUCENE-1448: Added TokenStream.end(), to perform end-of-stream operations. This is currently used to fix offset problems when multiple fields with the same name are added to a document. (Mike McCandless, Mark Miller, Michael Busch)

* LUCENE-1776: Add an option to not collect payloads for an ordered SpanNearQuery. Payloads were not lazily loaded in this case as the javadocs implied. If you have payloads and want to use an ordered SpanNearQuery that does not need to use the payloads, you can disable loading them with a new constructor switch. (Mark Miller)

* LUCENE-1341: Added PayloadNearQuery to enable SpanNearQuery functionality with payloads (Peter Keegan, Grant Ingersoll, Mark Miller)

* LUCENE-1790: Added PayloadTermQuery to enable scoring of payloads based on the maximum payload seen for a document.
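Selecting one of the LUCENE-1644 rewrite modes is done per query instance; a minimal sketch (the "title" field and pattern are hypothetical):

```java
import org.apache.lucene.index.Term;
import org.apache.lucene.search.MultiTermQuery;
import org.apache.lucene.search.WildcardQuery;

// Sketch: choose a MultiTermQuery rewrite method explicitly.
public class RewriteSketch {
  public static boolean run() {
    WildcardQuery wq = new WildcardQuery(new Term("title", "lucen*"));
    // Constant-score filter rewrite avoids BooleanQuery clause limits
    // when the wildcard expands to many terms.
    wq.setRewriteMethod(MultiTermQuery.CONSTANT_SCORE_FILTER_REWRITE);
    return wq.getRewriteMethod() == MultiTermQuery.CONSTANT_SCORE_FILTER_REWRITE;
  }

  public static void main(String[] args) {
    System.out.println(run());
  }
}
```

The default (CONSTANT_SCORE_AUTO_REWRITE) is usually fine; setting a mode explicitly matters mainly for very large term expansions or when real scores are required.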
Slight refactoring of Similarity and other payload queries (Grant Ingersoll, Mark Miller)

* LUCENE-1749: Addition of FieldCacheSanityChecker utility, and hooks to use it in all existing Lucene Tests. This class can be used by any application to inspect the FieldCache and provide diagnostic information about the possibility of inconsistent FieldCache usage. Namely: FieldCache entries for the same field with different datatypes or parsers; and FieldCache entries for the same field in both a reader, and one of its (descendant) sub readers. (Chris Hostetter, Mark Miller)

* LUCENE-1789: Added utility class oal.search.function.MultiValueSource to ease the transition to segment based searching for any apps that directly call oal.search.function.* APIs. This class wraps any other ValueSource, but takes care when composite (multi-segment) readers are passed to not double RAM usage in the FieldCache. (Chris Hostetter, Mark Miller, Mike McCandless)

Optimizations

* LUCENE-1427: Fixed QueryWrapperFilter to not waste time computing scores of the query, since they are just discarded. Also, made it more efficient (single pass) by not creating & populating an intermediate OpenBitSet (Paul Elschot, Mike McCandless)

* LUCENE-1443: Performance improvement for OpenBitSetDISI.inPlaceAnd() (Paul Elschot via yonik)

* LUCENE-1484: Remove synchronization of IndexReader.document() by using CloseableThreadLocal internally. (Jason Rutherglen via Mike McCandless).

* LUCENE-1124: Short circuit FuzzyQuery.rewrite when input token length is small compared to minSimilarity. (Timo Nentwig, Mark Miller)

* LUCENE-1316: MatchAllDocsQuery now avoids the synchronized IndexReader.isDeleted() call per document, by directly accessing the underlying deleteDocs BitVector. This improves performance with non-readOnly readers, especially in a multi-threaded environment.
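The LUCENE-1749 checker can be invoked directly by applications; a sketch of the expected usage against the 2.9 utility (on a fresh JVM the cache is empty, so no problems are reported):

```java
import org.apache.lucene.search.FieldCache;
import org.apache.lucene.util.FieldCacheSanityChecker;
import org.apache.lucene.util.FieldCacheSanityChecker.Insanity;

// Sketch: inspect the process-wide FieldCache for suspicious double entries.
public class SanitySketch {
  public static int run() {
    Insanity[] problems = FieldCacheSanityChecker.checkSanity(FieldCache.DEFAULT);
    for (Insanity i : problems) {
      // Each Insanity describes a group of cache entries that look inconsistent.
      System.err.println(i);
    }
    return problems.length;
  }

  public static void main(String[] args) {
    System.out.println(run());
  }
}
```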
(Todd Feak, Yonik Seeley, Jason Rutherglen via Mike McCandless)

* LUCENE-1483: When searching over multiple segments we now visit each sub-reader one at a time. This speeds up warming, since FieldCache entries (if required) can be shared across reopens for those segments that did not change, and also speeds up searches that sort by relevance or by field values. (Mark Miller, Mike McCandless)

* LUCENE-1575: The new Collector class decouples collect() from score computation. Collector.setScorer is called to establish the current Scorer in-use per segment. Collectors that require the score should then call Scorer.score() per hit inside collect(). (Shai Erera via Mike McCandless)

* LUCENE-1596: MultiTermDocs speedup when set with MultiTermDocs.seek(MultiTermEnum) (yonik)

* LUCENE-1653: Avoid creating a Calendar in every call to DateTools#dateToString, DateTools#timeToString and DateTools#round. (Shai Erera via Mark Miller)

* LUCENE-1688: Deprecate static final String stop word array and replace it with an immutable implementation of CharArraySet. Removes conversions between Set and array. (Simon Willnauer via Mark Miller)

* LUCENE-1754: BooleanQuery.queryWeight.scorer() will return null if it won't match any documents (e.g. if there are no required and optional scorers, or not enough optional scorers to satisfy minShouldMatch). (Shai Erera via Mike McCandless)

* LUCENE-1607: To speed up string interning for commonly used strings, the StringHelper.intern() interface was added with a default implementation that uses a lockless cache. (Earwin Burrfoot, yonik)

* LUCENE-1800: QueryParser should use reusable TokenStreams. (yonik)

Documentation

* LUCENE-1908: Scoring documentation improvements in Similarity javadocs. (Mark Miller, Shai Erera, Ted Dunning, Jiri Kuhn, Marvin Humphrey, Doron Cohen)

* LUCENE-1872: NumericField javadoc improvements (Michael McCandless, Uwe Schindler)

* LUCENE-1875: Make TokenStream.end javadoc less confusing.
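A minimal Collector implementing the LUCENE-1575 contract looks like this (a sketch; the per-hit logic is hypothetical). Note that doc IDs passed to collect() are relative to the current segment, so docBase must be added to obtain top-level IDs:

```java
import java.io.IOException;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.search.Collector;
import org.apache.lucene.search.Scorer;

// Sketch: a Collector that counts hits and records top-level doc IDs.
public class CountingCollector extends Collector {
  private Scorer scorer;
  private int docBase;
  public int count;

  public void setScorer(Scorer scorer) {
    this.scorer = scorer;          // established once per segment
  }

  public void setNextReader(IndexReader reader, int docBase) {
    this.docBase = docBase;        // offset of this segment's doc IDs
  }

  public void collect(int doc) throws IOException {
    int topLevelDoc = docBase + doc;
    float score = scorer.score();  // only call score() if you actually need it
    count++;
  }

  public boolean acceptsDocsOutOfOrder() {
    return true;                   // we don't depend on doc ID order
  }
}
```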
(Uwe Schindler) * LUCENE-1862: Rectified duplicate package level javadocs for o.a.l.queryParser and o.a.l.analysis.cn. (Chris Hostetter) * LUCENE-1886: Improved hyperlinking in key Analysis javadocs (Bernd Fondermann via Chris Hostetter) * LUCENE-1884: massive javadoc and comment cleanup, primarily dealing with typos. (Robert Muir via Chris Hostetter) * LUCENE-1898: Switch changes to use bullets rather than numbers and update changes-to-html script to handle the new format. (Steven Rowe, Mark Miller) * LUCENE-1900: Improve Searchable Javadoc. (Nadav Har'El, Doron Cohen, Marvin Humphrey, Mark Miller) * LUCENE-1896: Improve Similarity#queryNorm javadocs. (Jiri Kuhn, Mark Miller) Build * LUCENE-1440: Add new targets to build.xml that allow downloading and executing the junit testcases from an older release for backwards-compatibility testing. (Michael Busch) * LUCENE-1446: Add compatibility tag to common-build.xml and run backwards-compatibility tests in the nightly build. (Michael Busch) * LUCENE-1529: Properly test "drop-in" replacement of jar with backwards-compatibility tests. (Mike McCandless, Michael Busch) * LUCENE-1851: Change 'javacc' and 'clean-javacc' targets to build and clean contrib/surround files. (Luis Alves via Michael Busch) * LUCENE-1854: tar task should use longfile="gnu" to avoid false file name length warnings. (Mark Miller) Test Cases * LUCENE-1791: Enhancements to the QueryUtils and CheckHits utility classes to wrap IndexReaders and Searchers in MultiReaders or MultiSearcher when possible to help exercise more edge cases. (Chris Hostetter, Mark Miller) * LUCENE-1852: Fix localization test failures. (Robert Muir via Michael Busch) * LUCENE-1843: Refactored all tests that use assertAnalyzesTo() & others in core and contrib to use a new BaseTokenStreamTestCase base class. Also rewrote some tests to use this general analysis assert functions instead of own ones (e.g. TestMappingCharFilter). 
The new base class also tests tokenization with the TokenStream.next() backwards layer enabled (using Token/TokenWrapper as attribute implementation) and disabled (default for Lucene 3.0) (Uwe Schindler, Robert Muir) * LUCENE-1836: Added a new LocalizedTestCase as base class for localization junit tests. (Robert Muir, Uwe Schindler via Michael Busch) ======================= Release 2.4.1 ======================= API Changes 1. LUCENE-1186: Add Analyzer.close() to free internal ThreadLocal resources. (Christian Kohlschütter via Mike McCandless) Bug fixes 1. LUCENE-1452: Fixed silent data-loss case whereby binary fields are truncated to 0 bytes during merging if the segments being merged are non-congruent (same field name maps to different field numbers). This bug was introduced with LUCENE-1219. (Andrzej Bialecki via Mike McCandless). 2. LUCENE-1429: Don't throw incorrect IllegalStateException from IndexWriter.close() if you've hit an OOM when autoCommit is true. (Mike McCandless) 3. LUCENE-1474: If IndexReader.flush() is called twice when there were pending deletions, it could lead to later false AssertionError during IndexReader.open. (Mike McCandless) 4. LUCENE-1430: Fix false AlreadyClosedException from IndexReader.open (masking an actual IOException) that takes String or File path. (Mike McCandless) 5. LUCENE-1442: Multiple-valued NOT_ANALYZED fields can double-count token offsets. (Mike McCandless) 6. LUCENE-1453: Ensure IndexReader.reopen()/clone() does not result in incorrectly closing the shared FSDirectory. This bug would only happen if you use IndexReader.open() with a File or String argument. The returned readers are wrapped by a FilterIndexReader that correctly handles closing of directory after reopen()/clone(). (Mark Miller, Uwe Schindler, Mike McCandless) 7. LUCENE-1457: Fix possible overflow bugs during binary searches. (Mark Miller via Mike McCandless) 8. 
LUCENE-1459: Fix CachingWrapperFilter to not throw exception if both bits() and getDocIdSet() methods are called. (Matt Jones via Mike McCandless) 9. LUCENE-1519: Fix int overflow bug during segment merging. (Deepak via Mike McCandless) 10. LUCENE-1521: Fix int overflow bug when flushing segment. (Shon Vella via Mike McCandless). 11. LUCENE-1544: Fix deadlock in IndexWriter.addIndexes(IndexReader[]). (Mike McCandless via Doug Sale) 12. LUCENE-1547: Fix rare thread safety issue if two threads call IndexWriter commit() at the same time. (Mike McCandless) 13. LUCENE-1465: NearSpansOrdered returns payloads from first possible match rather than the correct, shortest match; Payloads could be returned even if the max slop was exceeded; The wrong payload could be returned in certain situations. (Jonathan Mamou, Greg Shackles, Mark Miller) 14. LUCENE-1186: Add Analyzer.close() to free internal ThreadLocal resources. (Christian Kohlschütter via Mike McCandless) 15. LUCENE-1552: Fix IndexWriter.addIndexes(IndexReader[]) to properly rollback IndexWriter's internal state on hitting an exception. (Scott Garland via Mike McCandless) ======================= Release 2.4.0 ======================= 1.) 2.) 3.) 4. LUCENE-1396: Improve PhraseQuery.toString() so that gaps in the positions are indicated with a ? and multiple terms at the same position are joined with a |. (Andrzej Bialecki via Mike McCandless) API Changes 1.) 2.) 3. LUCENE-1044: Added IndexWriter.commit() which flushes any buffered adds/deletes and then commits a new segments file so readers will see the changes. Deprecate IndexWriter.flush() in favor of IndexWriter.commit(). (Mike McCandless) 4.) 5. LUCENE-1233: Return empty array instead of null when no fields match the specified name in these methods in Document: getFieldables, getFields, getValues, getBinaryValues. (Stefan Trcek via Mike McCandless) 6. LUCENE-1234: Make BoostingSpanScorer protected. (Andi Vajda via Grant Ingersoll)) 8.
LUCENE-852: Let the SpellChecker caller specify IndexWriter mergeFactor and RAM buffer size. (Otis Gospodnetic) 9. LUCENE-1290: Deprecate org.apache.lucene.search.Hits, Hit and HitIterator and remove all references to these classes from the core. Also update demos and tutorials. (Michael Busch) 10. LUCENE-1288: Add getVersion() and getGeneration() to IndexCommit. getVersion() returns the same value that IndexReader.getVersion() returns when the reader is opened on the same commit. (Jason Rutherglen via Mike McCandless) 11.) 12. LUCENE-1325: Added IndexCommit.isOptimized(). (Shalin Shekhar Mangar via Mike McCandless) 13. LUCENE-1324: Added TokenFilter.reset(). (Shai Erera via Mike McCandless) 14. LUCENE-1340: Added Fieldable.omitTf() method to skip indexing term frequency, positions and payloads. This saves index space, and indexing/searching time. (Eks Dev via Mike McCandless) 15. LUCENE-1219: Add basic reuse API to Fieldable for binary fields: getBinaryValue/Offset/Length(); currently only lazy fields reuse the provided byte[] result to getBinaryValue. (Eks Dev via Mike McCandless) 16. LUCENE-1334: Add new constructor for Term: Term(String fieldName) which defaults term text to "". (DM Smith via Mike McCandless) 17.)) 19. LUCENE-1367: Add IndexCommit.isDeleted(). (Shalin Shekhar Mangar via Mike McCandless) 20. LUCENE-1061: Factored out all "new XXXQuery(...)" in QueryParser.java into protected methods newXXXQuery(...) so that subclasses can create their own subclasses of each Query type. (John Wang via Mike McCandless) 21.) 22. LUCENE-1371: Added convenience method TopDocs Searcher.search(Query query, int n). (Mike McCandless) 23. LUCENE-1356: Allow easy extensions of TopDocCollector by turning constructor and fields from package to protected. (Shai Erera via Doron Cohen) 24. LUCENE-1375: Added convenience method IndexCommit.getTimestamp, which is equivalent to getDirectory().fileModified(getSegmentsFileName()). (Mike McCandless) 23. 
LUCENE-1366: Rename Field.Index options to be more accurate: TOKENIZED becomes ANALYZED; UN_TOKENIZED becomes NOT_ANALYZED; NO_NORMS becomes NOT_ANALYZED_NO_NORMS and a new ANALYZED_NO_NORMS is added. (Mike McCandless) 24. LUCENE-1131: Added numDeletedDocs method to IndexReader (Otis Gospodnetic) Bug fixes 1. LUCENE-1134: Fixed BooleanQuery.rewrite to only optimize a single clause query if minNumShouldMatch<=0. (Shai Erera via Michael Busch)) 3. LUCENE-1182: Added scorePayload to SimilarityDelegator (Andi Vajda via Grant Ingersoll) 4. LUCENE-1213: MultiFieldQueryParser was ignoring slop in case of a single field phrase. (Trejkaz via Doron Cohen) 5. LUCENE-1228: IndexWriter.commit() was not updating the index version and as a result IndexReader.reopen() failed to sense index changes. (Doron Cohen) 6. LUCENE-1267: Added numDocs() and maxDoc() to IndexWriter; deprecated docCount(). (Mike McCandless) 7.) 8. LUCENE-1003: Stop RussianAnalyzer from removing numbers. (TUSUR OpenTeam, Dmitry Lihachev via Otis Gospodnetic) 9. LUCENE-1152: SpellChecker fix around clearIndex and indexDictionary methods, plus removal of IndexReader reference. (Naveen Belkale via Otis Gospodnetic) 10. LUCENE-1046: Removed dead code in SpellChecker (Daniel Naber via Otis Gospodnetic) 11. LUCENE-1189: Fixed the QueryParser to handle escaped characters within quoted terms correctly. (Tomer Gabel via Michael Busch) 12. LUCENE-1299: Fixed NPE in SpellChecker when IndexReader is not null and field is null (Grant Ingersoll) 13.) 14. LUCENE-1310: Fixed SloppyPhraseScorer to work also for terms repeating more than twice in the query. (Doron Cohen) 15. LUCENE-1351: ISOLatin1AccentFilter now cleans additional ligatures (Cedrik Lime via Grant Ingersoll) 16. LUCENE-1383: Workaround a nasty "leak" in Java's builtin ThreadLocal, to prevent Lucene from causing unexpected OutOfMemoryError in certain situations (notably J2EE applications). (Chris Lu via Mike McCandless) New features 1.
LUCENE-1137: Added Token.set/getFlags() accessors for passing more information about a Token through the analysis process. The flag is not indexed/stored and is thus only used by analysis. 2. LUCENE-1147: Add -segment option to CheckIndex tool so you can check only a specific segment or segments in your index. (Mike McCandless) 3. LUCENE-1045: Reopened this issue to add support for short and bytes. 4.) 5. LUCENE-494: Added QueryAutoStopWordAnalyzer to allow for the automatic removal, from a query of frequently occurring terms. This Analyzer is not intended for use during indexing. (Mark Harwood via Grant Ingersoll)) 7.) 8. LUCENE-1184: Allow SnapshotDeletionPolicy to be re-used across close/re-open of IndexWriter while still protecting an open snapshot (Tim Brennan via Mike McCandless) 9. LUCENE-1194: Added IndexWriter.deleteDocuments(Query) to delete documents matching the specified query. Also added static unlock and isLocked methods (deprecating the ones in IndexReader). (Mike McCandless) 10. LUCENE-1201: Add IndexReader.getIndexCommit() method. (Tim Brennan via Mike McCandless) 11. LUCENE-550: Added InstantiatedIndex implementation. Experimental Index store similar to MemoryIndex but allows for multiple documents in memory. (Karl Wettin via Grant Ingersoll) 12.) 13. LUCENE-1166: Decomposition tokenfilter for languages like German and Swedish (Thomas Peuss via Grant Ingersoll) 14. LUCENE-1187: ChainedFilter and BooleanFilter now work with new Filter API and DocIdSetIterator-based filters. Backwards-compatibility with old BitSet-based filters is ensured. (Paul Elschot via Michael Busch) 15. LUCENE-1295: Added new method to MoreLikeThis for retrieving interesting terms and made retrieveTerms(int) public. (Grant Ingersoll) 16. LUCENE-1298: MoreLikeThis can now accept a custom Similarity (Grant Ingersoll) 17. LUCENE-1297: Allow other string distance measures for the SpellChecker (Thomas Morton via Otis Gospodnetic) 18. 
LUCENE-1001: Provide access to Payloads via Spans. All existing Span Query implementations in Lucene implement. (Mark Miller, Grant Ingersoll) 19. LUCENE-1354: Provide programmatic access to CheckIndex (Grant Ingersoll, Mike McCandless) 20. LUCENE-1279: Add support for Collators to RangeFilter/Query and Query Parser. (Steve Rowe via Grant Ingersoll) Optimizations 1.) 2. LUCENE-1120: Speed up merging of term vectors by bulk-copying the raw bytes for each contiguous range of non-deleted documents. (Mike McCandless) 3. LUCENE-1185: Avoid checking if the TermBuffer 'scratch' in SegmentTermEnum is null for every call of scanTo(). (Christian Kohlschuetter via Michael Busch) 4. LUCENE-1217: Internal to Field.java, use isBinary instead of runtime type checking for possible speedup of binaryValue(). (Eks Dev via Mike McCandless) 5. LUCENE-1183: Optimized TRStringDistance class (in contrib/spell) that uses less memory than the previous version. (Cédrik LIME via Otis Gospodnetic) 6.. (Michael Busch) Documentation 1. LUCENE-1236: Added some clarifying remarks to EdgeNGram*.java (Hiroaki Kawai via Grant Ingersoll) 2. LUCENE-1157 and LUCENE-1256: HTML changes log, created automatically from CHANGES.txt. This HTML file is currently visible only via developers page. (Steven Rowe via Doron Cohen) 3. LUCENE-1349: Fieldable can now be changed without breaking backward compatibility rules (within reason. See the note at the top of this file and also on Fieldable.java). (Grant Ingersoll) 4. LUCENE-1873: Update documentation to reflect current Contrib area status. (Steven Rowe, Mark Miller) Build 1. LUCENE-1153: Added JUnit JAR to new lib directory. Updated build to rely on local JUnit instead of ANT/lib. 2. LUCENE-1202: Small fixes to the way Clover is used to work better with contribs. Of particular note: a single clover db is used regardless of whether tests are run globally or in the specific contrib directories. 3. 
LUCENE-1353: Javacc target in contrib/miscellaneous for generating the precedence query parser. Test Cases) 2. LUCENE-1348: relax TestTimeLimitedCollector to not fail due to timeout exceeded (just because test machine is very busy). ======================= Release 2.3.2 ======================= Bug fixes 1. LUCENE-1191: On hitting OutOfMemoryError in any index-modifying methods in IndexWriter, do not commit any further changes to the index to prevent risk of possible corruption. (Mike McCandless) 2. LUCENE-1197: Fixed issue whereby IndexWriter would flush by RAM too early when TermVectors were in use. (Mike McCandless) 3. LUCENE-1198: Don't corrupt index if an exception happens inside DocumentsWriter.init (Mike McCandless) 4. LUCENE-1199: Added defensive check for null indexReader before calling close in IndexModifier.close() (Mike McCandless) 5. LUCENE-1200: Fix rare deadlock case in addIndexes* when ConcurrentMergeScheduler is in use (Mike McCandless) 6. LUCENE-1208: Fix deadlock case on hitting an exception while processing a document that had triggered a flush (Mike McCandless) 7. LUCENE-1210: Fix deadlock case on hitting an exception while starting a merge when using ConcurrentMergeScheduler (Mike McCandless) 8. LUCENE-1222: Fix IndexWriter.doAfterFlush to always be called on flush (Mark Ferguson via Mike McCandless) 9. LUCENE-1226: Fixed IndexWriter.addIndexes(IndexReader[]) to commit successfully created compound files. (Michael Busch) 10. LUCENE-1150: Re-expose StandardTokenizer's constants publicly; this was accidentally lost with LUCENE-966. (Nicolas Lalevée via Mike McCandless) 11. LUCENE-1262: Fixed bug in BufferedIndexReader.refill whereby on hitting an exception in readInternal, the buffer is incorrectly filled with stale bytes such that subsequent calls to readByte() return incorrect results. (Trejkaz via Mike McCandless) 12. 
LUCENE-1270: Fixed intermittent case where IndexWriter.close() would hang after IndexWriter.addIndexesNoOptimize had been called. (Stu Hood via Mike McCandless) Build 1. LUCENE-1230: Include *pom.xml* in source release files. (Michael Busch) ======================= Release 2.3.1 ======================= Bug fixes) ======================= Release 2.3.0 ======================= Changes in runtime behavior) 2. 1.) 2. LUCENE-963: Add setters to Field to allow for re-using a single Field instance during indexing. This is a sizable performance gain, especially for small documents. (Mike McCandless) 3.) 4.) 5.) 6. LUCENE-743: Add IndexReader.reopen() method that re-opens an existing IndexReader (see New features -> 8.) (Michael Busch) 7. LUCENE-1062: Add setData(byte[] data), setData(byte[] data, int offset, int length), getData(), getOffset() and clone() methods to o.a.l.index.Payload. Also add the field name as arg to Similarity.scorePayload(). (Michael Busch) 8. LUCENE-982: Add IndexWriter.optimize(int maxNumSegments) method to "partially optimize" an index down to maxNumSegments segments. (Mike McCandless) 9. LUCENE-1080: Changed Token.DEFAULT_TYPE to be public. 10. LUCENE-1064: Changed TopDocs constructor to be public. (Shai Erera via Michael Busch) 11. LUCENE-1079: DocValues cleanup: constructor now has no params, and getInnerArray() now throws UnsupportedOperationException (Doron Cohen) 12. LUCENE-1089: Added PriorityQueue.insertWithOverflow, which returns the Object (if any) that was bumped from the queue to allow re-use. (Shai Erera via Mike McCandless) 13. LUCENE-1101: Token reuse 'contract' (defined LUCENE-969) modified so it is token producer's responsibility to call Token.clear(). (Doron Cohen) 14. LUCENE-1118: Changed StandardAnalyzer to skip too-long (default > 255 characters) tokens. You can increase this limit by calling StandardAnalyzer.setMaxTokenLength(...). (Michael McCandless) Bug fixes 1. 
LUCENE-933: QueryParser fixed to not produce empty sub BooleanQueries "()" even if the Analyzer produced no tokens for input. (Doron Cohen) 2. LUCENE-955: Fixed SegmentTermPositions to work correctly with the first term in the dictionary. (Michael Busch) 3. LUCENE-951: Fixed NullPointerException in MultiLevelSkipListReader that was thrown after a call of TermPositions.seek(). (Rich Johnson via Michael Busch) 4. LUCENE-938: Fixed cases where an unhandled exception in IndexWriter's methods could cause deletes to be lost. (Steven Parkes via Mike McCandless) 5. LUCENE-962: Fixed case where an unhandled exception in IndexWriter.addDocument or IndexWriter.updateDocument could cause unreferenced files in the index to not be deleted (Steven Parkes via Mike McCandless) 6. LUCENE-957: RAMDirectory fixed to properly handle directories larger than Integer.MAX_VALUE. (Doron Cohen) 7.) 8. LUCENE-970: FilterIndexReader now implements isOptimized(). Before a call of isOptimized() would throw a NPE. (Michael Busch) 9. LUCENE-832: ParallelReader fixed to not throw NPE if isCurrent(), isOptimized() or getVersion() is called. (Michael Busch) 10. LUCENE-948: Fix FNFE exception caused by stale NFS client directory listing caches when writers on different machines are sharing an index over NFS and using a custom deletion policy (Mike McCandless) 11. LUCENE-978: Ensure TermInfosReader, FieldsReader, and FieldsReader close any streams they had opened if an exception is hit in the constructor. (Ning Li via Mike McCandless)) 13. LUCENE-991: The explain() method of BoostingTermQuery had errors when no payloads were present on a document. (Peter Keegan via Grant Ingersoll) 14. LUCENE-992: Fixed IndexWriter.updateDocument to be atomic again (this was broken by LUCENE-843). (Ning Li via Mike McCandless) 15. LUCENE-1008: Fixed corruption case when document with no term vector fields is added after documents with term vector fields. This bug was introduced with LUCENE-843. 
(Grant Ingersoll via Mike McCandless) 16. LUCENE-1006: Fixed QueryParser to accept a "" field value (zero length quoted string.) (yonik) 17.) 19. LUCENE-1009: Fix merge slowdown with LogByteSizeMergePolicy when autoCommit=false and documents are using stored fields and/or term vectors. (Mark Miller via Mike McCandless) 20. LUCENE-1011: Fixed corruption case when two or more machines, sharing an index over NFS, can be writers in quick succession. (Patrick Kimber via Mike McCandless) 21. LUCENE-1028: Fixed Weight serialization for few queries: DisjunctionMaxQuery, ValueSourceQuery, CustomScoreQuery. Serialization check added for all queries. (Kyle Maxwell via Doron Cohen) 22. LUCENE-1048: Fixed incorrect behavior in Lock.obtain(...) when the timeout argument is very large (eg Long.MAX_VALUE). Also added Lock.LOCK_OBTAIN_WAIT_FOREVER constant to never timeout. (Nikolay Diakov via Mike McCandless) 23. LUCENE-1050: Throw LockReleaseFailedException in Simple/NativeFSLockFactory if we fail to delete the lock file when releasing the lock. (Nikolay Diakov via Mike McCandless) 24. LUCENE-1071: Fixed SegmentMerger to correctly set payload bit in the merged segment. (Michael Busch) 25.) 26.) 27.) 28. LUCENE-749: ChainedFilter behavior fixed when logic of first filter is ANDNOT. (Antonio Bruno via Doron Cohen) 29. LUCENE-508: Make sure SegmentTermEnum.prev() is accurate (= last term) after next() returns false. (Steven Tamm via Mike McCandless) New features 1. LUCENE-906: Elision filter for French. (Mathieu Lecarme via Otis Gospodnetic) 2. LUCENE-960: Added a SpanQueryFilter and related classes to allow for not only filtering, but knowing where in a Document a Filter matches (Grant Ingersoll). 3.1 LUCENE-1038: Added setDocumentNumber() method to TermVectorMapper to provide information about what document is being accessed. (Karl Wettin via Grant Ingersoll) 4. LUCENE-975: Added PositionBasedTermVectorMapper that allows for position based lookup of term vector information. 
See item #3 above (LUCENE-868). 5.) 6. LUCENE-1015: Added FieldCache extension (ExtendedFieldCache) to support doubles and longs. Added support into SortField for sorting on doubles and longs as well. (Grant Ingersoll)) 8.) 9. LUCENE-1040: CharArraySet useful for efficiently checking set membership of text specified by char[]. (yonik) 10. LUCENE-1073: Created SnapshotDeletionPolicy to facilitate taking a live backup of an index without pausing indexing. (Mike McCandless) 11. LUCENE-1019: CustomScoreQuery enhanced to support multiple ValueSource queries. (Kyle Maxwell via Doron Cohen) 12.) 13. LUCENE-1380: Added TokenFilter for setting position increment in special cases related to the ShingleFilter (Mck SembWever, Steve Rowe, Karl Wettin via Grant Ingersoll) Optimizations 1. LUCENE-937: CachingTokenFilter now uses an iterator to access the Tokens that are cached in the LinkedList. This increases performance significantly, especially when the number of Tokens is large. (Mark Miller via Michael Busch)) 3. LUCENE-892: Fixed extra "buffer to buffer copy" that sometimes takes place when using compound files. (Mike McCandless) 4. LUCENE-959: Remove synchronization in Document (yonik) 5. LUCENE-963: Add setters to Field to allow for re-using a single Field instance during indexing. This is a sizable performance gain, especially for small documents. (Mike McCandless) 6. LUCENE-939: Check explicitly for boundary conditions in FieldInfos and don't rely on exceptions. (Michael Busch) 7. LUCENE-966: Very substantial speedups (~6X faster) for StandardTokenizer (StandardAnalyzer) by using JFlex instead of JavaCC to generate the tokenizer. (Stanislaw Osinski via Mike McCandless) 8. LUCENE-969: Changed core tokenizers & filters to re-use Token and TokenStream instances when possible to improve tokenization performance (~10-15%). (Mike McCandless) 9. LUCENE-871: Speedup ISOLatin1AccentFilter (Ian Boston via Mike McCandless) 10.) 11. 
LUCENE-1007: Allow flushing in IndexWriter to be triggered by either RAM usage or document count or both (whichever comes first), by adding symbolic constant DISABLE_AUTO_FLUSH to disable one of the flush triggers. (Ning Li via Mike McCandless) 12. LUCENE-1043: Speed up merging of stored fields by bulk-copying the raw bytes for each contiguous range of non-deleted documents. (Robert Engels via Mike McCandless) 13. LUCENE-693: Speed up nested conjunctions (~2x) that match many documents, and a slight performance increase for top level conjunctions. (yonik) 14. LUCENE-1098: Make inner class StandardAnalyzer.SavedStreams static and final. (Nathan Beyer via Michael Busch) Documentation 1. LUCENE-1051: Generate separate javadocs for core, demo and contrib classes, as well as an unified view. Also add an appropriate menu structure to the website. (Michael Busch) 2. LUCENE-746: Fix error message in AnalyzingQueryParser.getPrefixQuery. (Ronnie Kolehmainen via Michael Busch) Build 1. LUCENE-908: Improvements and simplifications for how the MANIFEST file and the META-INF dir are created. (Michael Busch) 2. LUCENE-935: Various improvements for the maven artifacts. Now the artifacts also include the sources as .jar files. (Michael Busch) 3.) 4. LUCENE-935: Defined property "m2.repository.url" to allow setting the url to a maven remote repository to deploy to. (Michael Busch) 5. LUCENE-1051: Include javadocs in the maven artifacts. (Michael Busch) 6. LUCENE-1055: Remove gdata-server from build files and its sources from trunk. (Michael Busch) 7. LUCENE-935: Allow to deploy maven artifacts to a remote m2 repository via scp and ssh authentication. (Michael Busch) 8. ======================= Changes in runtime behavior API Changes 1.) 2. LUCENE-811: make SegmentInfos class, plus a few methods from related classes, package-private again (they were unnecessarily made public as part of LUCENE-701). (Mike McCandless) 3.) 4. 
LUCENE-818: changed most public methods of IndexWriter, IndexReader (and its subclasses), FieldsReader and RAMDirectory to throw AlreadyClosedException if they are accessed after being closed. (Mike McCandless) 5. LUCENE-834: Changed some access levels for certain Span classes to allow them to be overridden. They have been marked expert only and not for public consumption. (Grant Ingersoll) 6. LUCENE-796: Removed calls to super.* from various get*Query methods in MultiFieldQueryParser, in order to allow sub-classes to override them. (Steven Parkes via Otis Gospodnetic) 7. LUCENE-857: Removed caching from QueryFilter and deprecated QueryFilter in favour of QueryWrapperFilter or QueryWrapperFilter + CachingWrapperFilter combination when caching is desired. (Chris Hostetter, Otis Gospodnetic) 8. LUCENE-869: Changed FSIndexInput and FSIndexOutput to inner classes of FSDirectory to enable extensibility of these classes. (Michael Busch) 9. LUCENE-580: Added the public method reset() to TokenStream. This method does nothing by default, but may be overwritten by subclasses to support consuming the TokenStream more than once. (Michael Busch) 10. LUCENE-580: Added a new constructor to Field that takes a TokenStream as argument, available as tokenStreamValue(). This is useful to avoid the need of "dummy analyzers" for pre-analyzed fields. (Karl Wettin, Michael Busch) 11.) 12. LUCENE-888: Added Directory.openInput(File path, int bufferSize) to optionally specify the size of the read buffer. Also added BufferedIndexInput.setBufferSize(int) to change the buffer size. (Mike McCandless) 13. LUCENE-923: Make SegmentTermPositionVector package-private. It does not need to be public because it implements the public interface TermPositionVector. (Michael Busch) Bug fixes 1. LUCENE-804: Fixed build.xml to pack a fully compilable src dist. (Doron Cohen) 2. LUCENE-813: Leading wildcard fixed to work with trailing wildcard. 
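The LUCENE-857 migration path (QueryWrapperFilter, or QueryWrapperFilter plus CachingWrapperFilter when caching is desired) can be sketched as follows; the field and term are hypothetical:

```java
import org.apache.lucene.index.Term;
import org.apache.lucene.search.CachingWrapperFilter;
import org.apache.lucene.search.Filter;
import org.apache.lucene.search.QueryWrapperFilter;
import org.apache.lucene.search.TermQuery;

// Sketch: replacement for the deprecated QueryFilter.
public class FilterSketch {
  public static Filter uncached() {
    // Wrap any Query as a Filter, no caching.
    return new QueryWrapperFilter(new TermQuery(new Term("type", "book")));
  }

  public static Filter cached() {
    // Cache the filter's doc set per reader, as QueryFilter used to do internally.
    return new CachingWrapperFilter(uncached());
  }

  public static void main(String[] args) {
    System.out.println(cached().getClass().getSimpleName());
  }
}
```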
Query parser modified to create a prefix query only for the case that there is a single trailing wildcard (and no additional wildcard or '?' in the query text). (Doron Cohen)

 3.)

 4. LUCENE-821: The new single-norm-file introduced by LUCENE-756 failed to reduce the number of open descriptors since it was still opened once per field with norms. (yonik)

 5. LUCENE-823: Make sure internal file handles are closed when hitting an exception (eg disk full) while flushing deletes in IndexWriter's mergeSegments, and also during IndexWriter.addIndexes. (Mike McCandless)

 6. LUCENE-825: If directory is removed after FSDirectory.getDirectory() but before IndexReader.open you now get a FileNotFoundException like Lucene pre-2.1 (before this fix you got an NPE). (Mike McCandless)

 7.)

 8. LUCENE-372: QueryParser.parse() now ensures that the entire input string is consumed. Now a ParseException is thrown if a query contains too many closing parentheses. (Andreas Neumann via Michael Busch)

 9. LUCENE-814: javacc build targets now fix line-end-style of generated files. Now also deleting all javacc generated files before calling javacc. (Steven Parkes, Doron Cohen)

10. LUCENE-829: close readers in contrib/benchmark. (Karl Wettin, Doron Cohen)

11. LUCENE-828: Minor fix for Term's equal(). (Paul Cowan via Otis Gospodnetic)

12.)

13. LUCENE-736: Sloppy phrase query with repeating terms matches wrong docs. For example query "B C B"~2 matches the doc "A B C D E". (Doron Cohen)

14.)

15. LUCENE-880: Fixed DocumentWriter to close the TokenStreams after it has written the postings. Then the resources associated with the TokenStreams can safely be released. (Michael Busch)

16. LUCENE-883: consecutive calls to Spellchecker.indexDictionary() won't insert terms twice anymore. (Daniel Naber)

17. LUCENE-881: QueryParser.escape() now also escapes the characters '|' and '&' which are part of the queryparser syntax. (Michael Busch)

18.
LUCENE-886: Spellchecker clean up: exceptions aren't printed to STDERR anymore and ignored, but re-thrown. Some javadoc improvements. (Daniel Naber)

19. LUCENE-698: FilteredQuery now takes the query boost into account for scoring. (Michael Busch)

20. LUCENE-763: Spellchecker: LuceneDictionary used to skip first word in enumeration. (Christian Mallwitz via Daniel Naber)

21. LUCENE-903: FilteredQuery explanation inaccuracy with boost. Explanation tests now "deep" check the explanation details. (Chris Hostetter, Doron Cohen)

22. LUCENE-912: DisjunctionMaxScorer first skipTo(target) call ignores the skip target param and ends up at the first match. (Sudaakeran B. via Chris Hostetter & Doron Cohen)

23. LUCENE-913: Two consecutive score() calls return different scores for Boolean Queries. (Michael Busch, Doron Cohen)

24.

 1. LUCENE-759: Added two n-gram-producing TokenFilters. (Otis Gospodnetic)

 2. LUCENE-822: Added FieldSelector capabilities to Searchable for use with RemoteSearcher, and other Searchable implementations. (Mark Miller, Grant Ingersoll)

 3.)

 4. LUCENE-834: Added BoostingTermQuery which can boost scores based on the values of a payload (see #3 above). (Grant Ingersoll)

 5. LUCENE-834: Similarity has a new method for scoring payloads called scorePayloads that can be overridden to take advantage of payload storage (see #3 above)

 6. LUCENE-834: Added isPayloadAvailable() onto TermPositions interface and implemented it in the appropriate places (Grant Ingersoll)

 7. LUCENE-853: Added RemoteCachingWrapperFilter to enable caching of Filters on the remote side of the RMI connection. (Matt Ericson via Otis Gospodnetic)

 8. LUCENE-446: Added Solr's search.function for scores based on field values, plus CustomScoreQuery for simple score (post) customization. (Yonik Seeley, Doron Cohen)

 1. LUCENE-761: The proxStream is now cloned lazily in SegmentTermPositions when nextPosition() is called for the first time.
This allows using instances of SegmentTermPositions instead of SegmentTermDocs without additional costs. (Michael Busch)

 2. LUCENE-431: RAMInputStream and RAMOutputStream extend IndexInput and IndexOutput directly now. This avoids further buffering and thus avoids unnecessary array copies. (Michael Busch)

 3.)

 4. LUCENE-882: Spellchecker doesn't store the ngrams anymore but only indexes them to keep the spell index small. (Daniel Naber)

 5. LUCENE-430: Delay allocation of the buffer after a clone of BufferedIndexInput. Together with LUCENE-888 this will allow to adjust the buffer size dynamically. (Paul Elschot, Michael Busch)

 6.)

 7.

 1. LUCENE-791 && INFRA-1173: Infrastructure moved the Wiki to Updated the links in the docs and wherever else I found references. (Grant Ingersoll, Joe Schaefer)

 2. LUCENE-807: Fixed the javadoc for ScoreDocComparator.compare() to be consistent with java.util.Comparator.compare(): Any integer is allowed to be returned instead of only -1/0/1. (Paul Cowan via Michael Busch)

 3.)

 4. LUCENE-740: Added SNOWBALL-LICENSE.txt to the snowball package and a remark about the license to NOTICE.TXT. (Steven Parkes via Michael Busch)

 5. LUCENE-925: Added analysis package javadocs. (Grant Ingersoll and Doron Cohen)

 6. LUCENE-926: Added document package javadocs. (Grant Ingersoll)

Build

 1. LUCENE-802: Added LICENSE.TXT and NOTICE.TXT to Lucene jars. (Steven Parkes via Michael Busch)

 2. LUCENE-885: "ant test" now includes all contrib tests. The new "ant test-core" target can be used to run only the Core (non contrib) tests. (Chris Hostetter)

 3. LUCENE-900: "ant test" now enables Java assertions (in Lucene packages). (Doron Cohen)

 4. LUCENE-894: Add custom build file for binary distributions that includes targets to build the demos. (Chris Hostetter, Michael Busch)

 5. LUCENE-904: The "package" targets in build.xml now also generate .md5 checksum files. (Chris Hostetter, Michael Busch)

 6.
LUCENE-907: Include LICENSE.TXT and NOTICE.TXT in the META-INF dirs of demo war, demo jar, and the contrib jars. (Michael Busch)

 7. LUCENE-909: Demo targets for running the demo. (Doron Cohen)

 8.)

 9. LUCENE-930: Various contrib building improvements to ensure contrib dependencies are met, and test compilation errors fail the build. (Steven Parkes, Chris Hostetter)

10. LUCENE-622: Add ant target and pom.xml files for building maven artifacts of the Lucene core and the contrib modules. (Sami Siren, Karl Wettin, Michael Busch)

======================= Release 2.1.0 =======================

 1.)

 2. LUCENE-478: Updated the list of Unicode code point ranges for CJK (now split into CJ and K) in StandardAnalyzer. (John Wang and Steven Rowe via Otis Gospodnetic)

 3. Modified some CJK Unicode code point ranges in StandardTokenizer.jj, and added a few more of them to increase CJK character coverage. Also documented some of the ranges. (Otis Gospodnetic)

 4. LUCENE-489: Add support for leading wildcard characters (*, ?) to QueryParser. Default is to disallow them, as before. (Steven Parkes via Otis Gospodnetic)

 5. LUCENE-703: QueryParser changed to default to use of ConstantScoreRangeQuery for range queries. Added useOldRangeQuery property to QueryParser to allow selection of old RangeQuery class if required. (Mark Harwood)

 6. LUCENE-543: WildcardQuery now performs a TermQuery if the provided term does not contain a wildcard character (? or *), when previously a StringIndexOutOfBoundsException was thrown. (Michael Busch via Erik Hatcher)

 7. LUCENE-726: Removed the use of deprecated doc.fields() method and Enumeration. (Michael Busch via Otis Gospodnetic)

 8.)

 9.

 1. LUCENE-503: New ThaiAnalyzer and ThaiWordFilter in contrib/analyzers (Samphan Raruenrom via Chris Hostetter)

 2. LUCENE-545: New FieldSelector API and associated changes to IndexReader and implementations. New Fieldable interface for use with the lazy field loading mechanism.
(Grant Ingersoll and Chuck Williams via Grant Ingersoll)

 3. LUCENE-676: Move Solr's PrefixFilter to Lucene core. (Yura Smolsky, Yonik Seeley)

 4. LUCENE-678: Added NativeFSLockFactory, which implements locking using OS native locking (via java.nio.*). (Michael McCandless via Yonik Seeley)

 5. LUCENE-544: Added the ability to specify different boosts for different fields when using MultiFieldQueryParser (Matt Ericson via Otis Gospodnetic)

 6. LUCENE-528: New IndexWriter.addIndexesNoOptimize() that doesn't optimize the index when adding new segments, only performing merges as needed. (Ning Li via Yonik Seeley)

 7. LUCENE-573: QueryParser now allows backslash escaping in quoted terms and phrases. (Michael Busch via Yonik Seeley)

 8. LUCENE-716: QueryParser now allows specification of Unicode characters in terms via a unicode escape of the form \uXXXX (Michael Busch via Yonik Seeley)

 9. LUCENE-709: Added RAMDirectory.sizeInBytes(), IndexWriter.ramSizeInBytes() and IndexWriter.flushRamSegments(), allowing applications to control the amount of memory used to buffer documents. (Chuck Williams via Yonik Seeley)

10. LUCENE-723: QueryParser now parses *:* as MatchAllDocsQuery (Yonik Seeley)

11. LUCENE-741: Command-line utility for modifying or removing norms on fields in an existing index. This is mostly based on LUCENE-496 and lives in contrib/miscellaneous. (Chris Hostetter, Otis Gospodnetic)

12. LUCENE-759: Added NGramTokenizer and EdgeNGramTokenizer classes and their passing unit tests. (Otis Gospodnetic)

13.)

14. LUCENE-762: Added in SIZE and SIZE_AND_BREAK FieldSelectorResult options which allow one to retrieve the size of a field without retrieving the actual field. (Chuck Williams via Grant Ingersoll)

15. LUCENE-799: Properly handle lazy, compressed fields. (Mike Klaas via Grant Ingersoll)

API Changes

 1. LUCENE-438: Remove "final" from Token, implement Cloneable, allow changing of termText via setTermText(). (Yonik Seeley)

 2.
org.apache.lucene.analysis.nl.WordlistLoader has been deprecated and is supposed to be replaced with the WordlistLoader class in package org.apache.lucene.analysis (Daniel Naber)

 3. LUCENE-609: Revert return type of Document.getField(s) to Field for backward compatibility, added new Document.getFieldable(s) for access to new lazy loaded fields. (Yonik Seeley)

 4. LUCENE-608: Document.fields() has been deprecated and a new method Document.getFields() has been added that returns a List instead of an Enumeration (Daniel Naber)

 5. LUCENE-605: New Explanation.isMatch() method and new ComplexExplanation subclass allows explain methods to produce Explanations which model "matching" independent of having a positive value. (Chris Hostetter)

 6.)

 7.)

 8.)

 9. LUCENE-657: Made FuzzyQuery non-final and inner ScoreTerm protected. (Steven Parkes via Otis Gospodnetic)

10.)

11. LUCENE-722: DEFAULT_MIN_DOC_FREQ was misspelled DEFALT_MIN_DOC_FREQ in Similarity's MoreLikeThis class. The misspelling has been replaced by the correct spelling. (Andi Vajda via Daniel Naber)

12.)

13.)

14. LUCENE-732: DateTools support has been added to QueryParser, with setters for both the default Resolution, and per-field Resolution. For backwards compatibility, DateField is still used if no Resolutions are specified. (Michael Busch via Chris Hostetter)

15. Added isOptimized() method to IndexReader. (Otis Gospodnetic)

16. LUCENE-773: Deprecate the FSDirectory.getDirectory(*) methods that take a boolean "create" argument. Instead you should use IndexWriter's "create" argument to create a new index. (Mike McCandless)

17. LUCENE-780: Add a static Directory.copy() method to copy files from one Directory to another. (Jiri Kuhn via Mike McCandless)

18. LUCENE-773: Added Directory.clearLock(String name) to forcefully remove an old lock. The default implementation is to ask the lockFactory (if non null) to clear the lock. (Mike McCandless)

19.
LUCENE-795: Directory.renameFile() has been deprecated as it is not used anymore inside Lucene. (Daniel Naber)

Bug fixes

 1. Fixed the web application demo (built with "ant war-demo") which didn't work because it used a QueryParser method that had been removed (Daniel Naber)

 2. LUCENE-583: ISOLatin1AccentFilter fails to preserve positionIncrement (Yonik Seeley)

 3. LUCENE-575: SpellChecker min score is incorrectly changed by suggestSimilar (Karl Wettin via Yonik Seeley)

 4. LUCENE-587: Explanation.toHtml was producing malformed HTML (Chris Hostetter)

 5. Fix to allow MatchAllDocsQuery to be used with RemoteSearcher (Yonik Seeley)

 6. LUCENE-601: RAMDirectory and RAMFile made Serializable (Karl Wettin via Otis Gospodnetic)

 7. LUCENE-557: Fixes to BooleanQuery and FilteredQuery so that the score Explanations match up with the real scores. (Chris Hostetter)

 8. LUCENE-607: ParallelReader's TermEnum fails to advance properly to new fields (Chuck Williams, Christian Kohlschuetter via Yonik Seeley)

 9. LUCENE-610,LUCENE-611: Simple syntax changes to allow compilation with ecj: disambiguate inner class scorer's use of doc() in BooleanScorer2, other test code changes. (DM Smith via Yonik Seeley)

10. LUCENE-451: All core query types now use ComplexExplanations so that boosts of zero don't confuse the BooleanWeight explain method. (Chris Hostetter)

11. LUCENE-593: Fixed LuceneDictionary's inner Iterator (Kåre Fiedler Christiansen via Otis Gospodnetic)

12. LUCENE-641: fixed an off-by-one bug with IndexWriter.setMaxFieldLength() (Daniel Naber)

13. LUCENE-659: Make PerFieldAnalyzerWrapper delegate getPositionIncrementGap() to the correct analyzer for the field. (Chuck Williams via Yonik Seeley)

14. LUCENE-650: Fixed NPE in Locale specific String Sort when Document has no value. (Oliver Hutchison via Chris Hostetter)

15. LUCENE-683: Fixed data corruption when reading lazy loaded fields. (Yonik Seeley)

16.
LUCENE-678: Fixed bug in NativeFSLockFactory which caused the same lock to be shared between different directories. (Michael McCandless via Yonik Seeley)

17. LUCENE-690: Fixed thread unsafe use of IndexInput by lazy loaded fields. (Yonik Seeley)

18. LUCENE-696: Fix bug when scorer for DisjunctionMaxQuery has skipTo() called on it before next(). (Yonik Seeley)

19. LUCENE-569: Fixed SpanNearQuery bug, for 'inOrder' queries it would fail to recognize ordered spans if they overlapped with unordered spans. (Paul Elschot via Chris Hostetter)

20. LUCENE-706: Updated fileformats.xml|html concerning the docdelta value in the frequency file. (Johan Stuyts, Doron Cohen via Grant Ingersoll)

21. LUCENE-715: Fixed private constructor in IndexWriter.java to properly release the acquired write lock if there is an IOException after acquiring the write lock but before finishing instantiation. (Matthew Bogosian via Mike McCandless)

22. LUCENE-651: Multiple different threads requesting the same FieldCache entry (often for Sorting by a field) at the same time caused multiple generations of that entry, which was detrimental to performance and memory use. (Oliver Hutchison via Otis Gospodnetic)

23. LUCENE-717: Fixed build.xml not to fail when there is no lib dir. (Doron Cohen via Otis Gospodnetic)

24. LUCENE-728: Removed duplicate/old MoreLikeThis and SimilarityQueries classes from contrib/similarity, as their new home is under contrib/queries. (Otis Gospodnetic)

25.)

26.)

27. LUCENE-129: Change finalizers to do "try {...} finally {super.finalize();}" to make sure we don't miss finalizers in classes above us. (Esmond Pitt via Mike McCandless)

28. LUCENE-754: Fix a problem introduced by LUCENE-651, causing IndexReaders to hang around forever, in addition to not fixing the original FieldCache performance problem. (Chris Hostetter, Yonik Seeley)

29.)

30.
LUCENE-768: Fix case where an Exception during deleteDocument, undeleteAll or setNorm in IndexReader could leave the reader in a state where close() fails to release the write lock. (Mike McCandless)

31. Remove "tvp" from known index file extensions because it is never used. (Nicolas Lalevée via Bernhard Messer)

32.

 1. LUCENE-586: TermDocs.skipTo() is now more efficient for multi-segment indexes. This will improve the performance of many types of queries against a non-optimized index. (Andrew Hudson via Yonik Seeley)

 2. LUCENE-623: RAMDirectory.close now nulls out its reference to all internal "files", allowing them to be GCed even if references to the RAMDirectory itself still exist. (Nadav Har'El via Chris Hostetter)

 3. LUCENE-629: Compressed fields are no longer uncompressed and recompressed during segment merges (e.g. during indexing or optimizing), thus improving performance. (Michael Busch via Otis Gospodnetic)

 4. LUCENE-388: Improve indexing performance when maxBufferedDocs is large by keeping a count of buffered documents rather than counting after each document addition. (Doron Cohen, Paul Smith, Yonik Seeley)

 5. Modified TermScorer.explain to use TermDocs.skipTo() instead of looping through docs. (Grant Ingersoll)

 7. Lazy loaded fields unnecessarily retained an extra copy of loaded String data. (Yonik Seeley)

 8. LUCENE-443: ConjunctionScorer performance increase. Speed up any BooleanQuery with more than one mandatory clause. (Abdul Chaudhry, Paul Elschot via Yonik Seeley)

 9. LUCENE-365: DisjunctionSumScorer performance increase of ~30%. Speeds up queries with optional clauses. (Paul Elschot via Yonik Seeley)

10. LUCENE-695: Optimized BufferedIndexInput.readBytes() for medium size buffers, which will speed up merging and retrieving binary and compressed fields. (Nadav Har'El via Yonik Seeley)

11. LUCENE-687: Lazy skipping on proximity file speeds up most queries involving term positions, including phrase queries. (Michael Busch via Yonik Seeley)

12.
LUCENE-714: Replaced 2 cases of manual for-loop array copying with calls to System.arraycopy instead, in DocumentWriter.java. (Nicolas Lalevee via Mike McCandless)

13. LUCENE-729: Non-recursive skipTo and next implementation of TermDocs for a MultiReader. The old implementation could recurse up to the number of segments in the index. (Yonik Seeley)

14. LUCENE-739: Improve segment merging performance by reusing the norm array across different fields and doing bulk writes of norms of segments with no deleted docs. (Michael Busch via Yonik Seeley)

15. LUCENE-745: Add BooleanQuery.clauses(), allowing direct access to the List of clauses and replaced the internal synchronized Vector with an unsynchronized List. (Yonik Seeley)

16. LUCENE-750: Remove finalizers from FSIndexOutput and move the FSIndexInput finalizer to the actual file so all clones don't register a new finalizer. (Yonik Seeley)

Test Cases

 1. Added TestTermScorer.java (Grant Ingersoll)

 2. Added TestWindowsMMap.java (Benson Margulies via Mike McCandless)

 3. LUCENE-744: Append the user.name property onto the temporary directory that is created so it doesn't interfere with other users. (Grant Ingersoll)

Documentation

 1. Added style sheet to xdocs named lucene.css and included in the Anakia VSL descriptor. (Grant Ingersoll)

 2. Added scoring.xml document into xdocs. Updated Similarity.java scoring formula. (Grant Ingersoll and Steve Rowe. Updates from: Michael McCandless, Doron Cohen, Chris Hostetter, Doug Cutting). Issue 664.

 3. Added javadocs for FieldSelectorResult.java. (Grant Ingersoll)

 4.)

 5. Added in Developer and System Requirements sections under Resources (Grant Ingersoll)

 6. LUCENE-713: Updated the Term Vector section of File Formats to include documentation on how Offset and Position info are stored in the TVF file. (Grant Ingersoll, Samir Abdou)

 7. Added in link to Clover Test Code Coverage Reports under the Develop section in Resources (Grant Ingersoll)

 8.
LUCENE-748: Added details for semantics of IndexWriter.close on hitting an Exception. (Jed Wesley-Smith via Mike McCandless)

 9. Added some text about what is contained in releases. (Eric Haszlakiewicz via Grant Ingersoll)

10. LUCENE-758: Fix javadoc to clarify that RAMDirectory(Directory) makes a full copy of the starting Directory. (Mike McCandless)

11. LUCENE-764: Fix javadocs to detail temporary space requirements for IndexWriter's optimize(), addIndexes(*) and addDocument(...) methods. (Mike McCandless)

Build

 1. Added in clover test code coverage per To enable clover code coverage, you must have clover.jar in the ANT classpath and specify -Drun.clover=true on the command line. (Michael Busch and Grant Ingersoll)

 2. Added a sysproperty in common-build.xml per Lucene 752 to map java.io.tmpdir to ${build.dir}/test just like the tempDir sysproperty.

 3. LUCENE-757: Added new target named init-dist that does setup for distribution of both binary and source distributions. Called by package and package-*-src

======================= Release 2.0.0 =======================

API Changes

 1. All deprecated methods and fields have been removed, except DateField, which will still be supported for some time so Lucene can read its date fields from old indexes (Yonik Seeley & Grant Ingersoll)

 2. DisjunctionSumScorer is no longer public. (Paul Elschot via Otis Gospodnetic)

 3. Creating a Field with both an empty name and an empty value now throws an IllegalArgumentException (Daniel Naber)

 4.

 1. LUCENE-496: Command line tool for modifying the field norms of an existing index; added to contrib/miscellaneous. (Chris Hostetter)

 2. LUCENE-577: SweetSpotSimilarity added to contrib/miscellaneous. (Chris Hostetter)

Bug fixes

 1. LUCENE-330: Fix issue of FilteredQuery not working properly within BooleanQuery. (Paul Elschot via Erik Hatcher)

 2. LUCENE-515: Make ConstantScoreRangeQuery and ConstantScoreQuery work with RemoteSearchable. (Philippe Laflamme via Yonik Seeley)

 3.
Added methods to get/set writeLockTimeout and commitLockTimeout in IndexWriter. These could be set in Lucene 1.4 using a system property. This feature had been removed without adding the corresponding getter/setter methods. (Daniel Naber)

 4. LUCENE-413: Fixed ArrayIndexOutOfBoundsException exceptions when using SpanQueries. (Paul Elschot via Yonik Seeley)

 5. Implemented FilterIndexReader.getVersion() and isCurrent() (Yonik Seeley)

 6. LUCENE-540: Fixed a bug with IndexWriter.addIndexes(Directory[]) that sometimes caused the index order of documents to change. (Yonik Seeley)

 7. LUCENE-526: Fixed a bug in FieldSortedHitQueue that caused subsequent String sorts with different locales to sort identically. (Paul Cowan via Yonik Seeley)

 8. LUCENE-541: Add missing extractTerms() to DisjunctionMaxQuery (Stefan Will via Yonik Seeley)

 9. LUCENE-514: Added getTermArrays() and extractTerms() to MultiPhraseQuery (Eric Jain & Yonik Seeley)

10. LUCENE-512: Fixed ClassCastException in ParallelReader.getTermFreqVectors (frederic via Yonik)

11. LUCENE-352: Fixed bug in SpanNotQuery that manifested as NullPointerException when "exclude" query was not a SpanTermQuery. (Chris Hostetter)

12. LUCENE-572: Fixed bug in SpanNotQuery hashCode, was ignoring exclude clause (Chris Hostetter)

13.)

14. LUCENE-556: Added empty extractTerms() implementation to MatchAllDocsQuery and ConstantScoreQuery in order to allow their use with a MultiSearcher. (Yonik Seeley)

15. LUCENE-546: Removed 2GB file size limitations for RAMDirectory. (Peter Royal, Michael Chan, Yonik Seeley)

16. LUCENE-485: Don't hold commit lock while removing obsolete index files. (Luc Vanlerberghe via cutting)

1.9.1

Bug fixes

 1. LUCENE-511: Fix a bug in the BufferedIndexOutput optimization introduced in 1.9-final. (Shay Banon & Steven Tamm via cutting)

1.9 final

Bug fixes

 1. The fix that made IndexWriter.setMaxBufferedDocs(1) work had negative effects on indexing performance and has thus been reverted.
The argument for setMaxBufferedDocs(int) must now at least be 2, otherwise an exception is thrown. (Daniel Naber)

Optimizations

 1. Optimized BufferedIndexOutput.writeBytes() to use System.arraycopy() in more cases, rather than copying byte-by-byte. (Lukas Zapletal via Cutting)

1.9 RC1

Requirements

 1. To compile and use Lucene you now need Java 1.4 or later.

Changes in runtime behavior

 1.)

 2. Changed system property from "org.apache.lucene.lockdir" to "org.apache.lucene.lockDir", so that its casing follows the existing pattern used in other Lucene system properties. (Bernhard)

 3.)

 4.)

 5.)

 6. The version of an IndexReader, as returned by getCurrentVersion() and getVersion() doesn't start at 0 anymore for new indexes. Instead, it is now initialized by the system time in milliseconds. (Bernhard Messer via Daniel Naber)

 7.)

 8. Fixed FieldCacheImpl to use user-provided IntParser and FloatParser, instead of using Integer and Float classes for parsing. (Yonik Seeley via Otis Gospodnetic)

 9. Expert level search routines returning TopDocs and TopFieldDocs no longer normalize scores. This also fixes bugs related to MultiSearchers and score sorting/normalization. (Luc Vanlerberghe via Yonik Seeley, LUCENE-469)

New features

 1. Added support for stored compressed fields (patch #31149) (Bernhard Messer via Christoph)

 2. Added support for binary stored fields (patch #29370) (Drew Farris and Bernhard Messer via Christoph)

 3. Added support for position and offset information in term vectors (patch #18927). (Grant Ingersoll & Christoph)

 4.)

 5. QueryParser now correctly works with Analyzers that can return more than one token per position. For example, a query "+fast +car" would be parsed as "+fast +(car automobile)" if the Analyzer returns "car" and "automobile" at the same position whenever it finds "car" (Patch #23307). (Pierrick Brihaye, Daniel Naber)

 6.)

 7. Add native Directory and TermDocs implementations that work under GCJ.
These require GCC 3.4.0 or later and have only been tested on Linux. Use 'ant gcj' to build demo applications. (cutting)

 8.)

 9. Added javadocs-internal to build.xml - bug #30360 (Paul Elschot via Otis)

10. Added RangeFilter, a more generically useful filter than DateFilter. (Chris M Hostetter via Erik)

11. Added NumberTools, a utility class indexing numeric fields. (adapted from code contributed by Matt Quail; committed by Erik)

12. Added public static IndexReader.main(String[] args) method. IndexReader can now be used directly at command line level to list and optionally extract the individual files from an existing compound index file. (adapted from code contributed by Garrett Rooney; committed by Bernhard)

13. Add IndexWriter.setTermIndexInterval() method. See javadocs. (Doug Cutting)

14. Added LucenePackage, whose static get() method returns java.util.Package, which lets the caller get the Lucene version information specified in the Lucene Jar. (Doug Cutting via Otis)

15. Added Hits.iterator() method and corresponding HitIterator and Hit objects. This provides standard java.util.Iterator iteration over Hits. Each call to the iterator's next() method returns a Hit object. (Jeremy Rayner via Erik)

16. Add ParallelReader, an IndexReader that combines separate indexes over different fields into a single virtual index. (Doug Cutting)

17. Add IntParser and FloatParser interfaces to FieldCache, so that fields in arbitrarily formats can be cached as ints and floats. (Doug Cutting)

18. Added class org.apache.lucene.index.IndexModifier which combines IndexWriter and IndexReader, so you can add and delete documents without worrying about synchronization/locking issues. (Daniel Naber)

19. Lucene can now be used inside an unsigned applet, as Lucene's access to system properties will not cause a SecurityException anymore. (Jon Schuster via Daniel Naber, bug #34359)

20. Added a new class MatchAllDocsQuery that matches all documents.
(John Wang via Daniel Naber, bug #34946)

21. Added ability to omit norms on a per field basis to decrease index size and memory consumption when there are many indexed fields. See Field.setOmitNorms() (Yonik Seeley, LUCENE-448)

22. Added NullFragmenter to contrib/highlighter, which is useful for highlighting entire documents or fields. (Erik Hatcher)

23. Added regular expression queries, RegexQuery and SpanRegexQuery. Note the same term enumeration caveats apply with these queries as apply to WildcardQuery and other term expanding queries. These two new queries are not currently supported via QueryParser. (Erik Hatcher)

24. Added ConstantScoreQuery which wraps a filter and produces a score equal to the query boost for every matching document. (Yonik Seeley, LUCENE-383)

25.)

26. Added ability to specify a minimum number of optional clauses that must match in a BooleanQuery. See BooleanQuery.setMinimumNumberShouldMatch(). (Paul Elschot, Chris Hostetter via Yonik Seeley, LUCENE-395)

27. Added DisjunctionMaxQuery which provides the maximum score across its clauses. It's very useful for searching across multiple fields. (Chuck Williams via Yonik Seeley, LUCENE-323)

28. New class ISOLatin1AccentFilter that replaces accented characters in the ISO Latin 1 character set by their unaccented equivalent. (Sven Duzont via Erik Hatcher)

29. New class KeywordAnalyzer. "Tokenizes" the entire stream as a single token. This is useful for data like zip codes, ids, and some product names. (Erik Hatcher)

30. Copied LengthFilter from contrib area to core. Removes words that are too long and too short from the stream. (David Spencer via Otis and Daniel)

31.)

32. StopFilter can now ignore case when checking for stop words. (Grant Ingersoll via Yonik, LUCENE-248)

33. Add TopDocCollector and TopFieldDocCollector. These simplify the implementation of hit collectors that collect only the top-scoring or top-sorting hits.

API Changes

 1. Several methods and fields have been deprecated.
The API documentation contains information about the recommended replacements. It is planned that most of the deprecated methods and fields will be removed in Lucene 2.0. (Daniel Naber)

 2. The Russian and the German analyzers have been moved to contrib/analyzers. Also, the WordlistLoader class has been moved one level up in the hierarchy and is now org.apache.lucene.analysis.WordlistLoader (Daniel Naber)

 3. The API contained methods that declared to throw an IOException but that never did this. These declarations have been removed. If your code tries to catch these exceptions you might need to remove those catch clauses to avoid compile errors. (Daniel Naber)

 4. Add a serializable Parameter Class to standardize parameter enum classes in BooleanClause and Field. (Christoph)

 5. Added rewrite methods to all SpanQuery subclasses that nest other SpanQuerys. This allows custom SpanQuery subclasses that rewrite (for term expansion, for example) to nest within the built-in SpanQuery classes successfully.

Bug fixes

 1. The JSP demo page (src/jsp/results.jsp) now properly closes the IndexSearcher it opens. (Daniel Naber)

 2. Fixed a bug in IndexWriter.addIndexes(IndexReader[] readers) that prevented deletion of obsolete segments. (Christoph Goller)

 3. Fix in FieldInfos to avoid the return of an extra blank field in IndexReader.getFieldNames() (Patch #19058). (Mark Harwood via Bernhard)

 4. Some combinations of BooleanQuery and MultiPhraseQuery (formerly PhrasePrefixQuery) could provoke UnsupportedOperationException (bug #33161). (Rhett Sutphin via Daniel Naber)

 5. Small bug in skipTo of ConjunctionScorer that caused NullPointerException if skipTo() was called without prior call to next() fixed. (Christoph)

 6.)

 7. Getting a lock file with Lock.obtain(long) was supposed to wait for a given amount of milliseconds, but this didn't work. (John Wang via Daniel Naber, Bug #33799)

 8. Fix FSDirectory.createOutput() to always create new files.
Previously, existing files were overwritten, and an index could be corrupted when the old version of a file was longer than the new. Now any existing file is first removed. (Doug Cutting)

 9. Fix BooleanQuery containing nested SpanTermQuery's, which previously could return an incorrect number of hits. (Reece Wilton via Erik Hatcher, Bug #35157)

10. Fix NullPointerException that could occur with a MultiPhraseQuery inside a BooleanQuery. (Hans Hjelm and Scotty Allen via Daniel Naber, Bug #35626)

11. Fixed SnowballFilter to pass through the position increment from the original token. (Yonik Seeley via Erik Hatcher, LUCENE-437)

12.)

13. FieldsReader now looks at FieldInfo.storeOffsetWithTermVector and FieldInfo.storePositionWithTermVector and creates the Field with correct TermVector parameter. (Frank Steinmann via Bernhard, LUCENE-455)

14. Fixed WildcardQuery to prevent "cat" matching "ca??". (Xiaozheng Ma via Bernhard, LUCENE-306)

15. Fixed a bug where MultiSearcher and ParallelMultiSearcher could change the sort order when sorting by string for documents without a value for the sort field. (Luc Vanlerberghe via Yonik, LUCENE-453)

16. Fixed a sorting problem with MultiSearchers that can lead to missing or duplicate docs due to equal docs sorting in an arbitrary order. (Yonik Seeley, LUCENE-456)

17. A single hit using the expert level sorted search methods resulted in the score not being normalized. (Yonik Seeley, LUCENE-462)

18. Fixed inefficient memory usage when loading an index into RAMDirectory. (Volodymyr Bychkoviak via Bernhard, LUCENE-475)

19. Corrected term offsets returned by ChineseTokenizer. (Ray Tsang via Erik Hatcher, LUCENE-324)

20. Fixed MultiReader.undeleteAll() to correctly update numDocs. (Robert Kirchgessner via Doug Cutting, LUCENE-479)

21. Race condition in IndexReader.getCurrentVersion() and isCurrent() fixed by acquiring the commit lock. (Luc Vanlerberghe via Yonik Seeley, LUCENE-481)

22.
IndexWriter.setMaxBufferedDocs(1) didn't have the expected effect, this has now been fixed. (Daniel Naber) 23. Fixed QueryParser when called with a date in local form like "[1/16/2000 TO 1/18/2000]". This query did not include the documents of 1/18/2000, i.e. the last day was not included. (Daniel Naber) 24. Removed sorting constraint that threw an exception if there were not yet any values for the sort field (Yonik Seeley, LUCENE-374) Optimizations 1. Disk usage (peak requirements during indexing and optimization) in case of compound file format has been improved. (Bernhard, Dmitry, and Christoph) 2.) 3. Removed synchronization from reading of term vectors with an IndexReader (Patch #30736). (Bernhard Messer via Christoph) 4. Optimize term-dictionary lookup to allocate far fewer terms when scanning for the matching term. This speeds searches involving low-frequency terms, where the cost of dictionary lookup can be significant. (cutting) 5. Optimize fuzzy queries so the standard fuzzy queries with a prefix of 0 now run 20-50% faster (Patch #31882). (Jonathan Hager via Daniel Naber) 6. by Paul Elschot via Christoph) 7. Use uncached access to norms when merging to reduce RAM usage. (Bug #32847). (Doug Cutting) 8.) 9. Optimize IndexWriter.addIndexes(Directory[]) when the number of added indexes is larger than mergeFactor. Previously this could result in quadratic performance. Now performance is n log(n). (Doug Cutting) 10. Speed up the creation of TermEnum for indices with multiple segments and deleted documents, and thus speed up PrefixQuery, RangeQuery, WildcardQuery, FuzzyQuery, RangeFilter, DateFilter, and sorting the first time on a field. (Yonik Seeley, LUCENE-454) 11. Optimized and generalized 32 bit floating point to byte (custom 8 bit floating point) conversions. Increased the speed of Similarity.encodeNorm() anywhere from 10% to 250%, depending on the JVM. (Yonik Seeley, LUCENE-467) Infrastructure 1. 
Lucene's source code repository has converted from CVS to Subversion. The new repository is at 2. Lucene's issue tracker has migrated from Bugzilla to JIRA. Lucene's JIRA is at The old issues are still available at (use the bug number instead of xxxx) 1.4.3 1.) 2. QueryParser changes in 1.4.2 broke the QueryParser API. Now the old API is supported again. (Christoph) 1.4.2 1. Fixed bug #31241: Sorting could lead to incorrect results (documents missing, others duplicated) if the sort keys were not unique and there were more than 100 matches. (Daniel Naber) 2. Memory leak in Sort code (bug #31240) eliminated. (Rafal Krzewski via Christoph and Daniel) 3.) 4. PhraseQuery and PhrasePrefixQuery now allow the explicit specification of relative positions. (Christoph Goller) 5. QueryParser changes: Fix for ArrayIndexOutOfBoundsExceptions (patch #9110); some unused method parameters removed; The ability to specify a minimum similarity for FuzzyQuery has been added. (Christoph Goller) 6. IndexSearcher optimization: a new ScoreDoc is no longer allocated for every non-zero-scoring hit. This makes 'OR' queries that contain common terms substantially faster. (cutting) 1.4.1 1. Fixed a performance bug in hit sorting code, where values were not correctly cached. (Aviran via cutting) 2. Fixed errors in file format documentation. (Daniel Naber) 1.4 final 1. Added "an" to the list of stop words in StopAnalyzer, to complement the existing "a" there. Fix for bug 28960 (). (Otis) 2. Added new class FieldCache to manage in-memory caches of field term values. (Tim Jones) 3.) 4. Changed the encoding of GermanAnalyzer.java and GermanStemmer.java to UTF-8 and changed the build encoding to UTF-8, to make changed files compile. (Otis Gospodnetic) 5. Removed synchronization from term lookup under IndexReader methods termFreq(), termDocs() or termPositions() to improve multi-threaded performance. (cutting) 6. Fix a bug where obsolete segment files were not deleted on Win32. 1.4 RC3 1. 
Fixed several search bugs introduced by the skipTo() changes in release 1.4RC1. The index file format was changed a bit, so collections must be re-indexed to take advantage of the skipTo() optimizations. (Christoph Goller) 2. Added new Document methods, removeField() and removeFields(). (Christoph Goller) 3. Fixed inconsistencies with index closing. Indexes and directories are now only closed automatically by Lucene when Lucene opened them automatically. (Christoph Goller) 4. Added new class: FilteredQuery. (Tim Jones) 5. Added a new SortField type for custom comparators. (Tim Jones) 6. Lock obtain timed out message now displays the full path to the lock file. (Daniel Naber via Erik) 7. Fixed a bug in SpanNearQuery when ordered. (Paul Elschot via cutting) 8. Fixed so that FSDirectory's locks still work when the java.io.tmpdir system property is null. (cutting) 9. Changed FilteredTermEnum's constructor to take no parameters, as the parameters were ignored anyway (bug #28858) 1.4 RC2 1. GermanAnalyzer now throws an exception if the stopword file cannot be found (bug #27987). It now uses LowerCaseFilter (bug #18410) (Daniel Naber via Otis, Erik) 2. Fixed a few bugs in the file format documentation. (cutting) 1.4 RC1 1.) 2. Added an optimized implementation of TermDocs.skipTo(). A skip table is now stored for each term in the .frq file. This only adds a percent or two to overall index size, but can substantially speedup many searches. (cutting) 3.) 4. Added new class ParallelMultiSearcher. Combined with RemoteSearchable this makes it easy to implement distributed search systems. (Jean-Francois Halleux via cutting) 5. Added support for hit sorting. Results may now be sorted by any indexed field. For details see the javadoc for Searcher#search(Query, Sort). (Tim Jones via Cutting) 6. Changed FSDirectory to auto-create a full directory tree that it needs by using mkdirs() instead of mkdir(). (Mladen Turk via Otis) 7. Added a new span-based query API. 
This implements, among other things, nested phrases. See javadocs for details. (Doug Cutting) 8.) 9. Added MultiReader, an IndexReader that combines multiple other IndexReaders. (Cutting) 10. Added support for term vectors. See Field#isTermVectorStored(). (Grant Ingersoll, Cutting & Dmitry) 11. Fixed the old bug with escaping of special characters in query strings: (Jean-Francois Halleux via Otis) 12. Added support for overriding default values for the following, using system properties: - default commit lock timeout - default maxFieldLength - default maxMergeDocs - default mergeFactor - default minMergeDocs - default write lock timeout (Otis) 13. Changed QueryParser.jj to allow '-' and '+' within tokens: (Morus Walter via Otis) 14. Changed so that the compound index format is used by default. This makes indexing a bit slower, but vastly reduces the chances of file handle problems. (Cutting) 1.3 final 1. Added catch of BooleanQuery$TooManyClauses in QueryParser to throw ParseException instead. (Erik Hatcher) 2. Fixed a NullPointerException in Query.explain(). (Doug Cutting)()). 5. Fix StandardTokenizer's handling of CJK characters (Chinese, Japanese and Korean ideograms). Previously contiguous sequences were combined in a single token, which is not very useful. Now each ideogram generates a separate token, which is more useful. 1.3 RC3 1. Added minMergeDocs in IndexWriter. This can be raised to speed indexing without altering the number of files, but only using more memory. (Julien Nioche via Otis) 2. Fix bug #24786, in query rewriting. (bschneeman via Cutting) 3. Fix bug #16952, in demo HTML parser, skip comments in javascript. (Christoph Goller) 4. Fix bug #19253, in demo HTML parser, add whitespace as needed to output (Daniel Naber via Christoph Goller) 5. Fix bug #24301, in demo HTML parser, long titles no longer hang things. (Christoph Goller) 6. 
Fix bug #23534, Replace use of file timestamp of segments file with an index version number stored in the segments file. This resolves problems when running on file systems with low-resolution timestamps, e.g., HFS under MacOS X. (Christoph Goller) 7. Fix QueryParser so that TokenMgrError is not thrown, only ParseException. (Erik Hatcher) 8. Fix some bugs introduced by change 11 of RC2. (Christoph Goller) 9. Fixed a problem compiling TestRussianStem. (Christoph Goller) 10. Cleaned up some build stuff. (Erik Hatcher) 1.3 RC2 1. Added getFieldNames(boolean) to IndexReader, SegmentReader, and SegmentsReader. (Julien Nioche via otis) 2. Changed file locking to place lock files in System.getProperty("java.io.tmpdir"), where all users are permitted to write files. This way folks can open and correctly lock indexes which are read-only to them. 3. IndexWriter: added a new method, addDocument(Document, Analyzer), permitting one to easily use different analyzers for different documents in the same index. 4. Minor enhancements to FuzzyTermEnum. (Christoph Goller via Otis) 5. PriorityQueue: added insert(Object) method and adjusted IndexSearcher and MultiIndexSearcher to use it. (Christoph Goller via Otis) 6. Fixed a bug in IndexWriter that returned incorrect docCount(). (Christoph Goller via Otis) 7.) 8. Added CachingWrapperFilter and PerFieldAnalyzerWrapper. (Erik Hatcher) 9. Added support for the new "compound file" index format (Dmitry Serebrennikov) 10. Added Locale setting to QueryParser, for use by date range parsing. 11.) 12. Added a limit to the number of clauses which may be added to a BooleanQuery. The default limit is 1024 clauses. This should stop most OutOfMemoryExceptions by prefix, wildcard and fuzzy queries which run amok. (cutting) 13. Add new method: IndexReader.undeleteAll(). This undeletes all deleted documents which still remain in the index. (cutting) 1.3 RC1 1. Fixed PriorityQueue's clear() method. Fix for bug 9454, (Matthijs Bomhoff via otis) 2. 
Changed StandardTokenizer.jj grammar for EMAIL tokens. Fix for bug 9015, (Dale Anson via otis) 3. Added the ability to disable lock creation by using disableLuceneLocks system property. This is useful for read-only media, such as CD-ROMs. (otis) 4. Added id method to Hits to be able to access the index global id. Required for sorting options. (carlson) 5. Added support for new range query syntax to QueryParser.jj. (briangoetz) 6. Added the ability to retrieve HTML documents' META tag values to HTMLParser.jj. (Mark Harwood via otis) 7. Modified QueryParser to make it possible to programmatically specify the default Boolean operator (OR or AND). (Péter Halácsy via otis) 8. Made many search methods and classes non-final, per requests. This includes IndexWriter and IndexSearcher, among others. (cutting) 9. Added class RemoteSearchable, providing support for remote searching via RMI. The test class RemoteSearchableTest.java provides an example of how this can be used. (cutting) 10. Added PhrasePrefixQuery (and supporting MultipleTermPositions). The test class TestPhrasePrefixQuery provides the usage example. (Anders Nielsen via otis) 11. Changed the German stemming algorithm to ignore case while stripping. The new algorithm is faster and produces more equal stems from nouns and verbs derived from the same word. (gschwarz) 12.)) 14.) 15. Added a new IndexWriter method, getAnalyzer(). This returns the analyzer used when adding documents to this index. (cutting) 16. Fixed a bug with IndexReader.lastModified(). Before, document deletion did not update this. Now it does. (cutting) 17. Added Russian Analyzer. (Boris Okner via otis) 18. Added a public, extensible scoring API. For details, see the javadoc for org.apache.lucene.search.Similarity. 19. Fixed return of Hits.id() from float to int. (Terry Steichen via Peter). 20. Added getFieldNames() to IndexReader and Segment(s)Reader classes. (Peter Mularien via otis) 21. Added getFields(String) and getValues(String) methods. 
Contributed by Rasik Pandey on 2002-10-09 (Rasik Pandey via otis) 22. Revised internal search APIs. Changes include:. Caution: These are extensive changes and they have not yet been tested extensively. Bug reports are appreciated. (cutting) 23. Added convenience RAMDirectory constructors taking File and String arguments, for easy FSDirectory to RAMDirectory conversion. (otis) 24. Added code for manual renaming of files in FSDirectory, since it has been reported that java.io.File's renameTo(File) method sometimes fails on Windows JVMs. (Matt Tucker via otis) 25. Refactored QueryParser to make it easier for people to extend it. Added the ability to automatically lower-case Wildcard terms in the QueryParser. (Tatu Saloranta via otis) 1.2 RC6 1. Changed QueryParser.jj to have "?" be a special character which allowed it to be used as a wildcard term. Updated TestWildcard unit test also. (Ralf Hettesheimer via carlson) 1.2 RC5 1. Renamed build.properties to default.properties and updated the BUILD.txt document to describe how to override the default.property settings without having to edit the file. This brings the build process closer to Scarab's build process. (jon) 2. Added MultiFieldQueryParser class. (Kelvin Tan, via otis) 3. Updated "powered by" links. (otis) 4. Fixed instruction for setting up JavaCC - Bug #7017 (otis) 5. Added throwing exception if FSDirectory could not create directory - Bug #6914 (Eugene Gluzberg via otis) 6. Update MultiSearcher, MultiFieldParse, Constants, DateFilter, LowerCaseTokenizer javadoc (otis) 7. Added fix to avoid NullPointerException in results.jsp (Mark Hayes via otis) 8. Changed Wildcard search to find 0 or more char instead of 1 or more (Lee Mallobone, via otis) 9. Fixed error in offset issue in GermanStemFilter - Bug #7412 (Rodrigo Reyes, via otis) 10. Added unit tests for wildcard search and DateFilter (otis) 11. Allow co-existence of indexed and non-indexed fields with the same name (cutting/casper, via otis) 12. 
Add escape character to query parser. (briangoetz) 13. Applied a patch that ensures that searches that use DateFilter don't throw an exception when no matches are found. (David Smiley, via otis) 14. Fixed bugs in DateFilter and wildcardquery unit tests. (cutting, otis, carlson)) 1.2 RC. 1.2 RC1 (first Apache release): -. 1.01b (last Sourceforge release) . a few bug fixes . new Query Parser . new prefix query (search for "foo*" matches "food") 1.0 This release fixes a few serious bugs and also includes some performance optimizations, a stemmer, and a few other minor enhancements. 0.04. 0.01 First open source release. The code has been re-organized into a new package and directory structure for this release. It builds OK, but has not been tested beyond that since the re-organization.
http://alvinalexander.com/java/jwarehouse/lucene/CHANGES.txt.shtml
CC-MAIN-2020-24
refinedweb
23,107
51.85
I'm learning Python and I loop like this over the JSON converted to a dictionary. It works, but is this the correct method? Thank you :)

    import json
    output_file = open('output.json').read()
    output_json = json.loads(output_file)

    for i in output_json:
        print i
        for k in output_json[i]:
            print k, output_json[i][k]

    print output_json['webm']['audio']
    print output_json['h264']['video']
    print output_json['ogg']

output.json:

    {
        "webm": {
            "video": "libvp8",
            "audio": "libvorbis"
        },
        "h264": {
            "video": "libx264",
            "audio": "libfaac"
        },
        "ogg": {
            "video": "libtheora",
            "audio": "libvorbis"
        }
    }

Output:

    h264
    audio libfaac
    video libx264
    ogg
    audio libvorbis
    video libtheora
    webm
    audio libvorbis
    video libvp8
    libvorbis
    libx264
    {u'audio': u'libvorbis', u'video': u'libtheora'}

That seems generally fine. There's no need to first read the file, then use loads. You can just use load directly.

    output_json = json.load(open('/tmp/output.json'))

Using i and k isn't correct for this. They should generally be used only for an integer loop counter. In this case they're keys, so something more appropriate would be better. Perhaps rename i as container and k as stream? Something that communicates more information will be easier to read and maintain. You can use output_json.iteritems() to iterate over both the key and the value at the same time.

    for majorkey, subdict in output_json.iteritems():
        print majorkey
        for subkey, value in subdict.iteritems():
            print subkey, value
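For anyone reading this on Python 3: print is a function there and iteritems() is gone, so the answer's pattern becomes items(). A sketch under those assumptions (the sample JSON is inlined so the snippet is self-contained; with a real file you would use json.load inside a with block):

```python
import json

# Inlined copy of the output.json from the question, so this runs standalone.
raw = """{
    "webm": {"video": "libvp8",    "audio": "libvorbis"},
    "h264": {"video": "libx264",   "audio": "libfaac"},
    "ogg":  {"video": "libtheora", "audio": "libvorbis"}
}"""

codecs = json.loads(raw)

# Descriptive names instead of i/k, as the answer suggests.
for container, streams in codecs.items():
    print(container)
    for stream, library in streams.items():
        print(stream, library)

print(codecs['webm']['audio'])  # libvorbis
```

Reading from an actual file would then be `with open('output.json') as f: codecs = json.load(f)`, which also closes the file handle that the original one-liner leaves open.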
https://codedump.io/share/m8GR7FOTWGwc/1/python-read-json-and-loop-dictionary
In this article we are going to explore probability with Python, with particular emphasis on discrete random variables. Discrete values are ones which can be counted, as opposed to measured. This is a fundamental distinction in mathematics. Something that not everyone realises about measurements is that they can never be fully accurate. For example, if I tell you that a person's height is 1.77m, that value has been rounded to two decimal places. If I were to measure more precisely, the height might turn out to be 1.77132m to five decimal places. This is quite precise, but in theory the precision could be improved ad infinitum. This is not the case with discrete values. They always represent an exact number. This means in some ways they are easier to work with.

Discrete Random Variables

A discrete random variable is a variable which only takes discrete values, determined by the outcome of some random phenomenon. Discrete random variables are often denoted by a capital letter (e.g. X, Y, Z). The probability of each value of a discrete random variable occurring is between 0 and 1, and the sum of all the probabilities is equal to 1.

Some examples of discrete random variables are:

- The outcome of flipping a coin
- The outcome of rolling a die
- The number of occupants of a household
- The number of students in a class
- Marks in an exam
- The number of applicants for a job

Discrete Probability Distributions

A random variable can take different values at different times. In many situations, some values will be encountered more often than others. The description of the probability of each possible value that a discrete random variable can take is called a discrete probability distribution. The technical name for the function mapping a particular value of a discrete random variable to its associated probability is a probability mass function (pmf).

Confused by all the terminology? Don't worry.
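The definitions above can be pinned down in a few lines of Python before the plotting examples: a pmf is just a mapping from each possible value to its probability, with every probability between 0 and 1 and the total equal to 1. A minimal sketch for a fair die:

```python
# Probability mass function for a fair six-sided die:
# each outcome maps to its probability.
pmf = {outcome: 1 / 6 for outcome in range(1, 7)}

# Each probability lies between 0 and 1 ...
assert all(0 <= p <= 1 for p in pmf.values())

# ... and they sum to 1 (up to floating-point rounding).
assert abs(sum(pmf.values()) - 1) < 1e-9
```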
We'll take a look at some examples now, and use Python to help us understand discrete probability distributions.

Python Code Listing for a Discrete Probability Distribution

Check out this example. You may need to install some of the modules if you haven't already. If you are not familiar with Numpy, Matplotlib and Seaborn, allow me to introduce you…

    import numpy as np
    import matplotlib.pyplot as plt
    import seaborn as sns

    NUM_ROLLS = 1000
    values = [1, 2, 3, 4, 5, 6]
    sample = np.random.choice(values, NUM_ROLLS)

    # Numpy arrays containing counts for each side
    side, count = np.unique(sample, return_counts=True)
    probs = count / len(sample)

    # Plot the results
    sns.barplot(side, probs)
    plt.title(
        f"Discrete Probability Distribution for Fair 6-Sided Die ({NUM_ROLLS} rolls)")
    plt.ylabel("Probability")
    plt.xlabel("Outcome")
    plt.show()

In this example there is an implied random variable (let's call it X), which can take the values 1, 2, 3, 4, 5 or 6. A sample of NUM_ROLLS size is generated and the results plotted using seaborn and matplotlib. The code makes use of numpy to create a sample, and seaborn to easily create a visually clear and pleasing bar plot.

Simulating a Biased Die with Python

The code above can be amended just slightly to produce and display a sample for a weighted (biased) die. Here the 6 side has a probability of 0.5 while for all the other sides it is 0.1.
    import numpy as np
    import matplotlib.pyplot as plt
    import seaborn as sns

    NUM_ROLLS = 1000
    values = [1, 2, 3, 4, 5, 6]
    probs = [0.1, 0.1, 0.1, 0.1, 0.1, 0.5]

    # Draw a weighted sample
    sample = np.random.choice(values, NUM_ROLLS, p=probs)

    # Numpy arrays containing counts for each side
    side, count = np.unique(sample, return_counts=True)
    probs = count / len(sample)

    # Plot the results
    sns.barplot(side, probs)
    plt.title(
        f"Discrete Probability Distribution for Biased 6-Sided Die ({NUM_ROLLS} rolls)")
    plt.ylabel("Probability")
    plt.xlabel("Outcome")
    plt.show()

Discrete Normal Distribution of Shoe Sizes with Python

Finally, let's take a look at how we can create a normal distribution and plot it using Python, Numpy and Seaborn. Let's say that we learn women's shoes in a particular population have a mean size of 5 with a standard deviation of 1. We can use the same code as before to plot the distribution, except that we create our sample with the following two lines instead of sample = np.random.choice(values, NUM_ROLLS, p=probs):

    sample = np.random.normal(loc=5, scale=1, size=NUM_ROLLS)
    sample = np.round(sample).astype(int)  # Convert to integers

Here is the result – a discrete normal distribution for women's shoe sizes:

In this article we have looked at how to create and plot discrete probability distributions with Python. I hope you found it interesting and useful. Happy computing!
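As a short postscript to the examples above: once a pmf is written down explicitly, summary statistics come straight from the definition, with no sampling needed. A sketch using the biased-die probabilities from earlier:

```python
values = [1, 2, 3, 4, 5, 6]
probs = [0.1, 0.1, 0.1, 0.1, 0.1, 0.5]

# Expected value (mean): sum of outcome * probability.
mean = sum(v * p for v, p in zip(values, probs))

# Variance: probability-weighted squared deviation from the mean.
variance = sum(p * (v - mean) ** 2 for v, p in zip(values, probs))

print(mean, variance)
```

For this pmf the mean works out to 4.5 and the variance to 3.25, matching the intuition that weighting the 6 drags the mean above the fair-die value of 3.5.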
https://compucademy.net/discrete-probability-distributions-with-python/
Hey! I am new to Java programming and need some help with some homework; here are the requirements. Right now I am working on just setting up my arrays but it is not working. I read the size of the array from the user and then tell the user to enter the elements of the array. I am very confused and frustrated!! Please help if you can, thanks for all the advice in advance.

The assignment

Write a program that will provide methods for manipulating arrays. Write static methods within your main program class to do the following. Assume A, B are arrays of int.

- ReadArray(A) - reads int values from std input into the array A.
- PrintArray(A) - printlns the elements of A to std output, 8 per line.
- int Sum(A) - returns the sum of the elements in array A.
- int[] AddArrays(A, B) - returns a new array where each element is the sum of the corresponding elements in A and B.
- int DotProduct(A, B) - returns the dot product a1*b1 + a2*b2 + ...

The methods should handle arrays of any length. AddArrays and DotProduct should handle two arrays of different length by treating the shorter array as if it were padded with zeros out to the length of the longer array. The main method should prompt the user to enter two arrays and their values (prompt for number of elements, create array, call ReadArray). The program should then print each array and its sum, add the two arrays, print the sum of this new array, and print the dot product of the two original arrays.
    import java.util.Scanner;
    import java.io.*;

    public class AKL {

        //---------------------------------------------------------------------------
        // Method main
        //---------------------------------------------------------------------------
        public static void main(String[] args) throws IOException {
            int num = 0;
            int[] myInts1;
            int[] myInts2;

            Scanner scan = new Scanner(System.in);

            System.out.println("Please enter the size of ARRAY 1 : ");
            num = scan.nextInt();
            myInts1 = new int[num];

            System.out.println("Please enter the size of ARRAY 2 : ");
            int num1 = scan.nextInt();
            myInts2 = new int[num1];

            readarray(myInts1);
        } // ends main

        public static void readarray(int[] array) {
            Scanner scan = new Scanner(System.in);
            for (int index = 0; index < array.length; index++) {
                System.out.println("Please enter the elements for ARRAY1 : ");
                array[index] = scan.nextInt();
            }
        }
    } // end class

Edited by mike_2000_17: Fixed formatting
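For the part of the assignment the posted code hasn't reached yet, the zero-padding requirement for AddArrays and DotProduct is the only subtle bit: iterate to the longer length and substitute 0 past the end of the shorter array (for the dot product the padded terms contribute nothing, so the shorter length suffices). One possible sketch - the class name is made up, and only the method contracts come from the assignment:

```java
public class ArrayOps {

    // Element-wise sum; the shorter array is treated as padded
    // with zeros out to the length of the longer one.
    public static int[] addArrays(int[] a, int[] b) {
        int n = Math.max(a.length, b.length);
        int[] result = new int[n];
        for (int i = 0; i < n; i++) {
            int av = (i < a.length) ? a[i] : 0;
            int bv = (i < b.length) ? b[i] : 0;
            result[i] = av + bv;
        }
        return result;
    }

    // Dot product a1*b1 + a2*b2 + ...; terms past the end of the
    // shorter array are multiplied by zero, so they can be skipped.
    public static int dotProduct(int[] a, int[] b) {
        int n = Math.min(a.length, b.length);
        int sum = 0;
        for (int i = 0; i < n; i++) {
            sum += a[i] * b[i];
        }
        return sum;
    }
}
```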
https://www.daniweb.com/programming/software-development/threads/96810/interger-arrays-need-help
Corejava Interview,Corejava questions,Corejava Interview Questions,Corejava other otherwise they will be redundant and inefficient use of storage. An UPDATE statement can update multiple records by using a single statement CROSSFIRE O/R CROSSFIRE O/R ?CROSSFIRE O/R? is a product to generate Java Program instead.... * All of the JAVA programs to execute SQL(SELECT, INSERT, UPDATE, DELETE r how to code-updating some details n some details r unchanged how to code-updating some details n some details r unchanged i have... : bloodgroup : my requirement is to update the details of user. and i had written the update query,but with that query all the details are updating... my doubt CoreJava Project CoreJava Project Hi Sir, I need a simple project(using core Java, Swings, JDBC) on core Java... If you have please send to my account ios - r Corejava Interview,Corejava questions,Corejava Interview Questions,Corejava update a JTable - Java Beginners update a JTable how to update a JTable with Mysql data through user...; } } public Object getValueAt(int r, int c) { try { rs.absolute(r + 1); return rs.getObject(c + 1); } catch(SQLException e E-R diagram E-R diagram Hi,Hw to do draw E-R diagram for online movie ticket booking update a JTable - Java Beginners update a JTable i have tried your advice as how to update a JTable...){ e.printStackTrace(); return 0; } } public Object getValueAt(int r, int c) { try { rs.absolute(r + 1); return rs.getObject(c + 1); } catch R programming language. R programming language. i want to know about R-programming language. what is it? how to work on it? if possible provide me a ebook on the topic add a update button in the following jsp how to add a update button in the following jsp Once the excel from... to allow the user to update the excel? thanks, <%@page import="java.io.*"%>... = sheet.getPhysicalNumberOfRows(); for(int r = 0; r < rows; r How to survive google panda 4.0 update Panda 4.0 update? 
Thanks In this update of the Google Panda 4.0 many... 4.0 update: Check your website and remove the the web pages where content... navigable to user and minimize the bounce rate. Hope these changes will give you Java-Xml -Modify and Update Node - Java Beginners Java-Xml -Modify and Update Node test_final_1 2009-025T13:23...(); } } I want to modify the values of and in the above xml posted and update...-text-by-replacement.shtml Hope that it will be helpful for you. Thanks how to update xml from java - XML how to update xml from java hi, Im new to xml parsing and dont know much about. I need to modify the attribute val of a tag in a complex xml file...(); bufferedWriter.close(); } } We hope that this will help you in solving your problem how to update how to update conditional update update profile update profile coding for update profile update image update image sir, I want to do update image into database hi its the coding of create layout of chess in applet i hope u like it how to update combobx's selected value to database with respect toselected multiple checkboxes how to update combobx's selected value to database with respect toselected...='checkbox' id='r1' value="+r+" name='test' >"+"</td><td>"+r+"<...="update PROFILE set Route_Id='"+txt[t]+"' where Proid='"+lang[j update query update query using oops concept in php.. How to update the data from databse ? with program example want to ask how to update data for runtime selected multiple checkboxes want to ask how to update data for runtime selected multiple checkboxes ... and entered the values in the textbox then it should get update in the database.I have succesfully updated when I am clickinga single checkbox but I want to update Dynamically update the Label & set Bounds constant between 2 Label(Thread) Dynamically update the Label & set Bounds constant between 2 Label(Thread... 
extends javax.swing.JFrame { private int q; private String r...]); label1.setText(data[i]); button1[i].setText("Update"); label1.setBounds(110,100+i+i+i to update the information to update the information sir, i am working on library mgt project. front end is core java and backend is ms access.i want to open,update the information through form.please send me code earliar update database update database hi.. i want to know how the valuesof database can be updated in the jsf-jpa framework when the drop down button is clicked the data... that can be done there then by pressing the update buutton the value can be updated Update value Update value How to update value of database using hibernate ? Hi Samar, With the help of this code, you will see how can update database using hibernate. package net.roseindia.DAO; import to update the information update the information sir, i am working on library mgt project. front end is core java and backend is ms access.i want to open,update the information through form.please send me code earliar. Please visit Update - JDBC is what I used to update normally. It works. Please assist me. Thanks...("jdbc:odbc:Biu"); stat = con.prepareStatement("Update Biu SET itemcode...; Step1: Retrive the column value which one you want to update and store Update statement Update statement I create a access database my program When I click add button bata are adds to the my data base but when i click update button my database is not update I write this program using 3 differfnt notepad pages MY update jsp update jsp <%@ page language="java" contentType="text/html; charset=ISO-8859-1" pageEncoding="ISO-8859-1"%> <!DOCTYPE html PUBLIC...;/Controller"> <center> <input type="hidden" name="page" value="update"/> data update edit/update data and saved them into that table again here we r getting the problem with ut data and get data???????????? here we r getting the problem with ut data and get data???????????? 
From: VTK FAN (vtk_fan_at_[hidden]) Date: 2005-06-20 21:41:39

Hello,

I was wondering if anyone could help me with this issue. I am investigating using uBLAS as the basis of container classes for solving partial differential equations. I have a discretized mesh and solution vectors associated with the mesh, which I would like to store and also make use of BLAS-like operations for computing sums, norms, axpy ops, transforms, scaling, slicing, etc., and also be able to allocate, grow, shrink, and destroy these solution vectors during runtime. The uBLAS interface to ATLAS also interested me.

How easy is it to deallocate and grow/shrink uBLAS vectors? I could not find anything on the Documentation page, as many links seem to be broken. Is there a big memory penalty compared to STL's vector class or regular C arrays? Would the c_vector and c_matrix classes have any restrictions on use compared to other uBLAS containers? Is uBLAS being actively developed, and will it be maintained for the next few years?

And one dumb question: if I allocate a c_vector using new, how do I delete it without any memory leaks? For example, can I do:

```cpp
namespace mySolver = boost::numeric::ublas;

mySolver::c_vector *pressure_vector =
    new mySolver::c_vector(npoints_to_be_read);
...
... bunch of solver code ...
...
delete pressure_vector;
pressure_vector = NULL;
```

I want to prevent any leaks and was not sure how to allocate/delete pointers to these uBLAS templated classes. I was not able to locate any such examples on the Web using uBLAS in this fashion.

Responses would be much appreciated.

Thanks,
Srini
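The ownership question at the end is generic C++ rather than anything uBLAS-specific: a heap-allocated templated object is released with a plain delete, exactly as written above, but automatic storage (or a smart pointer) avoids the leak risk entirely. Below is a minimal sketch using std::vector as a stand-in for a resizable solution vector; the SolutionVector wrapper and its field names are hypothetical, and ublas::vector exposes a similar resize(n) interface (c_vector, by contrast, is a fixed-capacity container).

```cpp
#include <cstddef>
#include <memory>
#include <vector>

// Hypothetical solution-vector wrapper: it owns its storage, so no
// explicit delete is ever needed (RAII).
struct SolutionVector {
    std::vector<double> data;
    explicit SolutionVector(std::size_t n) : data(n, 0.0) {}
    void grow(std::size_t n)   { data.resize(n, 0.0); } // keeps old values
    void shrink(std::size_t n) { data.resize(n); }
};

std::size_t demo() {
    // Stack allocation: destroyed automatically at end of scope.
    SolutionVector pressure(100);
    pressure.grow(250);
    pressure.shrink(50);

    // If heap allocation is really required, prefer a smart pointer
    // over raw new/delete.
    auto heap_copy = std::make_unique<SolutionVector>(pressure.data.size());
    return heap_copy->data.size(); // unique_ptr frees it on return
}
```

Swapping boost::numeric::ublas::vector<double> for std::vector changes the container but not the ownership reasoning: let scope or a smart pointer manage the lifetime, and the leak question disappears.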
https://lists.boost.org/ublas/2005/06/0431.php
Yea. I'm not sure exactly what Rust does after main (possibly just process exit 0?). One thing is that this provides a code organization for users to define an "after main" behavior, so they can throw errors to the top level and then `?` them there, calling into the `terminate` they've defined locally.

I'm not sure if this is something people writing applications want (I don't know what the patterns are right now around what to do with errors once you've surfaced them to main), but I think we could meet those users' needs as well as making code examples nicer. After main is right here iirc:

Yeah, I thought of this. I was afraid that it would fail because in practice one tends to be returning many possible errors that all have to be coerced to `Box<Error>` (via a `From` impl that is induced as part of the `?` sugar). But we might be able to make it work at some point.

Hmm, yes, maybe not as clear as I thought at first. So for my tests I want them to mirror their production usage as closely as possible; this means using `?`. Because I am forced by the compiler to use the `fn() -> ()` signature, I am then forced to unwrap (not how it is used in prod) or pattern match explicitly (extremely tedious, easier to make a mistake with assert, and rarely used in prod).

Not to mention I can't even use `?` in example Rust code... which seems to me the biggest way to encourage new users not to unwrap, when it isn't present in every example for a Result-returning function...

Lastly, and this was less of an issue before `?` came into being, converting code in functions that use `?` for testing the implementation is annoying: you have to rewrite everything with unwrap.

Maybe I'm missing something, but my immediate reaction to this (given that we don't have `catch`) is to just wrap the part that returns a `Result` in a separate function.

```rust
fn run() -> Result<(), Box<Error>> {
    some_stuff()?
}

fn main() {
    run().unwrap() // or deal with the Result in some other way
}
```

This doesn't seem all that unergonomic to me, and avoids introducing some special case for `main()`. Or does everyone think this is too much?

Yes, this is the "do nothing" option, and it should be considered. Keeping the language simpler has significant advantages in 15+ years. Alternatively:

```rust
fn main() {
    (|| {
        some_stuff()?
    })().unwrap()
}
```

But that might be too much magic syntax.

When I first encountered Rust, I was annoyed that main() didn't take command-line arguments and return an exit code; it seemed gratuitously limited. When I learned how serious Rust was about cross-platform support, it made a lot of sense: for example, Unix gives each process an array of command-line arguments, but Windows gives each process a single big string. Unix lets each process return a `u8` exit code, while Windows exit codes are `u32`. Embedded platforms often don't have a concept of "arguments" or "exit codes" at all.

It seems dishonest (in a way the Rust stdlib has avoided dishonesty in the past) to give main() some particular signature when we know it may not be sensible or even possible on every platform, in the way that C and C++ have done. On the other hand, because of C, most platforms will have some story for command-line arguments and exit codes, so maybe it's not a terrible idea. Perhaps we can have per-platform main wrappers in the stdlib.
For example, `std::os::unix::main_wrapper()`:

```rust
fn<F> main_wrapper(f: F) -> ()
    where F: FnOnce(args: &[&str], environ: &[&str]) -> Result<u8, Error>
```

...and also `std::os::windows::main_wrapper()`:

```rust
fn<F> main_wrapper(f: F) -> ()
    where F: FnOnce(args: &str) -> Result<u32, Error>
```

...and maybe even `std::os::generic_main_wrapper()`:

```rust
fn<F> generic_main_wrapper(f: F) -> ()
    where F: FnOnce() -> Result<u8, Error>
```

...that does not promise your exit code will actually go anywhere, but it should be implementable everywhere.

...and then code examples could look like:

```rust
use std::os::unix;

fn main() {
    unix::main_wrapper(|_, _| {
        "sasquatch".parse::<u32>();
    })
}
```

That's a bit of extra boiler-plate, but hopefully not too difficult to explain or hand-wave in introductory texts... especially if the generic `main_wrapper()` is in the prelude.

Here's an alternative alias to `Throws<T>`:

```rust
fn main() -> Fallible<()> {
    try_something()?
}
```

or even:

```rust
fn main() -> Fallible {
    try_something()?
}
```

where `T: ()`. I believe that the meaning of `Fallible` should be clear as well as searchable for novices. An alternative to `catch` can be `recover`.

Yes, I like the name `Fallible`. I hadn't thought of the `()` default. That's nice, actually. =)

There is nothing dishonest going on here. Having a main() that returns a `Terminate` value is portable to all platforms. All platforms support `unwrap()`, and main() is already invoked from a wrapper; it just happens that the wrapper expects main() to return `()` right now.

FWIW, I brought up this general idea in makes that very easy, but both the manual and the macro versions require a new user to be aware of the issue before they start implementing main(). That initial cognitive overhead is the main thing we need to address.

I didn't know this, but I just checked and discovered that main can be called like any other function.
It seems to me that you should never do this. But since it is just a function, I'm a little concerned with a proposal that lets its return type be omitted unlike every other function.

We could call it from above like some parameterized `main: F where F: FnOnce() -> T, T: Terminate`, and both `()` and `Fallible` implement `Terminate`. You'd still have to explicitly write which return type you want.

I don't think you understood; this is a valid Rust program:

```rust
fn main() {
}

fn foo() {
    main()
}
```

If we allow users to return something other than `()` without providing a return type, as some of these proposals would have us, that could become a type error without the signature of main explicitly changing.

I'm suggesting we still require an explicit signature, just with more possibilities. So `fn main() {}` would still work as today, and `fn main() -> Fallible {}` would be used when you want to use the `?` operator. In the latter case, `foo()` has to update as well.

Sure. There are a lot of proposals being thrown around, and I'm raising a concern about some of the proposals.

An important point I didn't see anyone mention yet is what should actually happen at runtime when a program "returns an error from main" as opposed to panicking or returning `()`. I can think of two options:

1) Do what panicking in main does today.
2) Print the error value we got, then exit normally.

I am strongly in favor of not doing #1, because it would put a giant asterisk on the claim that "`?` and `Result` are better than `.unwrap()` and panics" if `?` does the same thing as panic in the simplest possible case. Plus, as useful as backtraces are in the general case, a real error value designed by other humans is likely to be far more useful to the person who needs to debug whatever just happened.

As for the details of how this should work, if we go back to how Niko framed the problem...
> I see two basic approaches to solving it:
>
> 1. Allow main() to return more kinds of things (in particular, results).
> 2. Change what `?` means when it is used in main().

As described in Niko's original post, these seem not so much different approaches as extreme ends of a spectrum, and I believe the optimal solution is probably somewhere in the middle. The "Fallible" suggestion made above, at least the way I interpreted it, is exactly what I'd want. The main() function does change signature, but only in the simplest possible way (there is exactly one new signature, only one type in that signature, no sigils besides "->", and no invisible/magic bits). The `?` semantics also change slightly since it has to convert to this "Fallible" type instead, but conceptually it's still doing the same thing. The possibility that "Fallible" is just one concrete type with no parameters or bounds or `impl` keyword seems like a really nice touch, since this is supposed to help novices.

Of course, all of that assumes it's really feasible for all "sane" error types that one might try to use `?` on in main() to be implicitly converted to a single Fallible type that can then be used to print all the information provided by that error. I think it is, because "implements the Error trait" seems like a good definition of "sane error type", and the conversion to Fallible could just be calling description() and cause() repeatedly to make a big String. Maybe the Fallible type is nothing more than a String with a special print method?
https://internals.rust-lang.org/t/rfc-mentoring-opportunity-permit-in-main/4600?page=2
Code Quality Comparison of Firebird, MySQL, and PostgreSQL

Get a comparison of three major projects at once (Firebird, MySQL, and PostgreSQL) in terms of interesting bugs and high code quality.

Today's article is somewhat unusual, if only because instead of reviewing one project, we'll be comparing three projects at once, looking for the one with the most interesting bugs and, which is of particular interest, the one with the highest code quality. The projects we are going to review are Firebird, MySQL, and PostgreSQL. So let's get started!

A Few Words About the Projects

Firebird

Firebird (FirebirdSQL) is an open-source SQL relational database management system that runs on Linux, Windows, macOS, and a variety of Unix platforms.

Additional information:

- Official website
- GitHub repository
- Stars on GitHub: 133
- Forks on GitHub: 51

MySQL

MySQL is an open-source relational database management system (RDBMS). MySQL is typically used as a server for local and remote clients, but the distribution also includes an embedded MySQL server library, which makes it possible to run a MySQL server inside a client application. MySQL supports multiple table types, which makes it a very flexible tool: users can choose between MyISAM tables, which support full-text search, and InnoDB tables, which support transactions at the level of individual records. MySQL also comes with a special table type called EXAMPLE, which is used to demonstrate the principles of creating new table types. Thanks to its open architecture and GPL licensing, new table types are regularly added to MySQL.

Additional information:

- Official website
- GitHub repository
- Stars on GitHub: 2179
- Forks on GitHub: 907

PostgreSQL

PostgreSQL is an object-relational database management system (ORDBMS) developed by the PostgreSQL Global Development Group, a diverse group of many companies and individual contributors.
It is free and open-source, released under the terms of the PostgreSQL License, a permissive software license.

Additional information:

- Official website
- GitHub repository mirror
- Stars on GitHub: 3260
- Forks on GitHub: 1107

PVS-Studio

I was using the static code analyzer PVS-Studio to detect bugs. PVS-Studio is an analyzer for source code written in C, C++, and C# which helps reduce software development costs due to early detection of bugs, defects, and security issues in programs' source code. It runs on Windows and Linux.

Download links:

Because each of the three projects is fairly easy to build and includes .sln files (either available right from the start or generated through CMake), the analysis itself becomes quite a trivial task: you just need to start a check in the PVS-Studio plugin for Visual Studio.

Comparison Criteria

Before starting our discussion, we have to decide what comparison criteria to use. This is one of the primary concerns of this article.

Why "Head-On" Comparison Is Not a Good Idea

A "head-on" comparison based on the number of error messages produced by the analyzer (or rather the number-of-messages to number-of-LOC ratio) for each project is not a good idea, even though it's the least costly way. Why so?

Take the PostgreSQL project, for instance. It triggers 611 high-certainty-level GA warnings, but if you filter these warnings by the code of the PVS-Studio diagnostic rule (V547) and by the part of the message containing `ret < 0`, you'll see that there are 419 such warnings! That's too many, isn't it? It seems all these messages come from a single source, such as a macro or automatically generated code. Well, the comments at the beginning of the files at which the warnings were issued prove that our assumption is correct:

```cpp
/* This file was generated automatically by the Snowball to ANSI C compiler */
```

Now that you know that the code was generated automatically, you have two options:

- Suppress all these warnings at the generated code, as they are not interesting.
This cuts the total number of messages (GA, Lvl1) by as much as 69%!

- Accept that bugs in automatically generated code are still bugs and try to do something about them (say, fix the code-generating script). In this case, the number of messages remains the same.

Another problem is errors found in third-party components used in the projects. Again, you have to choose between the same two options:

- Pretend these bugs are no concern of yours (but will the users agree with that?).
- Take on the responsibility for these bugs.

These are just a couple of examples of how you have to make a choice that may affect (sometimes drastically) the number of warnings to deal with.

An Alternative Way

Let's agree right off to leave out messages of the third (low-certainty) level. These issues are not the ones worth paying attention to in the first place. Sure, some of them might be interesting, but it's better to ignore them when you write articles and when you are only getting started with static analysis.

This review is not a full-fledged comparison, as such a comparison would be too tedious for many reasons. For one thing, it would require preliminary configuration of the analyzer for each of the projects, as well as looking through and examining hundreds of messages after the check. It all takes too much time, and there's doubt whether such an undertaking is really worth it. Instead, I will look through the logs for each of the projects, pick the most interesting bugs, comment on them, and check the other two projects for similar issues.

There's one more thing I should mention. We've started to pay attention to security issues lately and even posted an article titled "How Can PVS-Studio Help in the Detection of Vulnerabilities?" Since one of today's participants, MySQL, had been mentioned in that article, I was curious to see if PVS-Studio would detect any of those specific code patterns.
No gimmicks: we'll just additionally look for warnings similar to those discussed in the article above. So, again, I'll be evaluating the code quality based on the following criteria:

- First, I will scan each of the three logs for the same warnings as discussed in the above-mentioned article on security issues. The idea is simple: if you know that a certain code pattern could be a vulnerability (even though not all the time), then you should take a closer look at it.
- Then I will look through the GA warnings of the first two certainty levels, pick the most interesting ones, and check if the other projects have triggered similar warnings.

As we proceed, I'll be giving demerit points to each project, so the one with the fewest points will be the winner (within the restrictions discussed earlier). There are some specific details, of course, but I'll be commenting on these along the way and at the end of the article. Here we go!

Review of Bugs

Total Analysis Results

The table below shows the total analysis results "as is", i.e. with no false positives suppressed, without any filtering by folders, and so on. Note that the warnings refer only to the General Analysis set.

This table, however, is a poor basis for drawing any conclusions about the code quality. As I already said, there are a number of reasons:

- No preliminary configuration of the analyzer.
- No false-positive suppression.
- Different sizes of the codebases.
- We were making changes to the analyzer while working on this article, so the "before" and "after" results may be slightly different.

As for the density of warnings (not bugs!), i.e. the ratio between the number of messages and LOC, as measured without preliminary configuration, it is roughly the same for Firebird and PostgreSQL and a bit higher for MySQL. But let's not jump to conclusions because, you know, the devil is in the detail.
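As a side note, the warning-density figure mentioned above is simply the number of messages normalized by lines of code. A tiny sketch of the calculation, using entirely hypothetical counts (the article's own results table did not survive into this copy):

```cpp
// Warnings per 1000 lines of code (KLOC): a common way to normalize
// analyzer output across codebases of different sizes.
double density_per_kloc(long warnings, long lines_of_code) {
    return 1000.0 * static_cast<double>(warnings)
                  / static_cast<double>(lines_of_code);
}

// e.g. density_per_kloc(500, 1000000) == 0.5 warnings/KLOC
```

The point of normalizing is exactly the caveat the article raises: raw message counts mean little when one codebase is several times larger than another.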
Troubles With Clearing Private Data

The V597 diagnostic is triggered when a call to the memset function that is meant to clear data can be removed by the compiler during optimization. As a result, private data might remain uncleared. For details, see the documentation on the diagnostic.

Neither Firebird nor PostgreSQL triggered any messages of this type, but MySQL did. So, it is MySQL that the following example is taken from:

```cpp
extern "C" char *
my_crypt_genhash(char *ctbuffer,
                 size_t ctbufflen,
                 const char *plaintext,
                 size_t plaintext_len,
                 const char *switchsalt,
                 const char **params)
{
  int salt_len;
  size_t i;
  char *salt;
  unsigned char A[DIGEST_LEN];
  unsigned char B[DIGEST_LEN];
  unsigned char DP[DIGEST_LEN];
  unsigned char DS[DIGEST_LEN];
  ....
  (void) memset(A, 0, sizeof (A));
  (void) memset(B, 0, sizeof (B));
  (void) memset(DP, 0, sizeof (DP));
  (void) memset(DS, 0, sizeof (DS));

  return (ctbuffer);
}
```

PVS-Studio warnings:

- V597 The compiler could delete the 'memset' function call, which is used to flush 'A' buffer. The RtlSecureZeroMemory() function should be used to erase the private data. crypt_genhash_impl.cc 420
- V597 The compiler could delete the 'memset' function call, which is used to flush 'B' buffer. The RtlSecureZeroMemory() function should be used to erase the private data. crypt_genhash_impl.cc 421
- V597 The compiler could delete the 'memset' function call, which is used to flush 'DP' buffer. The RtlSecureZeroMemory() function should be used to erase the private data. crypt_genhash_impl.cc 422
- V597 The compiler could delete the 'memset' function call, which is used to flush 'DS' buffer. The RtlSecureZeroMemory() function should be used to erase the private data. crypt_genhash_impl.cc 423

The analyzer detected a function with as many as four buffers (!) that must be forcibly cleared. However, the function could fail to do so, causing the data to remain in memory "as is."
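To make the mechanism concrete: a plain memset whose buffer is never read again is a dead store that the optimizer may drop, while stores through a volatile pointer must be kept. The sketch below shows one such fallback; this is my own illustration, not MySQL's code, and the fixes the analyzer actually recommends are RtlSecureZeroMemory() on Windows or C11's memset_s.

```cpp
#include <cstddef>

// Clearing through a volatile pointer: each store is an observable
// side effect, so the compiler is not allowed to elide it the way it
// may elide a trailing memset() on a buffer that is never read again.
void secure_zero(void *p, std::size_t n) {
    volatile unsigned char *vp = static_cast<volatile unsigned char *>(p);
    while (n--)
        *vp++ = 0;
}

// Small self-check: fill a "key" buffer, wipe it, verify every byte.
bool demo_clears() {
    unsigned char key[16];
    for (std::size_t i = 0; i < sizeof(key); ++i)
        key[i] = static_cast<unsigned char>(i + 1);
    secure_zero(key, sizeof(key));
    for (std::size_t i = 0; i < sizeof(key); ++i)
        if (key[i] != 0)
            return false;
    return true;
}
```

The volatile-pointer loop is slower than an optimized memset, but for wiping keys and password buffers the guarantee matters more than the speed.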
Since buffers A, B, DP, and DS are not used later on, the compiler is allowed to remove the call to the memset function, because such an optimization does not affect the program's behavior from the viewpoint of the C/C++ language. For more information about this issue, see the article "Safe Clearing of Private Data."

The remaining messages are no different, so I'll just list them:

- V597 The compiler could delete the 'memset' function call, which is used to flush 'table_list' object. The RtlSecureZeroMemory() function should be used to erase the private data. sql_show.cc 630
- V597 The compiler could delete the 'memset' function call, which is used to flush 'W' buffer. The RtlSecureZeroMemory() function should be used to erase the private data. sha.cpp 413
- V597 The compiler could delete the 'memset' function call, which is used to flush 'W' buffer. The RtlSecureZeroMemory() function should be used to erase the private data. sha.cpp 490
- V597 The compiler could delete the 'memset' function call, which is used to flush 'T' buffer. The RtlSecureZeroMemory() function should be used to erase the private data. sha.cpp 491
- V597 The compiler could delete the 'memset' function call, which is used to flush 'W' buffer. The RtlSecureZeroMemory() function should be used to erase the private data. sha.cpp 597
- V597 The compiler could delete the 'memset' function call, which is used to flush 'T' buffer. The RtlSecureZeroMemory() function should be used to erase the private data. sha.cpp 598

Here's a more interesting case.

```cpp
void win32_dealloc(struct event_base *_base, void *arg)
{
  struct win32op *win32op = arg;
  ....
  memset(win32op, 0, sizeof(win32op));
  free(win32op);
}
```

PVS-Studio warning: V597: The compiler could delete the 'memset' function call, which is used to flush 'win32op' object. The RtlSecureZeroMemory() function should be used to erase the private data.
win32.c 442

It is similar to the previous example, except that after the memory block is cleared, the pointer is passed to the free function. But even then, the compiler is still allowed to remove the call to memset, leaving only the call to free (which releases the memory block without clearing it). As a result, the data that was to be cleared remains in memory. For more information, see the above-mentioned article.

Assigning demerit points: this is quite a serious error, even more so because there are three instances of it. 3 demerit points go to MySQL.

No Check for the Pointer Returned by Malloc and Other Similar Functions

All three projects triggered V769 warnings.

- Firebird: high certainty - 0; medium certainty - 0; low certainty - 9
- MySQL: high certainty - 0; medium certainty - 13; low certainty - 103
- PostgreSQL: high certainty - 1; medium certainty - 2; low certainty - 24

Since we agreed to ignore third-level warnings, we continue without Firebird (so much the better for it). All three warnings in PostgreSQL proved irrelevant too. This leaves only MySQL: it also triggered a few false positives, but some of the warnings are worth looking at.

```cpp
bool Gcs_message_stage_lz4::apply(Gcs_packet &packet)
{
  ....
  unsigned char *new_buffer =
    (unsigned char*) malloc(new_capacity);
  unsigned char *new_payload_ptr =
    new_buffer + fixed_header_len + hd_len;

  // compress payload
  compressed_len= LZ4_compress_default(
    (const char*)packet.get_payload(),
    (char*)new_payload_ptr,
    static_cast<int>(old_payload_len),
    compress_bound);
  ....
}
```

PVS-Studio warning: V769: The 'new_buffer' pointer in the 'new_buffer + fixed_header_len' expression could be nullptr. In such case, resulting value will be senseless and it should not be used. Check lines: 74, 73. gcs_message_stage_lz4.cc 74

If it fails to allocate the requested memory block, the malloc function returns a null pointer, which could then be stored in the new_buffer variable.
Next, as the new_payload_ptr variable is initialized, the value of the new_buffer pointer is added to the values of the fixed_header_len and hd_len variables. This is a point of no return for new_payload_ptr: if later on (say, in another function) we decide to check it for NULL, such a check won't help. No need to tell you what the implications are. So, it would be wiser to make sure that new_buffer is non-null before initializing new_payload_ptr.

You may argue that since malloc has failed to allocate the requested memory block, there's not much sense in checking its return value for NULL either: the application can't continue its normal work anyway, so why not let it crash the next time it uses the pointer? Since quite a lot of developers stick to this approach, it can be called legitimate, but is it right? After all, you could try to somehow handle that case to save the data or have the application crash in a "softer" way. Besides, this approach might lead to security issues: if the application happens to handle another memory block (null pointer + offset) rather than the null pointer itself, it may well damage some data. All this makes your program even more vulnerable. Are you sure you want it that way?

Anyway, you have to decide for yourself what the pros and cons are and which choice is right. I recommend the second approach; the V769 diagnostic will help you detect those issues. However, if you are sure that such functions can never return NULL, tell the analyzer about it so you don't get the same warnings again. See the article "Additional Diagnostics Configuration" to find out how.

Assigning demerit points: considering everything said above, MySQL is given 1 demerit point.

The Use of a Potential Null Pointer

Warnings of this type (diagnostic V575) were found in each of the three projects. This is an example from Firebird (medium certainty):

```cpp
static void write_log(int log_action, const char* buff)
{
  ....
  log_info* tmp = static_cast<log_info*>(malloc(sizeof(log_info)));
  memset(tmp, 0, sizeof(log_info));
  ....
}
```

PVS-Studio warning: V575: The potential null pointer is passed into 'memset' function. Inspect the first argument. Check lines: 1106, 1105. iscguard.cpp 1106

This defect is similar to the previous one: there is no check of the return value of the malloc function. If it fails to allocate the requested block of memory, malloc will return a null pointer, which will then be passed to the memset function. Here is a similar example from MySQL:

```cpp
Xcom_member_state::Xcom_member_state(....)
{
  ....
  m_data_size= data_size;
  m_data= static_cast<uchar *>(malloc(sizeof(uchar) * m_data_size));
  memcpy(m_data, data, m_data_size);
  ....
}
```

PVS-Studio warning: V575: The potential null pointer is passed into 'memcpy' function. Inspect the first argument. Check lines: 43, 42. gcs_xcom_state_exchange.cc 43

This is similar to what we saw in Firebird. Just to make it clear, there are some fragments of code where the value returned by malloc is checked for inequality to null. The following is a similar fragment from PostgreSQL:

```cpp
static void
ecpg_filter(const char *sourcefile, const char *outfile)
{
  ....
  n = (char *) malloc(plen);
  StrNCpy(n, p + 1, plen);
  ....
}
```

PVS-Studio warning: V575 The potential null pointer is passed into 'strncpy' function. Inspect the first argument. Check lines: 66, 65. pg_regress_ecpg.c 66

MySQL and PostgreSQL, however, triggered a few high-certainty-level warnings, which are of more interest. An example from MySQL:

```cpp
View_change_event::View_change_event(char* raw_view_id)
  : Binary_log_event(VIEW_CHANGE_EVENT),
    view_id(), seq_number(0), certification_info()
{
  memcpy(view_id, raw_view_id, strlen(raw_view_id));
}
```

PVS-Studio warning: V575 The 'memcpy' function doesn't copy the whole string. Use 'strcpy / strcpy_s' function to preserve terminal null. control_events.cpp 830

The memcpy function is used to copy the string from raw_view_id to view_id; the number of bytes to copy is calculated using the strlen function. The problem here is that strlen ignores the terminating null character, so the string is copied without it. If you then don't add it by hand, other string functions will not be able to handle view_id properly. To ensure correct copying of the string, use strcpy / strcpy_s.

Now, the following fragment from PostgreSQL looks very much the same:

```cpp
static int
PerformRadiusTransaction(char *server, char *secret, char *portstr,
                         char *identifier, char *user_name, char *passwd)
{
  ....
  uint8 *cryptvector;
  ....
  cryptvector = palloc(strlen(secret) + RADIUS_VECTOR_LENGTH);
  memcpy(cryptvector, secret, strlen(secret));
}
```

PVS-Studio warning: V575: The 'memcpy' function doesn't copy the whole string. Use 'strcpy / strcpy_s' function to preserve terminal null. auth.c 2956

There is, however, an interesting difference from the previous example. The cryptvector variable is of type uint8*. While uint8 is an alias for unsigned char, the programmer seems to be using it to explicitly indicate that this data is not meant to be handled as a string; so, given the context, this operation is valid and is not as suspicious as the previous case. Some of the reported fragments, however, don't look that safe.
Assigning demerit points: 1 demerit point goes to Firebird and 3 demerit points go to PostgreSQL and MySQL each (one point for a medium-certainty warning, two points for a high-certainty one). Potentially Unsafe Use of Formatted-Output Functions Only Firebird triggered a few V618 warnings. Take a look at this example: static const char* const USAGE_COMP = " USAGE IS COMP"; static void gen_based( const act* action) { .... fprintf(gpreGlob.out_file, USAGE_COMP); .... } PVS-Studio warning: V618: It's dangerous to call the 'fprintf' function in such a manner, as the line being passed could contain format specification. The example of the safe code: printf("%s", str); cob.cpp 1020 What alerted the analyzer is the fact that formatted-output function fprintf is used, while the string is written directly, without using the format string and related specifiers. This may be dangerous and even cause a security issue (see CVE-2013-4258) if the input string happens to contain format specifiers. In this case, though, the USAGE_COMP string is explicitly defined in the source code and doesn't include any format specifiers, so fprintf can be used safely here. The same applies to the rest cases: the input strings are hard-coded and have no format specifiers. Assigning demerit points. Considering all said above, I'm not giving any demerit points to Firebird. Other Warnings Mentioned in the Article on Vulnerabilities None of the projects triggered any V642 and V640 warnings — they all did well. Suspicious Use of Enumeration Elements An example from MySQL: enum wkbType { wkb_invalid_type= 0, wkb_first= 1, wkb_point= 1, wkb_linestring= 2, wkb_polygon= 3, wkb_multipoint= 4, wkb_multilinestring= 5, wkb_multipolygon= 6, wkb_geometrycollection= 7, wkb_polygon_inner_rings= 31, wkb_last=31 }; bool append_geometry(....) { .... if (header.wkb_type == Geometry::wkb_multipoint) .... else if (header.wkb_type == Geometry::wkb_multipolygon) .... else if (Geometry::wkb_multilinestring) .... 
else DBUG_ASSERT(false); .... } PVS-Studio warning: V768: The enumeration constant 'wkb_multilinestring' is used as a variable of a Boolean-type. item_geofunc.cc 1887 The message actually says it all. Two of the conditional expressions compare header.wkb_type with the elements of the Geometry enumeration, while the entire third expression is itself an enumerator. Since Geometry::wkb_multilinestring has the value 5, the body of the third conditional statement will execute every time the previous two checks fail. Therefore, the else-branch, containing the call to the DBUG_ASSERT macro, will never be executed. This suggests that the third conditional expression was meant to look like this: header.wkb_type == Geometry::wkb_multilinestring What about the rest? PostgreSQL didn't trigger any warnings of this type, while Firebird triggered as many as nine. Those, however, are all one level less critical (medium certainty), and the detected pattern is different too. The V768 diagnostic detects the following bug patterns: - High certainty: enumeration members are used as Boolean expressions. - Medium certainty: variables of enumeration type are used as Boolean expressions. While there is no excuse for first-level warnings, second-level ones leave room for debate. For example, this is what most cases look like: enum att_type { att_end = 0, .... }; void fix_exception(...., att_type& failed_attrib, ....) { .... if (!failed_attrib) .... } PVS-Studio warning: V768: The variable 'failed_attrib' is of enum type. It is odd that it is used as a variable of a Boolean-type. restore.cpp 8580 The analyzer finds it suspicious that the failed_attrib variable is checked for the value att_type::att_end in a way like that. If you ask me, I'd prefer an explicit comparison with the enumerator, yet I can't call this code incorrect. True, I don't like this style (and neither does the analyzer), but it's still legitimate. However, two fragments look more suspicious. 
Both have the same pattern, so we'll discuss only one of them.
namespace EDS {
  ....
  enum TraScope {traAutonomous = 1, traCommon, traTwoPhase};
  ....
}

class ExecStatementNode : ....
{
  ....
  EDS::TraScope traScope;
  ....
};

void ExecStatementNode::genBlr(DsqlCompilerScratch* dsqlScratch)
{
  ....
  if (traScope)
  ....
  ....
}
PVS-Studio warning: V768: The variable 'traScope' is of enum type. It is odd that it is used as a variable of a Boolean-type. stmtnodes.cpp 3448
This example is similar to the previous one: the programmer is again testing an enumeration variable as a Boolean expression, i.e. implicitly comparing traScope against zero. However, unlike the previous example, there is no enumerator with the value '0' here, which makes this code more suspicious.
Now that we've started talking about medium-certainty warnings, I should add that ten such messages were issued for MySQL, as well.
Assigning demerit points: Firebird is given 1 demerit point and MySQL is given 2 points.

Incorrect Determination of Memory-Block Size

Now, here's another interesting fragment of code. Note that we already saw it when discussing the problem with the clearing of private data.
struct win32op {
  int fd_setsz;
  struct win_fd_set *readset_in;
  struct win_fd_set *writeset_in;
  struct win_fd_set *readset_out;
  struct win_fd_set *writeset_out;
  struct win_fd_set *exset_out;
  RB_HEAD(event_map, event_entry) event_root;
  unsigned signals_are_broken : 1;
};

void win32_dealloc(struct event_base *_base, void *arg)
{
  struct win32op *win32op = arg;
  ....
  memset(win32op, 0, sizeof(win32op));
  free(win32op);
}
PVS-Studio warning: V579: The memset function receives the pointer and its size as arguments. It is possibly a mistake. Inspect the third argument. win32.c 442
Note the third argument in the call to the memset function. The sizeof operator returns the size of its argument in bytes, but here its argument is a pointer, so it returns the size of the pointer rather than the size of the structure.
This will result in incomplete memory clearing even if the compiler doesn't optimize away the call to memset. The moral is that you should choose variables' names carefully and try to avoid using similar-looking names. It's not always possible, so pay special attention to such cases. A lot of errors detected by diagnostic V501 in C/C++ projects and V3001 in C# projects stem from this variable-naming issue.
No V579 warnings were issued for the other two projects.
Assigning demerit points: MySQL is given 2 points.
Another similar bug was also found in MySQL.
typedef char Error_message_buf[1024];

const char* get_last_error_message(Error_message_buf buf)
{
  int error= GetLastError();

  buf[0]= '\0';
  FormatMessage(FORMAT_MESSAGE_FROM_SYSTEM,
                NULL, error, MAKELANGID(LANG_NEUTRAL, SUBLANG_DEFAULT),
                (LPTSTR)buf, sizeof(buf), NULL
                );
  return buf;
}
PVS-Studio warning: V511 The sizeof() operator returns size of the pointer, and not of the array, in 'sizeof (buf)' expression. common.cc 507
Error_message_buf is an alias for an array of 1024 elements of type char. There's one crucial thing to keep in mind: even if a function signature is written like this:
const char* get_last_error_message(char buf[1024])
buf is still a pointer, while the array size is only a hint to the programmer. This means that the sizeof(buf) expression works with the pointer here, not the array. This results in passing an incorrect buffer size to the function — four or eight bytes instead of 1,024.
Again, no warnings of this type in Firebird and PostgreSQL.
Assigning demerit points: MySQL is given 2 points.

Missing 'throw' Keyword

Here's another interesting bug — this time in... MySQL again. The programmer creates an object of class std::runtime_error but doesn't use it in any way. They obviously meant to throw an exception but forgot to write the throw keyword. As a result, this case (active_connection == nullptr) can't be handled as expected.
Neither Firebird nor PostgreSQL triggered any warnings of this type.
Assigning demerit points. 2 demerit points are given to MySQL.

Calling the Wrong Memory-Deallocation Operator

The following example is taken from Firebird.
class Message
{
  ....
  void createBuffer(Firebird::IMessageMetadata* aMeta)
  {
    unsigned l = aMeta->getMessageLength(&statusWrapper);
    check(&statusWrapper);
    buffer = new unsigned char[l];
  }
  ....
  ~Message()
  {
    delete buffer;
    ....
  }
  .....
  unsigned char* buffer;
  ....
};
PVS-Studio warning: V611 The memory was allocated using 'new T[]' operator but was released using the 'delete' operator. Consider inspecting this code. It's probably better to use 'delete [] buffer;'. Check lines: 101, 237. message.h 101
A block of memory for the buffer (pointed to by the buffer pointer, a member of class Message) is allocated in a dedicated method, createBuffer, using the new[] operator. The class destructor, however, deallocates that block using the scalar delete operator instead of delete[].
No errors of this type were found in MySQL or PostgreSQL.
Assigning demerit points: 2 demerit points go to Firebird.

Summing It All Up

Summing up the demerit points, we get the following:
- Firebird: 1 + 1 + 2 = 4 points
- MySQL: 3 + 1 + 2 + 2 + 2 + 2 = 12 points
- PostgreSQL: 3 points
Remember: The fewer points, the better. And if you ask me (a person with a wicked taste), I'd prefer... MySQL! It has the most interesting bugs and it's the leader, which makes it a perfect choice for analysis!
Firebird and PostgreSQL are trickier. On the one hand, even a one-point margin counts; on the other hand, it's quite a small difference, especially because that point was given for a V768 warning of the medium-certainty level... But then again, the codebase of PostgreSQL is way larger, yet it triggered four hundred warnings on its automatically generated code...
Anyway, to figure out which of the two projects, Firebird or PostgreSQL, is better, we'd have to do a more thorough comparison. For now, I've put them on the same podium step so no one is offended. Maybe one day we'll compare them again more carefully, but that will be quite a different story...
So, the code-quality rankings are as follows:
- 1st place: Firebird and PostgreSQL.
- 2nd place: MySQL.
Please remember that any review or comparison, including this one, is subjective. Different approaches may produce different results (though this is mostly true for Firebird and PostgreSQL, not for MySQL).
So what about static analysis? I hope you are convinced now that it is useful for detecting defects of various types. Want to find out if your codebase has any of those bugs? Then it's the right time to try PVS-Studio! You write perfectly clean code? Then why not check your colleagues' code?
Published at DZone with permission of Sergey Vasiliev. See the original article here.
Opinions expressed by DZone contributors are their own.
My Preferences

The JavaFX 1.2 SDK provides many useful utility classes, such as the Properties class, used to access and store name/value pairs, or the Storage class, used to store data locally on the client system. So far the JavaFX programming language does not support hash tables for storing data of any type, but you can always use the Properties class for this purpose.
For a start, let's create methods to put and get numbers. For example,

public function put(key: String, value: Number) {
    put(key, value.toString())
}

public function get(key: String, default: Number) {
    var value = get(key);
    if (value != null) try {
        return Number.valueOf(value)
    } catch (exception) {
    }
    default
}

Note that the put method converts a number into a string and the get method tries to convert a string back into a number. For primitive data types, this is done rather easily. For more complex cases, you can develop your own implementation of the Converter interface and use it to convert an object into a string and vice versa. Let it be your homework exercise.
Now let's add the ability to store properties automatically. I decided that at the moment of initialization the class should read all properties from the storage, and put them back when the application terminates. This task is very simple.

var storage: Storage;

postinit {
    storage = Storage { source: source }
    if (storage.resource.readable) {
        load(storage.resource.openInputStream())
    }
    FX.addShutdownAction(store)
}

function store() {
    if (storage.resource.writable) {
        store(storage.resource.openOutputStream(true))
    }
}

Consider an example of how to use the Preferences class. Create a simple application that stores the position and size of a window. Launch this application, move the window, resize it, and then close the application. Launch the same application again: you will see that the window is located where you left it in the previous session. It is convenient, isn't it?
class PersistentStage extends Stage {
    def preferences = Preferences {
        source: "bounds"
    } on replace {
        def s = Screen.primary.visualBounds;
        width = preferences.get("W", s.width / 2);
        height = preferences.get("H", s.height / 2);
        x = preferences.get("X", s.minX + (s.width - width) / 2);
        y = preferences.get("Y", s.minY + (s.height - height) / 2);
    }
    override var x on replace {preferences.put("X", x )}
    override var y on replace {preferences.put("Y", y )}
    override var width on replace {preferences.put("W", width )}
    override var height on replace {preferences.put("H", height)}
}
Later I will improve the Application class from the previous post by adding support for properties storage.
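The typed put/get idea above (store everything as strings, convert on retrieval, and fall back to a supplied default when conversion fails) is not specific to JavaFX. As an illustration only, here is a minimal Python sketch of the same pattern; the class and method names are hypothetical and not part of any JavaFX or Python API:

```python
# A language-agnostic sketch of the Preferences pattern:
# values are stored as strings and converted back on get(),
# using the type of the default as the "Converter".
class Preferences:
    def __init__(self):
        self._store = {}  # string -> string

    def put(self, key, value):
        self._store[key] = str(value)

    def get(self, key, default):
        raw = self._store.get(key)
        if raw is None:
            return default
        try:
            # convert back using the type of the supplied default
            return type(default)(raw)
        except (TypeError, ValueError):
            return default

p = Preferences()
p.put("W", 640.0)
assert p.get("W", 800.0) == 640.0   # stored value wins over the default
assert p.get("H", 480.0) == 480.0   # missing key falls back to the default
```

A real implementation would also need load/store methods against a file, mirroring the postinit/shutdown hooks in the JavaFX version.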
RPC::ExtDirect::Event - The way to pass data to the client side

use RPC::ExtDirect;
use RPC::ExtDirect::Event;

sub foo : ExtDirect( pollHandler ) {
    my ($class) = @_;

    # Do something good, collect results to $good_data
    my $good_data = { ... };

    # Do something bad, collect results to $bad_data
    my $bad_data = [ ... ];

    # Return the data
    return (
             RPC::ExtDirect::Event->new('good', $good_data),
             RPC::ExtDirect::Event->new('bad',  $bad_data ),
           );
}

This module implements the Event object that is used to return events or some kind of data from EventProvider handlers to the client side.
Data can be anything that is serializable to JSON. No checks are made, and it is assumed that the client side can understand the format of the data sent with Events.
Note that by default JSON will blow up if you try to feed it a blessed object as data payload, and for very good reason: it is not obvious how to serialize a self-contained object. Each case requires specific handling, which is not feasible in a framework like this; therefore no effort was made to support serialization of blessed objects.
If you know that your object is nothing more than a hash containing simple scalar values and/or structures of scalar values, create a copy like this:

my $hashref = {};
@$hashref{ keys %$object } = values %$object;

But in reality, it is almost always not as simple as this.
Creates a new Event object with event $name and some $data. Not intended to be called directly, provided for duck type compatibility with Exceptions and Request.
Returns an Event hashref in the format supported by the Ext.Direct client stack. Not intended to be called directly.
There are no known bugs in this module.
Alexander Tokarev <tokarev@cpan.org>
This module is free software; you can redistribute it and/or modify it under the same terms as Perl itself. See perlartistic.
This is a bit of a tangent, but for some crazy reason I wanted to convert some text to audio so I could listen to it while I drive. A quick Google search left me without any freeware that could handle the 53-page document; there are some cool websites that do text to mp3, like vozme and YAKiToMe!, but they didn't convert the whole document. I then found pyTTS, a Python package that serves as a wrapper to the Microsoft Speech API (SAPI), which has been in version 5 since 2000. But I didn't easily find a version of pyTTS for Python 2.6. So I decided to see if I could roll my own.
As it turns out, getting Python to talk using SAPI is relatively easy. Reading a plain text file can be done in a few lines.

from comtypes.client import CreateObject

infile = "c:/temp/text.txt"

engine = CreateObject("SAPI.SpVoice")

f = open(infile, 'r')
theText = f.read()
f.close()

engine.speak(theText)

And it wasn't that much more to have it write out a .wav file:

from comtypes.client import CreateObject

engine = CreateObject("SAPI.SpVoice")
stream = CreateObject("SAPI.SpFileStream")
# comtypes generates the SpeechLib module once a SAPI object has been created,
# so this import has to come after the CreateObject calls
from comtypes.gen import SpeechLib

infile = "c:/temp/text.txt"
outfile = "c:/temp/text4.wav"

stream.Open(outfile, SpeechLib.SSFMCreateForWrite)
engine.AudioOutputStream = stream

f = open(infile, 'r')
theText = f.read()
f.close()

engine.speak(theText)

stream.Close()

And with that chunk of code, I was able to convert my 54-page document into a 4-hour-long .wav file (over 600 MB) that I used another software package to convert to .mp3 (200 MB). The voice is a bit robotic but not too bad; I just hope the content that I converted (a database specification standard) doesn't put me to sleep while I drive.
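One follow-up idea: a single Speak call over a 54-page document produces one huge .wav file. Splitting the text first makes it easy to write a series of smaller files instead. The helper below is a plain-Python sketch (it does not touch SAPI, and the 2000-character default is an arbitrary guess at a comfortable chunk size):

```python
import re

def chunk_text(text, max_chars=2000):
    """Split text into chunks of at most max_chars characters,
    breaking on sentence boundaries where possible (a single
    sentence longer than max_chars is kept whole)."""
    # Crude sentence split: break after ., ! or ? followed by whitespace
    sentences = re.split(r'(?<=[.!?])\s+', text)
    chunks, current = [], ""
    for s in sentences:
        # Start a new chunk if adding this sentence would overflow
        if current and len(current) + len(s) + 1 > max_chars:
            chunks.append(current)
            current = s
        else:
            current = (current + " " + s) if current else s
    if current:
        chunks.append(current)
    return chunks
```

Each chunk could then be fed through the SpFileStream code above to produce a numbered series of .wav files rather than one monolithic recording.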
Rating: 2
Article information
Article relates to: RadControls for Silverlight
Created by: Kiril Stanoev
Last modified: August 25, 2008

Note: In this particular project the test page is generated dynamically, but you are free to attach a website.
2. Once the project is loaded, add a reference to the Telerik.Windows.Controls.dll.
3. Open Page.xaml and add an xmlns referencing the previously added dll.

xmlns:telerik="clr-namespace:Telerik.Windows.Controls;assembly=Telerik.Windows.Controls"

Build the project and you are ready to open Page.xaml in Expression Blend.
4. When Blend loads, click the "Asset Library" button, choose the "Custom Controls" tab and then select RadSlider from the list.
5. Add RadSlider to the page (can be done in two ways):
5.1 Double-click the icon to insert RadSlider.
5.2 Switch to XAML view and add RadSlider declaratively.

<Grid x:...>
    <telerik:RadSlider />
</Grid>

Currently Blend is not able to find the ControlTemplate of the control, so if you decide to right-click and select "Edit Control Parts (Template)" > "Edit a Copy", you will get a ControlTemplate with nothing inside. Therefore you need to manually insert the theme in the resources of the page.
6. From the zip archive, open Slider.xaml, copy everything that is inside the UserControl.Resources and paste it in the Resources of your UserControl.
7. RadSlider's ControlTemplate uses the Visual State Manager, therefore you need to add a reference to the System.Windows.dll in order to use it.

xmlns:vsm="clr-namespace:System.Windows;assembly=System.Windows"

We are approaching the finish line. Now you need to apply the style that you added in the resources to the slider.
8. Expand the style that targets RadSlider and add a key to it as shown in the screenshot below.
9. Apply the style to the slider (again, this can be done in two ways):
9.1 Apply the style using Blend.
9.2 Declaratively:

<telerik:RadSlider ... />

10. With all the steps completed so far, you can easily edit the template of the slider.

Source code for this tutorial: StylingSlider
Additional Resources:
- Articles on Visual State Manager
- Using ControlTemplates

Any comments and suggestions would be greatly appreciated.

This is a nice article, and thank you. There are updates I can give, such as "Blend 3.5 June 2008 Preview" is now "Microsoft Expression Blend 2 SP1" at... Also, Silverlight 2 is now currently in its final stage.
STL - Page 3 (41-60 of 87)

Bool Values by DevX Pro
How do I make 1 bool take 1 bit instead of 1 byte?

The binary_search Algorithm by Danny Kalev
STL's binary_search() algorithm traverses a sequence and returns a Boolean value indicating whether the sought-after element exists in that sequence. binary_search() is declared in the header ...

Useful STL Terminology by Danny Kalev
Here are some key terms that you may find useful for reading Standard Template Library (STL) literature and documentation.

The unique() Algorithm by Danny Kalev
STL's unique() algorithm eliminates all but the first element from every consecutive group of equal elements in a sequence of elements. unique() takes two forward iterators, the first of which marks ...

STL Thread Safety by DevX Pro
Does STL use any synchronization, or do I have to implement it myself by critical section?

The reverse() Algorithm by Danny Kalev
Another useful STL algorithm is reverse(). This algorithm reverses the order of elements in a specified sequence. reverse() takes two iterators that mark the sequence's beginning and end, ...

Merging Two Lists by Danny Kalev
Merging two lists isn't only a popular homework assignment; rather, this task is sometimes needed in real world programming as well. Fortunately, you don't have to reinvent the wheel anymore. STL's ...

The replace() Algorithm by Danny Kalev
Another useful STL algorithm is replace(), which is defined in the standard header <algorithm> and has the following ...

Volatile Semantics and Container Objects by Danny Kalev
Volatile objects are used in multithreaded applications and applications that map hardware devices into registers. Although you can declare an STL container object with the volatile qualifier, you ...

The Possible Deprecation of vector<bool> and Its Consequences by Danny Kalev
In the early days of STL, C++ creators decided to include a specialized form of the vector container class, namely vector<bool>.
In this specialized vector, bits serve as the container's ...

Iterators Aren't Pointers by Danny Kalev
...

Accessing Arrays by DevX Pro
Normally, you access an array by the number of the element you want. Is it possible, in C++, to access an element by the value and return the number?

Why you shouldn't store auto_ptr objects in STL containers by Danny Kalev
Most C++ users already know that they shouldn't use auto_ptr objects as elements of STL containers. However, fewer users know exactly why this is so. ...

Efficiency of STL or Quality of Implementation? by Danny Kalev
I often hear people ask whether it's possible to write code that is quicker than STL. ...

Inserting Different Types of Objects in a STL List by DevX Pro
I would like to know if it is possible to insert objects of different types in the same STL list?

STL and User-Defined Classes by DevX Pro
I'm new to the Standard Template Library. How do I incorporate a user-defined class into a hash_map? What functions do I need to overload? I get compiler errors when I try to insert into my hash_map for even the simplest of classes.

Using the random_shuffle Algorithm by Danny Kalev
STL includes the random_shuffle() algorithm. As the name suggests, this algorithm randomly shuffles the elements of a sequence. It takes two arguments, the first of which is an iterator that points ...

Avoid Excessive use of Fully Qualified Names by Danny Kalev
To some extent, the use of fully qualified names is the recommended way of referring to namespace members because it uniquely identifies the member and avoids name conflicts. For ...

Bit Classes by DevX Pro
Are there power bit set classes around that can do what Verilog and VHDL can do so easily?
a[3:2,0] = b[4:1] & c[5]; // collections of width n mixed with 1
Doing this bit by bit (wire by wire) follows the textbook but is also slow. I could "" everything and interpret, or write a full-blown compiler for a tiny lang, but using C++/classes is much more expressive. Any suggestions?
An Alternative STL Implementation by Danny Kalev
The STLport organization offers an alternative implementation of the Standard Template Library. You can install the alternative instead of using the existing STL implementation shipped with your ...
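On the "Bit Classes" question above: Verilog-style bit slicing can be prototyped quickly before committing to a C++ class design. This is an illustrative sketch only, written in Python rather than C++, and the helper names are made up, not from any of the tips listed here:

```python
def bits(value, hi, lo):
    """Extract the bit-field value[hi:lo] (inclusive, hi >= lo), Verilog-style."""
    width = hi - lo + 1
    return (value >> lo) & ((1 << width) - 1)

def set_bits(value, hi, lo, field):
    """Return value with bits hi..lo replaced by field."""
    width = hi - lo + 1
    mask = ((1 << width) - 1) << lo
    return (value & ~mask) | ((field << lo) & mask)

# b[4:1] extracts bits 4 down to 1 of b
b = 0b10110
assert bits(b, 4, 1) == 0b1011

# a[3:2] = 0b11 sets a two-bit field
assert set_bits(0, 3, 2, 0b11) == 0b1100
```

A C++ version would wrap the same shift-and-mask logic in a proxy class so that slice assignment reads naturally.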
Some or most of you have probably taken some undergraduate- or graduate-level statistics courses. Unfortunately, the curricula for most introductory statistics courses are mostly focused on conducting statistical hypothesis tests as the primary means for inference: t-tests, chi-squared tests, analysis of variance, etc. Such tests seek to estimate whether groups or effects are "statistically significant", a concept that is poorly understood, and hence often misused, by most practitioners. Even when interpreted correctly, statistical significance is a questionable goal for statistical inference, as it is of limited utility.
A far more powerful approach to statistical analysis involves building flexible models with the overarching aim of estimating quantities of interest. This section of the tutorial illustrates how to use Python to build statistical models of low to moderate difficulty from scratch, and use them to extract estimates and associated measures of uncertainty.

%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# Set some Pandas options
pd.set_option('display.notebook_repr_html', False)
pd.set_option('display.max_columns', 20)
pd.set_option('display.max_rows', 25)

A recurring statistical problem is finding estimates of the relevant parameters that correspond to the distribution that best represents our data. In parametric inference, we specify a priori a suitable distribution, then choose the parameters that best fit the data.

x = np.array([ 1.00201077, 1.58251956, 0.94515919, 6.48778002, 1.47764604,
5.18847071, 4.21988095, 2.85971522, 3.40044437, 3.74907745,
1.18065796, 3.74748775, 3.27328568, 3.19374927, 8.0726155 ,
0.90326139, 2.34460034, 2.14199217, 3.27446744, 3.58872357,
1.20611533, 2.16594393, 5.56610242, 4.66479977, 2.3573932 ])
_ = plt.hist(x, bins=8)

We start with the problem of finding values for the parameters that provide the best fit between the model and the data, called point estimates.
First, we need to define what we mean by 'best fit'. There are two commonly used criteria:
- Method of moments chooses the parameters so that the sample moments (typically the sample mean and variance) match the theoretical moments of our chosen distribution.
- Maximum likelihood chooses the parameters to maximize the likelihood of having observed the data at hand.

e.g. Poisson distribution
The Poisson distribution models unbounded counts:
$$E(X) = \text{Var}(X) = \lambda$$

e.g. normal distribution
$$\begin{align}E(X) &= \mu \cr \text{Var}(X) &= \sigma^2 \end{align}$$

The dataset nashville_precip.txt contains NOAA precipitation data for Nashville measured since 1871. The gamma distribution is often a good fit to aggregated rainfall data, and will be our candidate distribution in this case.

precip = pd.read_table("data/nashville_precip.txt", index_col=0,
na_values='NA', delim_whitespace=True)
precip.head()

      Jan   Feb   Mar    Apr   May   Jun   Jul   Aug   Sep   Oct   Nov   Dec
Year
1871 2.76 4.58 5.01 4.13 3.30 2.98 1.58 2.36 0.95 1.31 2.13 1.65
1872 2.32 2.11 3.14 5.91 3.09 5.17 6.10 1.65 4.50 1.58 2.25 2.38
1873 2.96 7.14 4.11 3.59 6.31 4.20 4.63 2.36 1.81 4.28 4.36 5.94
1874 5.22 9.23 5.36 11.84 1.49 2.87 2.65 3.52 3.12 2.63 6.12 4.19
1875 6.15 3.06 8.14 4.22 1.73 5.63 8.12 1.60 3.79 1.25 5.46 4.30

_ = precip.hist(sharex=True, sharey=True, grid=False)
plt.tight_layout()

The first step is recognizing what sort of distribution to fit our data to. A couple of observations:
- the values are strictly positive (they are precipitation totals), and
- the monthly distributions appear continuous and right-skewed.
There are a few possible choices, but one suitable alternative is the gamma distribution:
$$f(x \mid \alpha, \beta) = \frac{x^{\alpha-1} e^{-x/\beta}}{\Gamma(\alpha)\beta^{\alpha}}$$
The method of moments simply assigns the empirical mean and variance to their theoretical counterparts, so that we can solve for the parameters. So, for the gamma distribution, the mean and variance are:
$$E(X) = \alpha\beta, \qquad \text{Var}(X) = \alpha\beta^2$$
So, if we solve for these parameters, we can use a gamma distribution to describe our data:
$$\hat{\alpha} = \frac{\bar{x}^2}{s^2}, \qquad \hat{\beta} = \frac{s^2}{\bar{x}}$$
Let's deal with the missing value in the October data. Given what we are trying to do, it is most sensible to fill in the missing value with the average of the available values.
precip.fillna(value={'Oct': precip.Oct.mean()}, inplace=True) Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov \ Year 1871 2.76 4.58 5.01 4.13 3.30 2.98 1.58 2.36 0.95 1.31 2.13 1872 2.32 2.11 3.14 5.91 3.09 5.17 6.10 1.65 4.50 1.58 2.25 1873 2.96 7.14 4.11 3.59 6.31 4.20 4.63 2.36 1.81 4.28 4.36 1874 5.22 9.23 5.36 11.84 1.49 2.87 2.65 3.52 3.12 2.63 6.12 1875 6.15 3.06 8.14 4.22 1.73 5.63 8.12 1.60 3.79 1.25 5.46 1876 6.41 2.22 5.28 3.62 3.40 5.65 7.15 5.77 2.52 2.68 1.26 1877 4.05 1.06 4.98 9.47 1.25 6.02 3.25 4.16 5.40 2.61 4.93 1878 3.34 2.10 3.48 6.88 2.33 3.28 9.43 5.02 1.28 2.17 3.20 1879 6.32 3.13 3.81 2.88 2.88 2.50 8.47 4.62 5.18 2.90 5.85 1880 3.74 12.37 8.16 5.26 4.13 3.97 5.69 2.22 5.39 7.24 5.77 1881 3.54 5.48 2.79 5.12 3.67 3.70 0.86 1.81 6.57 4.80 4.89 1882 14.51 8.61 9.38 3.59 7.38 2.54 4.06 5.54 1.61 1.11 3.60 ... ... ... ... ... ... ... ... ... ... ... ... 2000 3.52 3.75 3.34 6.23 7.66 1.74 2.25 1.95 1.90 0.26 6.39 2001 3.21 8.54 2.73 2.42 5.54 4.47 2.77 4.07 1.79 4.61 5.09 2002 4.93 1.99 9.40 4.31 3.98 3.76 5.64 3.13 6.29 4.48 2.91 2003 1.59 8.47 2.30 4.69 10.73 7.08 2.87 3.88 8.70 1.80 4.17 2004 3.60 5.77 4.81 6.69 6.90 3.39 3.19 4.24 4.55 4.90 5.21 2005 4.42 3.84 3.90 6.93 1.03 2.70 2.39 6.89 1.44 0.02 3.29 2006 6.57 2.69 2.90 4.14 4.95 2.19 2.64 5.20 4.00 2.98 4.05 2007 3.32 1.84 2.26 2.75 3.30 2.37 1.47 1.38 1.99 4.95 6.20 2008 4.76 2.53 5.56 7.20 5.54 2.21 4.32 1.67 0.88 5.03 1.75 2009 4.59 2.85 2.92 4.13 8.45 4.53 6.03 2.14 11.08 6.49 0.67 2010 4.13 2.77 3.52 3.48 16.43 4.96 5.86 6.99 1.17 2.49 5.41 2011 2.31 5.54 4.59 7.51 4.38 5.04 3.46 1.78 6.20 0.93 6.15 Dec Year 1871 1.65 1872 2.38 1873 5.94 1874 4.19 1875 4.30 1876 0.95 1877 2.49 1878 6.04 1879 9.15 1880 3.32 1881 4.85 1882 1.52 ... ... 
2000 3.44
2001 3.32
2002 5.81
2003 3.19
2004 5.93
2005 2.46
2006 3.41
2007 3.83
2008 6.72
2009 3.99
2010 1.87
2011 4.25

[141 rows x 12 columns]

Now, let's calculate the sample moments of interest, the means and variances by month:

precip_mean = precip.mean()
precip_mean

Jan 4.523688
Feb 4.097801
Mar 4.977589
Apr 4.204468
May 4.325674
Jun 3.873475
Jul 3.895461
Aug 3.367305
Sep 3.377660
Oct 2.610500
Nov 3.685887
Dec 4.176241
dtype: float64

precip_var = precip.var()
precip_var

Jan 6.928862
Feb 5.516660
Mar 5.365444
Apr 4.117096
May 5.306409
Jun 5.033206
Jul 3.777012
Aug 3.779876
Sep 4.940099
Oct 2.741659
Nov 3.679274
Dec 5.418022
dtype: float64

We then use these moments to estimate $\alpha$ and $\beta$ for each month:

alpha_mom = precip_mean ** 2 / precip_var
beta_mom = precip_var / precip_mean
alpha_mom, beta_mom

(Jan 2.953407
Feb 3.043866
Mar 4.617770
Apr 4.293694
May 3.526199
Jun 2.980965
Jul 4.017624
Aug 2.999766
Sep 2.309383
Oct 2.485616
Nov 3.692511
Dec 3.219070
dtype: float64, Jan 1.531684
Feb 1.346249
Mar 1.077920
Apr 0.979219
May 1.226724
Jun 1.299403
Jul 0.969593
Aug 1.122522
Sep 1.462581
Oct 1.050243
Nov 0.998206
Dec 1.297344
dtype: float64)

We can use the gamma.pdf function in scipy.stats.distributions to plot the distributions implied by the calculated alphas and betas.
For example, here is January:

from scipy.stats.distributions import gamma

precip.Jan.hist(normed=True, bins=20)
# beta_mom holds scale parameters, so pass them via scale=
# (the second positional argument of gamma.pdf is loc, not scale)
plt.plot(np.linspace(0, 10),
         gamma.pdf(np.linspace(0, 10), alpha_mom[0], scale=beta_mom[0]))

[<matplotlib.lines.Line2D at 0x105244050>]

Looping over all months, we can create a grid of plots for the distribution of rainfall, using the gamma distribution:

axs = precip.hist(normed=True, figsize=(12, 8), sharex=True, sharey=True,
                  bins=15, grid=False)

for ax in axs.ravel():
    # Get month
    m = ax.get_title()
    # Plot fitted distribution
    x = np.linspace(*ax.get_xlim())
    ax.plot(x, gamma.pdf(x, alpha_mom[m], scale=beta_mom[m]))
    # Annotate with parameter estimates
    label = 'alpha = {0:.2f}\nbeta = {1:.2f}'.format(alpha_mom[m], beta_mom[m])
    ax.annotate(label, xy=(10, 0.2))

plt.tight_layout()

Maximum likelihood (ML) fitting is usually more work than the method of moments, but it is preferred as the resulting estimator is known to have good theoretical properties.
There is a ton of theory regarding ML. We will restrict ourselves to the mechanics here.
Say we have some data $y = y_1,y_2,\ldots,y_n$ that is distributed according to some distribution:
$$Pr(Y_i = y_i \mid \theta)$$
Here, for example, is a Poisson distribution that describes the distribution of some discrete variables, typically counts:

y = np.random.poisson(5, size=100)
plt.hist(y, bins=12, normed=True)
plt.xlabel('y'); plt.ylabel('Pr(y)')

<matplotlib.text.Text at 0x1062f4510>
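Before going further with the likelihood machinery, it is worth a quick sanity check that the Poisson pmf is a proper probability model. Here is a stdlib-only sketch (separate from the NumPy code above) verifying that the probabilities sum to one and that the mean equals $\lambda$:

```python
import math

def poisson_pmf(k, lam):
    # Pr(X = k) for a Poisson(lam) random variable
    return math.exp(-lam) * lam ** k / math.factorial(k)

lam = 5.0
# Probabilities over a generous range of counts; the tail beyond 100
# is astronomically small for lambda = 5
probs = [poisson_pmf(k, lam) for k in range(100)]

assert abs(sum(probs) - 1.0) < 1e-9                                # sums to one
assert abs(sum(k * p for k, p in enumerate(probs)) - lam) < 1e-9   # mean = lambda
```

The same two checks (total probability one, moments as advertised) are a useful habit whenever you hand-code a pmf or pdf.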
$$L(y|\lambda=5) = \frac{e^{-5} 5^y}{y!}$$ So, for any given value of $y$, we can calculate its likelihood: poisson_like = lambda x, lam: np.exp(-lam) * (lam**x) / (np.arange(x)+1).prod() lam = 6 value = 10 poisson_like(value, lam) 0.041303093412337726 np.sum(poisson_like(yi, lam) for yi in y) 11.338402687045475 lam = 8 np.sum(poisson_like(yi, lam) for yi in y) 7.6625623857972949 We can plot the likelihood function for any value of the parameter(s): lambdas = np.linspace(0,15) x = 5 plt.plot(lambdas, [poisson_like(x, l) for l in lambdas]) plt.xlabel('$\lambda$') plt.ylabel('L($\lambda$|x={0})'.format(x)) <matplotlib.text.Text at 0x106a2f190> How is the likelihood function different than the probability distribution function (PDF)? The likelihood is a function of the parameter(s) given the data, whereas the PDF returns the probability of data given a particular parameter value. Here is the PDF of the Poisson for $\lambda=5$. lam = 5 xvals = np.arange(15) plt.bar(xvals, [poisson_like(x, lam) for x in xvals]) plt.xlabel('x') plt.ylabel('Pr(X|$\lambda$=5)') <matplotlib.text.Text at 0x107124a10> Why are we interested in the likelihood function? A reasonable estimate of the true, unknown value for the parameter is one which maximizes the likelihood function. So, inference is reduced to an optimization problem. Going back to the rainfall data, if we are using a gamma distribution we need to maximize: $$\begin{align}l(\alpha,\beta) &= \sum_{i=1}^n \log[\beta^{\alpha} x^{\alpha-1} e^{-x/\beta}\Gamma(\alpha)^{-1}] \cr &= n[(\alpha-1)\overline{\log(x)} - \bar{x}\beta + \alpha\log(\beta) - \log\Gamma(\alpha)]\end{align}$$ (Its usually easier to work in the log scale) where $n = 2012 − 1871 = 141$ and the bar indicates an average over all i. We choose $\alpha$ and $\beta$ to maximize $l(\alpha,\beta)$. Notice $l$ is infinite if any $x$ is zero. We do not have any zeros, but we do have an NA value for one of the October data, which we dealt with above. 
To find the maximum of any function, we typically take the derivative with respect to the variable to be maximized, set it to zero and solve for that variable. $$\frac{\partial l(\alpha,\beta)}{\partial \beta} = n\left(\frac{\alpha}{\beta} - \bar{x}\right) = 0$$ Which can be solved as $\beta = \alpha/\bar{x}$. However, plugging this into the derivative with respect to $\alpha$ yields: $$\frac{\partial l(\alpha,\beta)}{\partial \alpha} = \log(\alpha) + \overline{\log(x)} - \log(\bar{x}) - \frac{\Gamma'(\alpha)}{\Gamma(\alpha)} = 0$$ This has no closed form solution. We must use numerical optimization! Numerical optimization algorithms take an initial "guess" at the solution, and iteratively improve the guess until it gets "close enough" to the answer. Here, we will use the Newton-Raphson algorithm, which iterates the update $$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}$$ Which is available to us via SciPy: from scipy.optimize import newton Here is a graphical example of how Newton-Raphson converges on a solution, using an arbitrary function: # some function func = lambda x: 3./(1 + 400*np.exp(-2*x)) - 1 xvals = np.linspace(0, 6) plt.plot(xvals, func(xvals)) plt.text(5.3, 2.1, '$f(x)$', fontsize=16) # zero line plt.plot([0,6], [0,0], 'k-') # value at step n plt.plot([4,4], [0,func(4)], 'k:') plt.text(4, -.2, '$x_n$', fontsize=16) # tangent line tanline = lambda x: -0.858 + 0.626*x plt.plot(xvals, tanline(xvals), 'r--') # point at step n+1 xprime = 0.858/0.626 plt.plot([xprime, xprime], [tanline(xprime), func(xprime)], 'k:') plt.text(xprime+.1, -.2, '$x_{n+1}$', fontsize=16) <matplotlib.text.Text at 0x107202750> To apply the Newton-Raphson algorithm, we need a function that returns a vector containing the first and second derivatives of the function with respect to the variable of interest.
In our case, this is: from scipy.special import psi, polygamma dlgamma = lambda m, log_mean, mean_log: np.log(m) - psi(m) - log_mean + mean_log dl2gamma = lambda m, *args: 1./m - polygamma(1, m) where log_mean and mean_log are $\log{\bar{x}}$ and $\overline{\log(x)}$, respectively. psi (the digamma function) and polygamma(1, m) (the trigamma function) are the first and second derivatives of the log-gamma function, which appear when you differentiate the log-likelihood. # Calculate statistics log_mean = precip.mean().apply(np.log) mean_log = precip.apply(np.log).mean() Time to optimize! # Alpha MLE for December alpha_mle = newton(dlgamma, 2, dl2gamma, args=(log_mean[-1], mean_log[-1])) alpha_mle 3.5189679152399647 And now plug this back into the solution for beta: beta_mle = alpha_mle/precip.mean()[-1] beta_mle 0.84261607548413797 We can compare the fit of the estimates derived from MLE to those from the method of moments: dec = precip.Dec dec.hist(normed=True, bins=10, grid=False) x = np.linspace(0, dec.max()) plt.plot(x, gamma.pdf(x, alpha_mom[-1], beta_mom[-1]), 'm-') plt.plot(x, gamma.pdf(x, alpha_mle, beta_mle), 'r--') [<matplotlib.lines.Line2D at 0x107115c90>] For some common distributions, SciPy includes methods for fitting via MLE: from scipy.stats import gamma gamma.fit(precip.Dec) (2.2427517753152308, 0.65494604470188622, 1.570073932063466) This fit is not directly comparable to our estimates, however, because SciPy's gamma.fit method fits a 3-parameter (shape, location, scale) version of the gamma distribution. Suppose that we observe $Y$ truncated below at $a$ (where $a$ is known). If $X$ is the distribution of our observation, then: $$ P(X \le x) = P(Y \le x|Y \gt a) = \frac{P(a \lt Y \le x)}{P(Y \gt a)}$$ (so, $Y$ is the original variable and $X$ is the truncated variable) Then X has the density: $$f_X(x) = \frac{f_Y (x)}{1-F_Y (a)} \, \text{for} \, x \gt a$$ Suppose $Y \sim N(\mu, \sigma^2)$ and $x_1,\ldots,x_n$ are independent observations of $X$. We can use maximum likelihood to find $\mu$ and $\sigma$.
First, we can simulate a truncated distribution using a while statement to eliminate samples that are outside the support of the truncated distribution. x = np.random.normal(size=10000) a = -1 x_small = x < a while x_small.sum(): x[x_small] = np.random.normal(size=x_small.sum()) x_small = x < a _ = plt.hist(x, bins=100) We can construct a log likelihood for this function using the conditional form: $$f_X(x) = \frac{f_Y (x)}{1-F_Y (a)} \, \text{for} \, x \gt a$$ from scipy.stats.distributions import norm trunc_norm = lambda theta, a, x: -(np.log(norm.pdf(x, theta[0], theta[1])) - np.log(1 - norm.cdf(a, theta[0], theta[1]))).sum() For this example, we will use another optimization algorithm, the Nelder-Mead simplex algorithm. It has a couple of advantages: it does not require derivatives, and it can handle a wide variety of (possibly poorly-behaved) objective functions. SciPy implements this algorithm in its fmin function: from scipy.optimize import fmin fmin(trunc_norm, np.array([1,2]), args=(-1, x)) Optimization terminated successfully. Current function value: 11077.807912 Iterations: 44 Function evaluations: 82 array([ 0.02244612, 0.99710875]) In general, simulating data is a terrific way of testing your model before using it with real data. In some instances, we may not be interested in the parameters of a particular distribution of data, but just a smoothed representation of the data at hand. In this case, we can estimate the distribution non-parametrically (i.e. making no assumptions about the form of the underlying distribution) using kernel density estimation.
# Some random data y = np.random.random(15) * 10 y array([ 0.62402604, 1.06204307, 7.27542973, 5.91978919, 9.95408103, 9.51695422, 9.54608936, 4.3288413 , 4.15275767, 2.47654605, 3.74949496, 4.9779126 , 0.68937206, 8.82221055, 2.87731832]) x = np.linspace(0, 10, 100) # Smoothing parameter s = 0.4 # Calculate the kernels kernels = np.transpose([norm.pdf(x, yi, s) for yi in y]) plt.plot(x, kernels, 'k:') plt.plot(x, kernels.sum(1)) plt.plot(y, np.zeros(len(y)), 'ro', ms=10) [<matplotlib.lines.Line2D at 0x106b3ed10>] SciPy implements a Gaussian KDE that automatically chooses an appropriate bandwidth. Let's create a bi-modal distribution of data that is not easily summarized by a parametric distribution: # Create a bi-modal distribution with a mixture of Normals. x1 = np.random.normal(0, 3, 50) x2 = np.random.normal(4, 1, 50) # Append by row x = np.r_[x1, x2] plt.hist(x, bins=8, normed=True) (array([ 0.00557232, 0.02228927, 0.04457855, 0.08358477, 0.0612955 , 0.10030173, 0.1838865 , 0.05572318]), array([-7.66367033, -5.86908508, -4.07449983, -2.27991459, -0.48532934, 1.30925591, 3.10384116, 4.89842641, 6.69301165]), <a list of 8 Patch objects>) from scipy.stats import kde density = kde.gaussian_kde(x) xgrid = np.linspace(x.min(), x.max(), 100) plt.hist(x, bins=8, normed=True) plt.plot(xgrid, density(xgrid), 'r-') [<matplotlib.lines.Line2D at 0x107360190>] Recall the cervical dystonia database, which is a clinical trial of botulinum toxin type B (BotB) for patients with cervical dystonia from nine U.S. sites. The response variable is measurements on the Toronto Western Spasmodic Torticollis Rating Scale (TWSTRS), measuring severity, pain, and disability of cervical dystonia (high scores mean more impairment). One way to check the efficacy of the treatment is to compare the distribution of TWSTRS for control and treatment patients at the end of the study. 
Use the method of moments or MLE to calculate the mean and variance of TWSTRS at week 16 for one of the treatments and the control group. Assume that the distribution of the twstrs variable is normal: $$f(x \mid \mu, \sigma^2) = \sqrt{\frac{1}{2\pi\sigma^2}} \exp\left\{ -\frac{1}{2} \frac{(x-\mu)^2}{\sigma^2} \right\}$$ cdystonia = pd.read_csv("data/cdystonia.csv") cdystonia[cdystonia.obs==6].hist(column='twstrs', by=cdystonia.treat, bins=8) array([[<matplotlib.axes._subplots.AxesSubplot object at 0x1074a7710>, <matplotlib.axes._subplots.AxesSubplot object at 0x107439950>], [<matplotlib.axes._subplots.AxesSubplot object at 0x1073ffc90>, <matplotlib.axes._subplots.AxesSubplot object at 0x1075e8bd0>]], dtype=object) # Write your answer here A general, primary goal of many statistical data analysis tasks is to relate the influence of one variable on another. For example, we may wish to know how different medical interventions influence the incidence or duration of disease, or perhaps how a baseball player's performance varies as a function of age. x = np.array([2.2, 4.3, 5.1, 5.8, 6.4, 8.0]) y = np.array([0.4, 10.1, 14.0, 10.9, 15.4, 18.5]) plt.plot(x,y,'ro') [<matplotlib.lines.Line2D at 0x1076a07d0>] We can build a model to characterize the relationship between $X$ and $Y$, recognizing that additional factors other than $X$ (the ones we have measured or are interested in) may influence the response variable $Y$: $$y_i = f(x_i) + \epsilon_i$$ where $f$ is some function, for example a linear function: $$f(x_i) = \beta_0 + \beta_1 x_i$$ and $\epsilon_i$ accounts for the difference between the observed response $y_i$ and its prediction from the model $\hat{y_i} = \beta_0 + \beta_1 x_i$. This is sometimes referred to as process uncertainty. We would like to select $\beta_0, \beta_1$ so that the difference between the predictions and the observations is zero, but this is not usually possible. Instead, we choose a reasonable criterion: the smallest sum of the squared differences between $\hat{y}$ and $y$.
Squaring serves two purposes: (1) to prevent positive and negative values from cancelling each other out and (2) to strongly penalize large deviations. Whether the latter is a good thing or not depends on the goals of the analysis. In other words, we will select the parameters that minimize the squared error of the model. ss = lambda theta, x, y: np.sum((y - theta[0] - theta[1]*x) ** 2) ss([0,1],x,y) 333.35000000000002 b0,b1 = fmin(ss, [0,1], args=(x,y)) b0,b1 Optimization terminated successfully. Current function value: 21.375000 Iterations: 79 Function evaluations: 153 (-4.3500136038870876, 3.0000002915386412) plt.plot(x, y, 'ro') plt.plot([0,10], [b0, b0+b1*10]) [<matplotlib.lines.Line2D at 0x107671cd0>] plt.plot(x, y, 'ro') plt.plot([0,10], [b0, b0+b1*10]) for xi, yi in zip(x,y): plt.plot([xi]*2, [yi, b0+b1*xi], 'k:') plt.xlim(2, 9); plt.ylim(0, 20) (0, 20) Minimizing the sum of squares is not the only criterion we can use; it is just a very popular (and successful) one. For example, we can try to minimize the sum of absolute differences: sabs = lambda theta, x, y: np.sum(np.abs(y - theta[0] - theta[1]*x)) b0,b1 = fmin(sabs, [0,1], args=(x,y)) print b0,b1 plt.plot(x, y, 'ro') plt.plot([0,10], [b0, b0+b1*10]) Optimization terminated successfully. Current function value: 10.162463 Iterations: 39 Function evaluations: 77 0.00157170444494 2.31231743181 [<matplotlib.lines.Line2D at 0x1077cd890>] We are not restricted to a straight-line regression model; we can represent a curved relationship between our variables by introducing polynomial terms. For example, a quadratic model: $$y_i = \beta_0 + \beta_1 x_i + \beta_2 x_i^2 + \epsilon_i$$ ss2 = lambda theta, x, y: np.sum((y - theta[0] - theta[1]*x - theta[2]*(x**2)) ** 2) b0,b1,b2 = fmin(ss2, [1,1,-1], args=(x,y)) print b0,b1,b2 plt.plot(x, y, 'ro') xvals = np.linspace(0, 10, 100) plt.plot(xvals, b0 + b1*xvals + b2*(xvals**2)) Optimization terminated successfully.
Current function value: 14.001110 Iterations: 198 Function evaluations: 372 -11.0748186039 6.0576975948 -0.302681057088 [<matplotlib.lines.Line2D at 0x10772f4d0>] Although a polynomial model characterizes a nonlinear relationship, it is a linear problem in terms of estimation. That is, the regression model $f(y | x)$ is linear in the parameters. For some data, it may be reasonable to consider polynomials of order > 2. For example, consider the relationship between the number of home runs a baseball player hits and the number of runs batted in (RBI) they accumulate; clearly, the relationship is positive, but we may not expect a linear relationship. ss3 = lambda theta, x, y: np.sum((y - theta[0] - theta[1]*x - theta[2]*(x**2) - theta[3]*(x**3)) ** 2) bb = pd.read_csv("data/baseball.csv", index_col=0) plt.plot(bb.hr, bb.rbi, 'r.') b0,b1,b2,b3 = fmin(ss3, [0,1,-1,0], args=(bb.hr, bb.rbi)) xvals = np.arange(40) plt.plot(xvals, b0 + b1*xvals + b2*(xvals**2) + b3*(xvals**3)) Optimization terminated successfully. Current function value: 4274.128398 Iterations: 230 Function evaluations: 407 [<matplotlib.lines.Line2D at 0x107865190>] Of course, we need not fit least squares models by hand. The statsmodels package implements least squares models that allow for model fitting in a single line: import statsmodels.api as sm straight_line = sm.OLS(y, sm.add_constant(x)).fit() straight_line.summary() /usr/local/lib/python2.7/site-packages/statsmodels/stats/stattools.py:72: UserWarning: omni_normtest is not valid with less than 8 observations; 6 samples were given. "samples were given."
% int(n)) from statsmodels.formula.api import ols as OLS data = pd.DataFrame(dict(x=x, y=y)) cubic_fit = OLS('y ~ x + I(x**2)', data).fit() cubic_fit.summary() # Write your answer here def calc_poly(params, data): x = np.c_[[data**i for i in range(len(params))]] return np.dot(params, x) ssp = lambda theta, x, y: np.sum((y - calc_poly(theta, x)) ** 2) betas = fmin(ssp, np.zeros(10), args=(x,y), maxiter=1e6) plt.plot(x, y, 'ro') xvals = np.linspace(0, max(x), 100) plt.plot(xvals, calc_poly(betas, xvals)) Optimization terminated successfully. Current function value: 7.015262 Iterations: 663 Function evaluations: 983 [<matplotlib.lines.Line2D at 0x1074ad290>] One approach is to use an information-theoretic criterion to select the most appropriate model. For example Akaike's Information Criterion (AIC) balances the fit of the model (in terms of the likelihood) with the number of parameters required to achieve that fit. We can easily calculate AIC as: $$AIC = n \log(\hat{\sigma}^2) + 2p$$ where $p$ is the number of parameters in the model and $\hat{\sigma}^2 = RSS/(n-p-1)$. Notice that as the number of parameters increase, the residual sum of squares goes down, but the second term (a penalty) increases. To apply AIC to model selection, we choose the model that has the lowest AIC value. n = len(x) aic = lambda rss, p, n: n * np.log(rss/(n-p-1)) + 2*p RSS1 = ss(fmin(ss, [0,1], args=(x,y)), x, y) RSS2 = ss2(fmin(ss2, [1,1,-1], args=(x,y)), x, y) print aic(RSS1, 2, n), aic(RSS2, 3, n) Optimization terminated successfully. Current function value: 21.375000 Iterations: 79 Function evaluations: 153 Optimization terminated successfully. Current function value: 14.001110 Iterations: 198 Function evaluations: 372 15.7816583572 17.6759368019 Hence, we would select the 2-parameter (linear) model. 
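The degree-by-degree comparison can be automated with np.polyfit, which returns the residual sum of squares directly. A small sketch (not a cell from the notebook; it restates the toy x and y from above, and stops at degree 3 so that $n - p - 1$ stays positive for the AIC formula):

```python
import numpy as np

# Toy data from the regression example above
x = np.array([2.2, 4.3, 5.1, 5.8, 6.4, 8.0])
y = np.array([0.4, 10.1, 14.0, 10.9, 15.4, 18.5])
n = len(x)

aic = lambda rss, p, n: n * np.log(rss / (n - p - 1)) + 2 * p

aic_values = {}
for deg in (1, 2, 3):
    # full=True makes polyfit also return the residual sum of squares
    coefs, rss, rank, sv, rcond = np.polyfit(x, y, deg, full=True)
    p = deg + 1  # number of estimated coefficients
    aic_values[deg] = aic(rss[0], p, n)

# Matching the calculation in the text, the linear fit beats the quadratic
print(aic_values)
```

This reproduces the two AIC values computed above for the linear and quadratic fits, and extends the same comparison one degree further.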
Fitting a line to the relationship between two variables using the least squares approach is sensible when the variable we are trying to predict is continuous, but what about when the data are dichotomous? Let's consider the problem of predicting survival in the Titanic disaster, based on our available information. For example, let's say that we want to predict survival as a function of the fare paid for the journey. titanic = pd.read_excel("data/titanic.xls", "titanic") titanic.name 5 Anderson, Mr. Harry 6 Andrews, Miss. Kornelia Theodosia 7 Andrews, Mr. Thomas Jr 8 Appleton, Mrs. Edward Dale (Charlotte Lamson) 9 Artagaveytia, Mr. Ramon ... 1298 Wittevrongel, Mr. Camille 1299 Yasbeck, Mr. Antoni 1300 Yasbeck, Mrs. Antoni (Selini Alexander) 1301 Youseff, Mr. Gerious 1302 Yousif, Mr. Wazli 1303 Yousseff, Mr. Gerious 1304 Zabour, Miss. Hileni 1305 Zabour, Miss. Thamine 1306 Zakarian, Mr. Mapriededer 1307 Zakarian, Mr. Ortin 1308 Zimmerman, Mr. Leo Name: name, Length: 1309, dtype: object jitter = np.random.normal(scale=0.02, size=len(titanic)) plt.scatter(np.log(titanic.fare), titanic.survived + jitter, alpha=0.3) plt.yticks([0,1]) plt.ylabel("survived") plt.xlabel("log(fare)") <matplotlib.text.Text at 0x1082b5510> I have added random jitter on the y-axis to help visualize the density of the points, and have plotted fare on the log scale. Clearly, fitting a line through this data makes little sense, for several reasons. First, for most values of the predictor variable, the line would predict values that are not zero or one. Second, it would seem odd to choose least squares (or similar) as a criterion for selecting the best line. x = np.log(titanic.fare[titanic.fare>0]) y = titanic.survived[titanic.fare>0] betas_titanic = fmin(ss, [1,1], args=(x,y)) Optimization terminated successfully.
Current function value: 277.621917 Iterations: 55 Function evaluations: 103 jitter = np.random.normal(scale=0.02, size=len(titanic)) plt.scatter(np.log(titanic.fare), titanic.survived + jitter, alpha=0.3) plt.yticks([0,1]) plt.ylabel("survived") plt.xlabel("log(fare)") plt.plot([0,7], [betas_titanic[0], betas_titanic[0] + betas_titanic[1]*7.]) [<matplotlib.lines.Line2D at 0x1083988d0>] If we look at this data, we can see that for most values of fare, there are some individuals that survived and some that did not. However, notice that the cloud of points is denser on the "survived" (y=1) side for larger values of fare than on the "died" (y=0) side. Rather than model the binary outcome explicitly, it makes sense instead to model the probability of death or survival in a stochastic model. Probabilities are measured on a continuous [0,1] scale, which may be more amenable for prediction using a regression line. We need to consider a different probability model for this exercise however; let's consider the Bernoulli distribution as a generative model for our data: $$f(y|p) = p^y (1-p)^{1-y}$$ where $y = \{0,1\}$ and $p \in [0,1]$. So, this model predicts whether $y$ is zero or one as a function of the probability $p$. Notice that when $y=1$, the $1-p$ term disappears, and when $y=0$, the $p$ term disappears. So, the model we want to fit should look something like this: $$p_i = \beta_0 + \beta_1 x_i + \epsilon_i$$ However, since $p$ is constrained to be between zero and one, it is easy to see where a linear (or polynomial) model might predict values outside of this range. We can modify this model slightly by using a link function to transform the probability to have an unbounded range on a new scale.
Specifically, we can use a logit transformation as our link function: $$\text{logit}(p) = \log\left[\frac{p}{1-p}\right]$$ Here's a plot of $p/(1-p)$ logit = lambda p: np.log(p/(1.-p)) unit_interval = np.linspace(0,1) plt.plot(unit_interval/(1-unit_interval), unit_interval) [<matplotlib.lines.Line2D at 0x108432650>] And here's the logit function: plt.plot(logit(unit_interval), unit_interval) [<matplotlib.lines.Line2D at 0x108464390>] The inverse of the logit transformation is: $$p = \frac{1}{1 + e^{-x}}$$ So, now our model is: $$\text{logit}(p_i) = \beta_0 + \beta_1 x_i$$ We can fit this model using maximum likelihood. Our likelihood, again based on the Bernoulli model is: $$L(y|p) = \prod_{i=1}^n p_i^{y_i} (1-p_i)^{1-y_i}$$ which, on the log scale is: $$l(y|p) = \sum_{i=1}^n \left[ y_i \log(p_i) + (1-y_i)\log(1-p_i) \right]$$ We can easily implement this in Python, keeping in mind that fmin minimizes, rather than maximizes functions: invlogit = lambda x: 1. / (1 + np.exp(-x)) def logistic_like(theta, x, y): p = invlogit(theta[0] + theta[1] * x) # Return negative of log-likelihood return -np.sum(y * np.log(p) + (1-y) * np.log(1 - p)) Remove null values from variables x, y = titanic[titanic.fare.notnull()][['fare', 'survived']].values.T ... and fit the model. b0,b1 = fmin(logistic_like, [0.5,0], args=(x,y)) b0, b1 Optimization terminated successfully. Current function value: 827.015955 Iterations: 47 Function evaluations: 93 (-0.88238984528338194, 0.012452067664164127) jitter = np.random.normal(scale=0.01, size=len(x)) plt.plot(x, y+jitter, 'r.', alpha=0.3) plt.yticks([0,.25,.5,.75,1]) xvals = np.linspace(0, 600) plt.plot(xvals, invlogit(b0+b1*xvals)) [<matplotlib.lines.Line2D at 0x10865bc50>] As with our least squares model, we can easily fit logistic regression models in statsmodels, in this case using the GLM (generalized linear model) class with a binomial error distribution specified.
logistic = sm.GLM(y, sm.add_constant(x), family=sm.families.Binomial()).fit() logistic.summary() # Write your answer here Parametric inference can be non-robust: inferences can be badly wrong when the underlying distributional assumptions are violated (for example, by outliers). Parametric inference can be difficult: the sampling distribution of a statistic may be hard or impossible to derive analytically. An alternative is to estimate the sampling distribution of a statistic empirically without making assumptions about the form of the population. We have seen this already with the kernel density estimate. The bootstrap is a resampling method discovered by Brad Efron that allows one to approximate the true sampling distribution of a dataset, and thereby obtain estimates of the mean and variance of the distribution. Bootstrap sample: $S_i^*$ is a sample of size $n$, with replacement. In Python, we have already seen the NumPy function permutation that can be used in conjunction with Pandas' take method to generate a random sample of some data without replacement: np.random.permutation(titanic.name)[:5] array([u'Meek, Mrs. Thomas (Annie Louise Rowley)', u'Thorneycroft, Mr. Percival', u'Williams, Mr. Leslie', u'Graham, Mr. George Edward', u'Petroff, Mr. Nedelio'], dtype=object) Similarly, we can use the random.randint method to generate a sample with replacement, which we can use when bootstrapping. random_ind = np.random.randint(0, len(titanic), 5) titanic.name[random_ind] 41 Brown, Mrs. James Joseph (Margaret Tobin) 1061 Nilsson, Miss. Helmina Josefina 937 Klasen, Miss. Gertrud Emilia 426 Hale, Mr. Reginald 831 Goodwin, Mr.
Charles Frederick Name: name, dtype: object We regard S as an "estimate" of the population P: population : sample :: sample : bootstrap sample The idea is to generate replicate bootstrap samples: $$S_1^*, S_2^*, \ldots, S_R^*$$ Compute statistic $t$ (estimate) for each bootstrap sample: $$t_i^* = t(S_i^*)$$ n = 10 R = 1000 # Original sample (n=10) x = np.random.normal(size=n) # 1000 bootstrap samples of size 10 s = [x[np.random.randint(0,n,n)].mean() for i in range(R)] _ = plt.hist(s, bins=30) boot_mean = np.sum(s)/R boot_mean 0.087385394806476724 boot_var = ((np.array(s) - boot_mean) ** 2).sum() / (R-1) boot_var 0.10590407752057245 Since we have estimated the expectation of the bootstrapped statistics, we can estimate the bias of T: $$\hat{B}^* = \bar{T}^* - T$$ boot_mean - np.mean(x) -0.00528084680355842 An attractive feature of bootstrap statistics is the ease with which you can obtain an estimate of uncertainty for a given statistic. We simply use the empirical quantiles of the bootstrapped statistics to obtain percentiles corresponding to a confidence interval of interest. This employs the ordered bootstrap replicates: $$T_{(1)}^*, T_{(2)}^*, \ldots, T_{(R)}^*$$ Simply extract the $100(\alpha/2)$ and $100(1-\alpha/2)$ percentiles: $$T_{[(R+1)\alpha/2]}^* \lt \theta \lt T_{[(R+1)(1-\alpha/2)]}^*$$ s_sorted = np.sort(s) s_sorted[:10] array([-0.82890714, -0.77634577, -0.76588512, -0.76230089, -0.75578488, -0.73850118, -0.72869116, -0.72862786, -0.72840095, -0.71831374]) s_sorted[-10:] array([ 0.81823418, 0.86179331, 0.92314175, 0.93496722, 0.9358216 , 1.02058937, 1.03085586, 1.03121927, 1.22699691, 1.3599996 ]) alpha = 0.05 s_sorted[[(R+1)*alpha/2, (R+1)*(1-alpha/2)]] array([-0.5684053 , 0.68682205]) # Write your answer here
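The percentile-interval recipe above is worth wrapping in a reusable function. A sketch (not a cell from the notebook; it uses np.percentile to handle the sorting and indexing, and a fixed seed for repeatability):

```python
import numpy as np

def bootstrap_ci(data, stat=np.mean, R=1000, alpha=0.05, seed=42):
    """Percentile bootstrap confidence interval for a statistic."""
    rng = np.random.RandomState(seed)
    n = len(data)
    # R bootstrap replicates of the statistic, resampling with replacement
    boot_stats = [stat(data[rng.randint(0, n, n)]) for _ in range(R)]
    return np.percentile(boot_stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])

data = np.random.RandomState(1).normal(size=50)
lo, hi = bootstrap_ci(data)
# The sample mean should fall inside its own bootstrap interval
print(lo, hi, data.mean())
```

Passing a different stat (np.median, np.std, or any callable) gives an interval for that statistic with no extra derivation, which is exactly the appeal of the bootstrap.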
http://nbviewer.jupyter.org/gist/fonnesbeck/633f38dfb1d67b2e5fe9/4.%20Statistical%20Data%20Modeling.ipynb
CC-MAIN-2017-51
refinedweb
5,992
61.22
NAME

cfsetospeed - Set the output baud rate for a terminal

SYNOPSIS

#include <termios.h>

int cfsetospeed( struct termios *termios_p, speed_t speed );

LIBRARY

Standard C Library (libc)

STANDARDS

Interfaces documented on this reference page conform to industry standards as follows: cfsetospeed(): POSIX.1, XPG4, XPG4-UNIX. Refer to the standards(5) reference page for more information about industry standards and associated tags.

PARAMETERS

termios_p: Points to a termios structure containing the output baud rate.

speed: Specifies the new output baud rate.

DESCRIPTION

The cfsetospeed() function sets the output baud rate stored in the structure pointed to by the termios_p parameter to the speed specified by the speed parameter. The zero baud rate, B0, is used to terminate the connection. If B0 is specified, the modem control lines are no longer asserted. Normally, this disconnects the line. There is no effect on the baud rates set in the hardware or on modem control lines until a subsequent successful call is made to the tcsetattr() function on the same termios structure.

RETURN VALUES

Upon successful completion, the cfsetospeed() function returns a value of 0 (zero). Otherwise, a value of -1 is returned.

RELATED INFORMATION

Functions: cfgetispeed(3), cfgetospeed(3), cfsetispeed(3), tcsetattr(3)

Files: termios(4)

Standards: standards(5)

cfsetospeed(3)
https://nixdoc.net/man-pages/Tru64/man3/cfsetospeed.3.html
NAME

vfork - create a child process and block parent

SYNOPSIS

#include <sys/types.h>
#include <unistd.h>

pid_t vfork(void);

STANDARD DESCRIPTION

(From XPG4 /.

ERRORS

EAGAIN - Too many processes - try again.

ENOMEM - There is insufficient swap space for the new process.

HISTORIC DESCRIPTION

Under Linux, fork() is implemented using copy-on-write pages, so the only penalty incurred by fork() is the time and memory required to duplicate the parent's page tables, and to create a unique task structure for the child. However, in the bad old days a fork() would require making a complete copy of the caller's data space, often needlessly, since usually immediately afterwards an exec() is done. Thus, for greater efficiency, BSD introduced the vfork system call, which did not fully copy the address space of the parent process, but borrowed the parent's memory and thread of control until a call to execve(2) or an exit occurred.

SEE ALSO

clone(2), execve(2), fork(2), wait(2)

Important: Use the man command (% man) to see how a command is used on your particular computer.
http://linux.about.com/library/cmd/blcmdl2_vfork.htm
On Thu, Sep 06, 2007 at 01:12:19PM -0600, Eric Blake wrote: >-----BEGIN PGP SIGNED MESSAGE----- >Hash: SHA1 > >A mailing list is more appropriate for this than me personally - > > >According to Mike Parker on 9/6/2007 10:11 AM: >> Eric; >> >> Apologies if you are not the "Volunteer BASH Maintainer"; if not can you >> point me in the right direction to get it submitted properly? >> >> I have seen many postings about "default" extensions defined by PATHEXT. >> I have done a patch to support this. >> >> e.g. >> >> export >PATHEXT is a cmd.com feature, and does not have much precedence in Linux. > >> >> will add file types .ksh (e.g. xx.ksh) and .sh as a found file. This is >> personally helping me migrate away from MKS Korn Shell. >> >> The Patch >> ============================================================================== >> >> >> diff -Nur bash-3.2.postpatch/findcmd.c bash-3.2.new/findcmd.c >> --- bash-3.2.postpatch/findcmd.c 2007-09-04 16:19:46.019666300 +0100 >> +++ bash-3.2.new/findcmd.c 2007-09-06 13:40:19.172250000 +0100 >> @@ -50,6 +50,7 @@ >> static char *_find_user_command_internal __P((const char *, int)); >> static char *find_user_command_internal __P((const char *, int)); >> static char *find_user_command_in_path __P((const char *, char *, int)); >> +static char *find_user_command_in_path_orig __P((const char *, char *, >> int)); >> static char *find_in_path_element __P((const char *, char *, int, int, >> struct stat *)); >> static char *find_absolute_program __P((const char *, int)); >> >> @@ -525,12 +526,55 @@ >> FS_EXISTS: The first file found will do. >> FS_NODIRS: Don't find any directories. >> */ >> + >> +#define PATHEXT_SEP ";:" /* Separators for parsing PATHEXT */ > >I'd rather use just :, as in PATH, rather than defining PATHEXT_SEP; but >that may imply also patching cygwin1.dll to treat PATHEXT similarly to PATH. 
> >> static char * >> find_user_command_in_path (name, path_list, flags) >> const char *name; >> char *path_list; >> int flags; >> { >> + char *found_file; >> + char *pathext; >> + char *file_type; >> + char *trial_name; >> + int name_length; >> + SHELL_VAR *var; >> + >> +/* Use original lookup to find "name" and "name.exe" */ >> + found_file = find_user_command_in_path_orig(name, path_list, flags); >> + if(found_file) return (found_file); >> + >> +/* Not found, step through file types in PATHEXT */ >> +/* PATHEXT follows the Windows format - e.g. ".ksh;.sh;.cmd" */ >> + var = find_variable_internal("PATHEXT", 1); >> + if(var) >> + { >> + pathext = strdup(value_cell(var)); >> + name_length = strlen(name); >> + file_type = strtok(pathext, PATHEXT_SEP); >> + while(file_type) >> + { >> + trial_name = malloc(name_length + strlen(file_type) + 1); >> + strcpy(trial_name, name); >> + strcat(trial_name, file_type); >> + found_file = find_user_command_in_path_orig(trial_name, >> path_list, flags); >> + free(trial_name); >> + if(found_file) break; /* Found - break out of loop */ >> + file_type = strtok((char *)NULL, PATHEXT_SEP); >> + } >> + free(pathext); >> + } >> + return (found_file); >> + >> +} >> + >> +static char * >> +find_user_command_in_path_orig (name, path_list, flags) >> + const char *name; >> + char *path_list; >> + int flags; >> +{ >> char *full_path, *path; >> int path_index, name_len; >> struct stat dotinfo; >> >> End Patch >> ============================================================================== >> >> >> Hope this helps >> > >Thanks for the idea. However, I'm not sure I want to incorporate this >into cygwin at this time, without more support from cygwin1.dll, or at >least without more discussion on the list. I'm impressed with the patch but I don't think it really adheres to the philosophy of Cygwin or Linux. Also, the Cygwin DLL already has enough code to deal with extensions specially. 
We're not going to add more and feed the "Cygwin is slow" fodder. I really am sorry to have to reject the idea when the OP has already gone to some effort but I just don't see this happening. cgf -- Unsubscribe info: Problem reports: Documentation: FAQ:
https://sourceware.org/pipermail/cygwin/2007-September/160817.html
#include <ClpNode.hpp> Collaboration diagram for ClpNode: Definition at line 16 of file ClpNode.hpp. Default constructor. Constructor from model. Destructor. The copy constructor. Applies node to model. Fix on reduced costs. Initial value of integer variable. Definition at line 30 of file ClpNode.hpp. References branchingValue_. Way for integer variable -1 down , +1 up. Return true if branch exhausted. Change state of variable i.e. go other way. Sequence number of integer variable (-1 if none). Definition at line 39 of file ClpNode.hpp. Does work of constructor (partly so gdb will work). Operator =. Initial value of integer variable. Definition at line 73 of file ClpNode.hpp. Referenced by branchingValue(). Factorization. Definition at line 75 of file ClpNode.hpp. Steepest edge weights. Definition at line 77 of file ClpNode.hpp. Status vector. Definition at line 79 of file ClpNode.hpp. Primal solution. Definition at line 81 of file ClpNode.hpp. Dual solution. Definition at line 83 of file ClpNode.hpp. Pivot variables for factorization. Definition at line 85 of file ClpNode.hpp. Variables fixed by reduced costs (at end of branch) 0x10000000 added if fixed to UB. Definition at line 87 of file ClpNode.hpp. State of branch. Definition at line 89 of file ClpNode.hpp. Sequence number of integer variable (-1 if none). Definition at line 91 of file ClpNode.hpp. Referenced by sequence(). Number fixed by reduced cost. Definition at line 93 of file ClpNode.hpp.
http://www.coin-or.org/Doxygen/CoinAll/class_clp_node.html
This is your resource to discuss support topics with your peers, and learn from each other. 02-18-2013 11:56 AM How do I put the Z10 on mute quickly prior to a meeting or during a meeting. The iPhone has a special button on the side that you can quickly slide down to mute. I don't see such a button on the Z10 and it seems that the only way is to go to settings everytime. Thank you Solved! Go to Solution. 02-18-2013 12:03 PM Try the volume control button on the side... set it to zero, then push it again... that works on mine (even when I don't want it to... ;-) ) 02-18-2013 12:07 PM 02-18-2013 12:09 PM Hi brock-n. Yes I suppose that is a solution but I am sure that they have thought of this. I was just looking in the manual and it seems that the middle button between the two volume toggles is a mute button. I think you need to hold it in for 2 seconds. I was wondering if you can try it out. Unfortunately I am having second thoughts about my Z10 since I have been a Mac user for years and some of my contacts and calendars are not coming in as I would have liked. I have packaged my Z10 for return (since I have a limited time to do so and wiped it clean). Could you try the suggestion above and let me know if it works. Thanks 02-18-2013 12:18 PM 02-18-2013 12:19 PM The middle button brings up voice control on my device, but I suspect there would be a setting somewhere to control what button does what. However, I find the 'down volume' button to be more logical. Push and hold to go to zero, then push once more to mute. That way you won't find yourself accidentally muting and missing alerts. I was having that problem with the Otterbox Defender, trying to get the phone in and out of the holster and hitting the volume button too many times. Returned that box and now am much happier. As well, you'd need a way to turn it back on again... 'up volume' does that for me and is also a logical method. 
If you're having issues with contacts etc., I suggest you try asking that question directly (if you haven't already). I've been very happy with how my information came over. I think I saw a comment somewhere about importing your data to gmail, then pulling it to the phone... that might be an option. 02-18-2013 12:24 PM Thank you brock-n and dmbenez10 for your quick responses and suggestions. I have closed this post. Brock-n, as you suggested, I have raised a new post for porting over my Mac contacts. I would really like to keep my z10. I have been following it since November and really like the new capabiities Blackberry has created for it. 02-18-2013 12:34 PM Something to remember... everything BlackBerry sells seems to be a work in progress... ;-) The Z10 is almost fully baked in my opinion, but my PlayBook sure wasn't! However, my point is not to criticize the product, but rather to remind you that they are always fixing and tweaking... what doesn't work the way you want today may very well magically change after an OTA update at some point in the future. Beta testing is one thing, but putting the device in the hands of thousands and thousands of people who WILL find every bizarre combination and permutation of actions to bring about a problem is really the way to improve the product... 02-18-2013 03:31 PM 02-18-2013 04:35 PM Thanks for the reply. Yes, I was aware of that method, but I was looking for a one button solution like on the iPhone. I believe some of the other solutions given above will also work (just keeping the down volume button pressed.)
https://supportforums.blackberry.com/t5/BlackBerry-Z10/Putting-a-Z10-on-mute/m-p/2173993/thread-id/4650