RGB LEDs have grown popular in recent years thanks to their vibrant colors, brightness, and enticing lighting effects. They are used as decorative lighting in homes and offices, in kitchens and gaming setups, and they work well as mood lighting in a kid's playroom or bedroom. Previously, we used WS2812B NeoPixel LEDs and an ARM microcontroller to build a Music Spectrum Visualizer, so do check that out if it interests you. In this project, we are going to use a NeoPixel-based RGB LED matrix shield, an Arduino, and the Blynk application to produce fascinating animation effects and colors, all controllable from the Blynk app. So let's get started!

Adafruit 5X8 NeoPixel Shield for Arduino

The Arduino-compatible NeoPixel Shield contains forty individually addressable RGB LEDs, each with a built-in WS2812B driver, arranged in a 5x8 matrix. Multiple NeoPixel Shields can also be tiled to form a larger display if required. A single Arduino pin is enough to control all the RGB LEDs; in this tutorial, we use pin 6 of the Arduino. In our case, the LEDs are powered from the Arduino's onboard 5V pin, which is sufficient for roughly a third of the LEDs at full brightness. If you need to drive more LEDs, you can cut the onboard trace and power the shield from an external 5V supply through the External 5V terminal.

Understanding the Communication Process Between Blynk App and Arduino

The 8x5 RGB LED matrix used here has forty individually addressable RGB LEDs based on the WS2812B driver, with 24-bit color control, i.e. 16.8 million colors per pixel. It uses a "one-wire control" methodology, which means we can drive the whole LED matrix through a single control pin.
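To make the "24-bit color over one wire" idea concrete, the sketch below packs an RGB triple into the 24-bit word a WS2812B shifts in. In practice the Adafruit library does this for you; this is plain illustrative Python, and the function name is ours, not part of any library. The green-red-blue byte order is the WS2812B's documented wire format.

```python
def pack_grb(r, g, b):
    """Pack an (R, G, B) triple into the 24-bit word a WS2812B expects.

    Each channel is 8 bits; the WS2812B shifts in green first, then red,
    then blue, most significant bit first.
    """
    for channel in (r, g, b):
        if not 0 <= channel <= 255:
            raise ValueError("channel values must be 0-255")
    return (g << 16) | (r << 8) | b

# Full red lands in the middle byte of the GRB word.
print(hex(pack_grb(255, 0, 0)))   # prints 0xff00
```

Forty such 24-bit words, one per LED, are streamed down the single data pin; each LED latches the first word and forwards the rest.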
While working with these LEDs, I went through their datasheet and found that the operating voltage range of the shield is 4 V to 6 V, and the current consumption is about 50 mA per LED at 5 V with red, green, and blue at full brightness. The shield has reverse-voltage protection on the external power pins and a reset button to reset the Arduino. It also has an external power input for the LEDs in case sufficient power is not available through the internal circuitry.

As shown in the schematic diagram above, we need to download and install the Blynk application on our smartphone, from which parameters such as color and brightness can be controlled. Whenever a setting changes in the app, the change is sent to the Blynk cloud, to which our PC is also connected and ready to receive the updated data. The Arduino Uno is connected to the PC via a USB cable with a communication (COM) port open; through this port, data is exchanged between the Blynk cloud and the Arduino UNO. The PC requests data from the Blynk cloud at regular intervals, and when updated data arrives, it forwards it to the Arduino, which then makes the user-defined decisions, such as controlling the RGB LED brightness and colors. The RGB LED shield is stacked on the Arduino and connected via a single data pin for communication, by default the D6 pin. The serial data sent from the Arduino UNO goes to the NeoPixel shield and is then reflected on the LED matrix.

Components Required

- Arduino UNO
- 8*5 RGB LED Matrix shield
- USB A/B cable for Arduino UNO
- Laptop/PC

Adafruit RGB LED Shield and Arduino - Hardware Connection

The WS2812B NeoPixel LEDs have three pins: one for data and two for power. This specific Arduino shield makes the hardware connection very simple; all we have to do is place the NeoPixel LED matrix on top of the Arduino UNO. In our case, the LEDs are powered from the default Arduino 5V rail.
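The 50 mA per LED figure above lets us sanity-check the earlier "about a third of the LEDs" claim. The sketch below is a back-of-the-envelope budget in plain Python; the 500 mA supply figure and the 100 mA headroom reserved for the Arduino itself are assumptions (the actual budget depends on your board and USB port), not values from the article.

```python
def leds_supported(supply_ma, ma_per_led=50, headroom_ma=100):
    """Estimate how many WS2812B LEDs at full white a supply can drive.

    ma_per_led is the datasheet figure quoted above; headroom_ma
    reserves current for the Arduino itself (an assumed value).
    """
    usable = max(supply_ma - headroom_ma, 0)
    return usable // ma_per_led

# The full matrix at full white needs far more than USB can provide:
full_matrix_ma = 40 * 50
print(full_matrix_ma)        # prints 2000 (mA) -> external 5 V supply needed
print(leds_supported(500))   # prints 8
```

This is why the shield exposes an External 5V terminal: full-white, full-brightness operation of all forty LEDs wants around 2 A.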
After placing the NeoPixel Shield, the setup looks like below:

Configuring the Blynk Application

Blynk is an application that runs on Android and iOS devices to control IoT devices and appliances from a smartphone. First of all, a graphical user interface (GUI) needs to be created to control the RGB LED matrix; the application sends all the parameters selected in the GUI to the Blynk cloud. On the receiving side, the Arduino is connected to the PC via a serial communication cable: the PC requests data from the Blynk cloud, and this data is sent to the Arduino for the necessary processing. So, let's get started with the Blynk application setup. Before the setup, download the Blynk application from the Google Play Store (iOS users can download it from the App Store). After installation, sign up using your email ID and a password.

Creating a New Project: After a successful installation, open the application and you will get a screen with the option "New Project". Click on it and a new screen pops up, where we need to set parameters such as the project name, board, and connection type. In our project, select the device as "Arduino UNO" and the connection type as "USB", then click "Create". After the project has been created successfully, an authentication ID is sent to our registered email. Save the authentication ID for future reference.

Creating the Graphical User Interface (GUI): Open the project in Blynk and click on the "+" sign to see the widgets we can use in our project. In our case, we need an RGB color picker, which is listed as "zeRGBa" as shown below.

Setting the Widgets: After dragging the widget into our project, we have to set the parameters used to send the RGB color values to the Arduino UNO. Click on zeRGBa and you will get a screen named "zeRGBa Settings". There, set the Output option to "Merge" and set the pin to "V2", as shown in the image below.
Arduino Code Controlling Adafruit WS2812B RGB LED Shield

After completing the hardware connection, the code needs to be uploaded to the Arduino. A step-by-step explanation of the code follows.

First, include all the required libraries. Open the Arduino IDE, go to the Sketch tab, and click Include Library -> Manage Libraries. Search for Blynk in the search box, then download and install the Blynk package for Arduino UNO. The "Adafruit_NeoPixel.h" library is used to control the RGB LED matrix; you can download the Adafruit_NeoPixel library from the given link and, once you have it, include it with the Include ZIP Library option.

#define BLYNK_PRINT DebugSerial
#include <Adafruit_NeoPixel.h>
#include <SoftwareSerial.h>

Then we define the number of LEDs in our LED matrix and the pin number used to control them:

#define PIN 6
#define NUM_PIXELS 40

Then, we need to put our Blynk authentication ID, which we saved earlier, into an auth[] array:

char auth[] = "HoLYSq-SGJAafQUQXXXXXXXX";

Here, software serial pins are used as the debug console, so the Arduino pins are defined as the debug serial below:

#include <SoftwareSerial.h>
SoftwareSerial DebugSerial(2, 3);

Inside setup(), serial communication is initialized using Serial.begin(), Blynk is connected using Blynk.begin(), and the LED matrix is initialized using pixels.begin():

void setup()
{
  DebugSerial.begin(9600);
  pixels.begin();
  Serial.begin(9600);
  Blynk.begin(Serial, auth);
}

Inside loop(), we use Blynk.run(), which checks for incoming commands from the Blynk GUI and executes the operations accordingly:

void loop()
{
  Blynk.run();
}

In the final stage, the parameters sent from the Blynk application need to be received and processed. In this case, the parameters were assigned to the virtual pin "V2", as discussed earlier in the setup section.
BLYNK_WRITE is an inbuilt function that gets called whenever the associated virtual pin's state or value changes; we can run code inside this function just like any other Arduino function. Here, BLYNK_WRITE is written to check for incoming data on virtual pin V2. As shown in the Blynk setup section, the color pixel data was merged and assigned to the V2 pin, so we have to de-merge it again after decoding, because to control the LED pixel matrix we need the three individual color values: red, green, and blue. As shown in the code below, the three values are read from the parameter array, e.g. param[0].asInt() to get the value of the red color; the other two values are read similarly and stored in three individual variables. These values are then assigned to the pixel matrix using the pixels.setPixelColor() function. The pixels.setBrightness() function is used to control the brightness, and the pixels.show() function displays the set color on the matrix.

Uploading the Code to Arduino Board

First, we need to select the PORT of the Arduino inside the Arduino IDE, then upload the code to the Arduino UNO. After a successful upload, note down the port number, which will be used for our serial communication setup. After this, find the scripts folder of the Blynk library on your PC; it gets installed when you install the library. Mine was in:

C:\Users\PC_Name\Documents\Arduino\libraries\Blynk\scripts

In the scripts folder, there should be a file named "blynk-ser.bat", a batch file used for serial communication, which we need to edit with Notepad. Open the file with Notepad and change the port number to the Arduino port number you noted in the last step. After editing, save the file and run the batch file by double-clicking on it.
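The de-merge step described above is easy to see in isolation. The sketch below mimics, in plain Python (not the Arduino code), what the handler does with the merged zeRGBa payload; the three-integer red, green, blue layout follows the description above, and the function names are ours.

```python
def demerge(param):
    """Split a merged zeRGBa payload into its red, green, and blue values."""
    r, g, b = (int(v) for v in param[:3])
    return r, g, b

def apply_to_matrix(param, num_pixels=40):
    """Return the per-pixel colors the handler would write to the shield.

    Every pixel gets the same color, mirroring the loop over
    pixels.setPixelColor() in the Arduino handler.
    """
    r, g, b = demerge(param)
    return [(r, g, b)] * num_pixels

colors = apply_to_matrix([255, 0, 128])
print(colors[0])    # prints (255, 0, 128)
print(len(colors))  # prints 40
```

On the Arduino side, the equivalent of the list above is a loop that calls pixels.setPixelColor() for each index, followed by pixels.show().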
Then, you should see a window like the one shown below:

Note: If you cannot see the window shown above and you are prompted to reconnect, it might be due to an error in the connection between the PC and the Arduino shield. In that case, check your Arduino's connection with the PC, then check whether the COM port number is showing in the Arduino IDE. If a valid COM port is shown, you are ready to proceed; run the batch file again.

Final Demonstration: Now, it's time to test the circuit and its functionality. Open the Blynk application, open the GUI, and click on the Play button. After that, you can select any color you like to be reflected on the LED matrix. As shown below, in my case I selected red and blue, and they are displayed on the matrix. Similarly, you can try making different animations with these LED matrices by customizing the code a bit.

#define BLYNK_PRINT DebugSerial
#include <Adafruit_NeoPixel.h>
#include <SoftwareSerial.h>
SoftwareSerial DebugSerial(2, 3);
#include <BlynkSimpleStream.h>

char auth[] = "-54csCxRMCSyHxxxxxxxxx";

#define PIN 6
#define NUM_PIXELS 40
Adafruit_NeoPixel pixels(NUM_PIXELS, PIN, NEO_GRB + NEO_KHZ800);

void setup()
{
  DebugSerial.begin(9600);
  pixels.begin();
  Serial.begin(9600);
  Blynk.begin(Serial, auth);
}

void loop()
{
  Blynk.run();
}

BLYNK_WRITE(V2)
{
  // De-merge the zeRGBa values and paint the whole matrix.
  int r = param[0].asInt();
  int g = param[1].asInt();
  int b = param[2].asInt();
  for (int i = 0; i < NUM_PIXELS; i++)
    pixels.setPixelColor(i, pixels.Color(r, g, b));
  pixels.setBrightness(255); // overall brightness (0-255); adjust to taste
  pixels.show();
}
https://circuitdigest.com/microcontroller-projects/controlling-ws2812b-rgb-led-shield-with-arduino-and-blynk
AudioGames.net Forum → Developers room → Heat Engine, a game engine for BGT games

That would definitely be useful for me. My game will have quite a few characters.

The map editor will let you place actors on the map. A contextual menu will then be added to edit actors and modify every configurable attribute they have. As soon as you set a new attribute on your own actors, the editor will be able to place them and let you edit all of their attributes.

What about calling it a "character editor"?

Characters are normal actors; they just have more attributes, including footstep sounds, moving speed and so on. If you are talking about dialogs, you should know that H.E. currently has no dialog system implemented (I can't do everything by myself in just a few weeks!). I'm currently struggling with open/save dialog boxes; no one has made a correct system, at least correct enough to be included.

Ah okay. So we code the actors, but the map editor is something like the interface builder of HE, when completed. That will be awesome, if it eventually gets to that point.

Menu design is already done. I have been slowed by the need to program dialog boxes for things like choosing a file/dir to open/save; I have almost finished making them. After that, I'll finish reworking the geometry system (quite simple, in fact), and then I'll be able to release a simple version of the editor with the updated engine.

Hi, just wondering how things are going. You've been pretty quiet for the past couple of months. Hope the project isn't giving you too much trouble!

How can I use the inventory function included with the Heat Engine in my game? Someone help me.

Hi! Sorry Imaginatrix, I've been very busy these days (I'm programming other games and I have to work), and yes, Heat Engine is giving me trouble.
For a lot of reasons, including:

- the geometry representation (maps and all) is taking more time than expected to rework
- I'm seriously planning to port Heat Engine to Java and stop BGT support

Why? Because Java is fully object oriented, quite easy to understand (classes and interfaces, all that stuff), much more powerful, cross platform, providing a huge standard library, etc. My only problems are sound and screen reader support, and rewriting everything in another language (even if the code will be much simpler for everyone to understand). BGT limits me. Right now, this project looks like a dead end if I don't change the language, I think. Heat Engine has been designed to be powerful and for more complex games, and BGT isn't made for that. I may keep the editor in BGT and make the

@pedrowille: this engine is quite complex, but you can dive into the files and try to adapt the Inventory actor for your own game. If you want to make a game with the engine itself, given the help you asked for in other threads, I don't think it is suited to a complete beginner.

That's a shame. BGT seemed to do about everything I wanted it to, but it apparently struggles with certain things. Even so, I love it simply for the screen-reader support and will try to work out how to code things by hand and get them working. Python's still an option for me, but I just want to make games, not learn everything there is to know about programming and get bored.

It's ok Genroa, no problem. I think I'm sad, but I have been trying to make a game with my own code. But I need some help, and I can't get it in my country or in my language, and in English I need help...

BGT is far from the only option for screen reader support. The Tolk library provides this functionality and has bindings available for numerous languages, including PureBasic, C/C++, Java, C#, etc.
Imaginatrix, Heat Engine in Java (if I do it) will provide the same built-in screen reader support and sound support.

So I'd have to learn Java? I started learning the basics of Python but that got boring fast. If I get bored learning how to do something, I won't be motivated to keep at it. That's why I like BGT so much: it focuses on gaming and gaming alone.

BGT has a C/C++/Java-like syntax. The syntax is almost the same, just a little simpler (there's no handle system), and a lot of ready-to-use classes and tools will be available, far more than in the current BGT version.

BGT version (Robots Factory main file):

#include "HeatEngine\\Game.bgt"

Game game("Robot Factory Demo");

void main()
{!");
    game.scene.setSoundEnvironment();
    if(file_exists("Saves\\save.hemap"))
        game.config.setMapFile("Saves\\save.hemap");
    else
        game.config.setMapFile("Maps\\level1.hemap");
    game.load();
    game.setListener("player");
    FactoryControler@ c1 = FactoryControler("player", true);
    int player1 = game.addLocalControler(c1);
    game.scene.generateGeometry();
    game.run();
    Score@ score = cast<Score>(game.scene.getActorByName("score"));
    RobotBuilder@ rb = cast<RobotBuilder>(game.scene.getActorByName("rb"));
    alert("Score", score.getGoodEscaped()+" good robots were left intact on "+(rb.maxRobots-rb.totalFaulty)+"\n"+score.getBadEscaped()+" bad robot escaped on "+rb.totalFaulty+"!!");
}

Java version of the same file:

import HeatEngine.Game;

class MyGame extends Game
{
    public static void main(String[] args)
    {
        Game myGame = new MyGame("Robot Factory Demo");
        GameSystem!");
        myGame.load("Saves"+File.separator+"save.hemap");
        myGame.setListener("player");
        myGame.addController(new FactoryController("player", "c1", true));
        myGame.run();
        FactoryScore score = (FactoryScore) myGame.getScoreObject();
        RobotBuilder rb = (RobotBuilder) myGame.getScene().getActorByName("rb");
        alert("Score", score.getGoodEscaped()+" good robots were left intact on "+(rb.maxRobots-rb.totalFaulty)+"\n"+score.getBadEscaped()+" bad robot escaped on "+rb.totalFaulty+"!!");
    }
}

There are not many changes! And one other cool thing, for people who may have tried to use the BGT version of the engine: creating a new custom actor required overriding a lot of functions. In the Java version, there won't be anything else to do, thanks to reflection!

Just finished a little Java lib using LWJGL/JOAL, providing 3D sound based on the OpenAL technology and allowing you to play sounds in two lines. Easier than I thought; I'm asking myself why I used BGT for so long! ...I am now considering moving to Java. A little main method playing a looping sound, using my little lib:

SoundSystem.initialize();
Sound testSound = new Sound("sounds\\good_robot.wav");
testSound.playLooped();
Thread.sleep(10000);
SoundSystem.destroy();

There are methods letting you choose the sound position and velocity (for the Doppler effect) with 3D vectors; one line for each method.

Hey all. I am trying to learn the actual BGT language, but it looks like this, the BGT toolkit, is just something to compile with. How can I actually learn the language?

There's a tutorial in the manual.

I thought this thing was dead. Where can I download it? The site in post #1 is down.

How can I download this thing? None of the links work.

Well, help time. I got this one a static download link. Let's go downloading!

I'm actually impressed anyone tried to use my project. When I released it, I almost never got any feedback. If someone still wants to use it, I still have the latest up-to-date demos and archives.

I would be interested in trying this, if it is easy to write for. Although I am looking to create an interactive-fiction type audio game, I would give this a shot.

I've actually been trying to find a working link to this project for a while now. I'd love to try it and see what I can make with it. If anyone has it uploaded somewhere, that would be great.
This engine is far from finished, but to prove you can already do quite a few things with it in BGT, I built one little demo game and another demo showing a first-person controller with firearms to use and test. The demo is complete with weapon ammunition, reloading behavior and all. Here are new links for every broken link:

Link to the engine: Link to engine source
Link to the engine documentation: Link to engine doc
Link to the factory game demo: Link to factory game demo
Link to the weapon demo: Link to weapon demo

I will post more information about the demos and the engine here, if you want. If you decide to use it, even though it's written in BGT and the current stable version hasn't implemented full 3D sound with FMOD yet, I can help you with your project. But I would love to build a project with the people on this forum to make a new engine, in a more modern and maintained language...
http://forum.audiogames.net/viewtopic.php?id=15395&p=4
Getting Started with ROS

Robotics is one of the upcoming technologies that can change the world. Robots can replace people in many ways, and we are all afraid of them stealing our jobs. One thing is for sure: robotics will be one of the influential technologies of the future. When a new technology gains momentum, the opportunities in that field also increase. This means that robotics and automation can generate a great many opportunities, and one of the key skills needed to take advantage of them is a working knowledge of the Robot Operating System (ROS).

In this chapter, we will take a look at the abstract concepts behind ROS and how to install it, along with an overview of simulators and its use on virtual systems. We will then cover the basic concepts of ROS, along with the different robots, sensors, and actuators that support ROS. We will also look at ROS with respect to industry and research. This entire book is dedicated to ROS projects, so this chapter will be a kick-start guide for those projects and help you set up ROS.

The following topics are going to be covered in this chapter:

- Getting started with ROS
- Fundamentals of ROS
- ROS client libraries
- ROS tools
- ROS simulators
- Installing ROS
- Setting up ROS on VirtualBox
- Introduction to Docker
- Setting up the ROS workspace
- Opportunities for ROS in industries and research

So, let's get started with ROS.

Technical requirements

Let's look at the technical requirements for this chapter:

- ROS Melodic Morenia on Ubuntu 18.04 (Bionic)
- VMware and Docker
- Timelines and test platform:
  - Estimated learning time: on average, 65 minutes
  - Project build time (inclusive of compile and run time): on average, 60 minutes
  - Project test platform: HP Pavilion laptop (Intel® Core™ i7-4510U CPU @ 2.00 GHz × 4 with 8 GB memory and 64-bit OS, GNOME-3.28.2)

ROS is an open source framework backed by a worldwide community of developers and researchers. This active developer ecosystem distinguishes ROS from other robotic frameworks. In short, ROS is the combination of Plumbing (or communication), Tools, Capabilities, and Ecosystem.
These capabilities are demonstrated in the following diagram:

The ROS project was started in 2007 at Stanford University under the name Switchyard. Later, in 2008, development was taken over by a robotics research startup called Willow Garage, where the major development of ROS happened. In 2013, the Willow Garage researchers formed the Open Source Robotics Foundation (OSRF), which actively maintains ROS now. Now, let's look at a few ROS distributions.

ROS distributions are versioned collections of ROS packages, released in step with their respective Ubuntu versions. The following are some of the latest ROS distributions (at the time of writing) that are recommended for use from the ROS website ():

The latest ROS distribution is Melodic Morenia, which will be supported until May 2023. One problem with the latest ROS distribution is that many packages will not be available for it right away, because it takes time to migrate them from the previous distribution. If you are looking for a stable distribution, you can go for ROS Kinetic Kame, because that distribution started in 2016 and most packages are available for it. The ROS Lunar Loggerhead distribution will stop being supported in May 2019, so I do not recommend that you use it.

Supported OSes

The main OS that ROS is tuned for is Ubuntu, and ROS distributions are planned according to Ubuntu releases. Other than Ubuntu, ROS is partially supported on Ubuntu ARM, Debian, Gentoo, macOS, Arch Linux, Android, Windows, and OpenEmbedded. This table shows new ROS distributions and the specific versions of the supporting OSes:

ROS Melodic and Kinetic are Long-Term Support (LTS) distributions that come with LTS versions of Ubuntu. The advantage of using an LTS distribution is that we get the maximum lifespan and support. We will look at a few robots and sensors supported by ROS in the next section.
Robots and sensors supported by ROS

The ROS framework is one of the most successful robotics frameworks, and universities around the globe contribute to it. Because of its active ecosystem and open source nature, ROS is used in a majority of robots and is compatible with major robotic hardware and software. Here are some of the most famous robots completely running on ROS:

The robots in the preceding images are Pepper (a), REEM-C (b), Turtlebot (c), Robonaut (d), and Universal Robots (e). The robots supported by ROS are listed at the following link:. The following are the links where you can get the ROS packages of these robots:

- Pepper:
- REEM-C:
- Turtlebot 2:
- Robonaut:
- Universal robotic arms:

Some popular sensors that support ROS are as follows:

The sensors in the preceding image are Velodyne (a), ZED Camera (b), Teraranger (c), Xsens (d), Hokuyo laser range finder (e), and Intel RealSense (f). The list of sensors supported by ROS is available at the following link:. The following are the links to the ROS wiki pages of these sensors:

- Velodyne (a):
- ZED Camera (b):
- Teraranger (c):
- Xsens (d):
- Hokuyo laser range finder (e):
- Intel RealSense (f):

Now, let's look at the advantages of using ROS.

Why use ROS?

- Free and open source: As we've already discussed, ROS is open source and free to use for industry and research. Developers can expand the functionality of ROS by adding packages. Almost all ROS packages work on a hardware abstraction layer, so they can easily be reused for other robots. So, if one university is good at mobile navigation and another at robotic manipulators, they can contribute their work to the ROS community, and other developers can reuse their packages and build new applications.

- Language support: The ROS communication framework can be easily implemented in any modern programming language. It already supports popular languages such as C++, Python, and Lisp, and it has experimental libraries for Java and Lua.
- Library integration: ROS has interfaces to many third-party robotics libraries, such as Open Source Computer Vision (OpenCV), Point Cloud Library (PCL), Open-NI, Open-Rave, and Orocos. Developers can work with any of these libraries without much hassle.

- Customizability: As already discussed, ROS is completely open source and free, so we can customize the framework as per the robot's requirements. If we only want to work with the ROS messaging platform, we can remove all of the other components and use only that. We can even customize ROS for a specific robot for better performance.

- Community: ROS is a community-driven project, mainly led by OSRF. The large community support is a great plus for ROS and means we can easily start robotics application development.

The following are the URLs of libraries and simulators that can be integrated with ROS:

- Open-CV:
- PCL:
- Open-NI:
- Open-Rave:
- Orocos:
- V-REP:

Let's go through some of the basic concepts of ROS; if you feel that a topic is missing from this chapter, rest assured that it will be covered in a corresponding chapter later. There are three different levels of concepts in ROS. Let's take a look at them.

The filesystem level

The filesystem level explains how ROS files are organized on the hard disk:

As you can see from the preceding diagram, the filesystem in ROS can be categorized mainly into metapackages, packages, package manifests, messages, services, code, and miscellaneous files. The following is a short description of each component:

- Metapackages: Metapackages group a list of packages for a specific application. For example, in ROS, there is a metapackage called navigation for mobile robot navigation. It holds the information of related packages and helps install those packages during its own installation.

- Packages: The software in ROS is mainly organized as ROS packages. We can say that ROS packages are the atomic build units of ROS.
A package may consist of ROS nodes/processes, datasets, and configuration files, all organized in a single module.

- Package manifest: Inside every package is a manifest file called package.xml. This file contains information such as the name, version, author, license, and dependencies of the package.

- Message (msg): ROS nodes exchange data in the form of messages, whose types are defined in .msg files. Here, we are going to follow a convention where we put the message files under our_package/msg/message_files.msg.

- Service (srv): Another computation graph level concept is services. Similar to ROS messages, the convention is to put service definitions under our_package/srv/service_files.srv.

This sums up the ROS filesystem.

The computation graph level

The ROS computation graph is a peer-to-peer network that processes all the information together. The ROS graph concept constitutes nodes, topics, messages, master, parameter server, services, and bags:

The preceding diagram shows the various concepts in the ROS computational graph. Here is a short description of each concept:

- Nodes: ROS nodes are simply processes that use ROS APIs to communicate with each other. A robot may have many nodes to perform its computations. For example, an autonomous mobile robot may have a node each for hardware interfacing, reading the laser scan, and so on.

- Master: The ROS Master keeps the details of all the nodes running in the ROS environment. It exchanges the details of one node with another in order to establish a connection between them. After exchanging this information, the nodes connect and send and receive data in the form of ROS messages.

- Messages and topics: A ROS message is a data structure that nodes use to exchange data. A node publishes messages on a named topic, and another node can read from the topic by subscribing to it.

- Services: Services are another kind of communication method, based on a request/reply interaction between two nodes rather than a one-way stream.

- Bags: Bags record ROS message data so that it can be played back later; we can copy the bag file to other computers to inspect data by playing it back.

This sums up the computational graph concept.

The ROS community level

The ROS community has grown considerably compared to when it was introduced. You can find at least 2,000+ packages being actively supported, altered, and used by the community.
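As an illustration of the package manifest described above, a minimal package.xml might look like the following. The package name, maintainer, and dependency list here are placeholders invented for this sketch, not taken from the chapter; the tag names follow the standard catkin manifest format:

```xml
<?xml version="1.0"?>
<package format="2">
  <name>my_robot_driver</name>
  <version>0.1.0</version>
  <description>Illustrative example of a package manifest</description>
  <maintainer email="dev@example.com">Example Maintainer</maintainer>
  <license>BSD</license>

  <!-- Build tool plus run/build dependencies -->
  <buildtool_depend>catkin</buildtool_depend>
  <depend>roscpp</depend>
  <depend>std_msgs</depend>
</package>
```

Tools such as rosdep read this file to install the listed dependencies, which is how a metapackage can pull in its related packages during installation.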
The community level comprises the ROS resources for sharing software and knowledge:

Here is a brief description of each section:

- Distributions: ROS distributions are versioned collections of ROS packages, like Linux distributions.
- Repositories: ROS-related packages and files depend on a version control system (VCS) such as Git, SVN, or Mercurial, through which developers around the world can contribute to the packages.
- ROS Wiki: The ROS community wiki is the knowledge center of ROS, where anyone can create documentation for their packages. You can find standard documentation and tutorials about ROS on the ROS wiki.
- Mailing lists: Subscribing to the ROS mailing lists keeps you up to date with new releases and discussions ().

Now, let's learn how communication is carried out in ROS.

Communication in ROS

Let's learn how two nodes communicate with each other using ROS topics. The following diagram shows how this happens:

As you can see, there are two nodes, named talker and listener. The talker node publishes a string message called Hello World on a topic called /talker, while the listener node subscribes to this topic. Let's see what happens at each stage, marked (1), (2), and (3) in the preceding diagram:

- Before running any nodes in ROS, we should start the ROS Master. After it has been started, it waits for nodes. When the talker node (publisher) starts running, it connects to the ROS Master and exchanges the publishing topic details with the master, including the topic name, message type, and publishing node URI. The URI of the master is a global value, and all the nodes can connect to it. The master maintains a table of the publishers connected to it; whenever a publisher's details change, the table updates automatically.

- When we start the listener node (subscriber), it connects to the master and exchanges the details of the node, such as the topic to subscribe to, its message type, and the node URI. The master maintains a table of subscribers, similar to the publisher table.
- Whenever there are a subscriber and a publisher for the same topic, the master node exchanges the publisher URI with the subscriber. This helps both nodes connect and exchange data. After they've connected, the master plays no further role: the data does not flow through the master; instead, the nodes are interconnected and exchange messages directly.

More information on nodes, their namespaces, and their usage can be found here:.

Now that we know the fundamentals of ROS, let's look at a few ROS client libraries.

ROS client libraries

ROS client libraries are used to write ROS nodes. All of the ROS concepts are implemented in the client libraries, so we can just use them without implementing everything from scratch. We can implement ROS nodes with a publisher and subscriber using these libraries:

- roscpp: This is the C++ client library, commonly used where performance matters.
- rospy: This is the Python client library. Its advantage is its ease of prototyping, which means that development time isn't as long. It is not recommended for high-performance applications, but it is perfect for non-critical tasks.
- roslisp: This is the client library for LISP and is commonly used to build robot planning libraries.

Details of all the ROS client libraries can be found at the following link:.

The next section will give us an overview of the different ROS tools.

ROS tools

ROS has a variety of GUI and command-line tools to inspect and debug messages. These tools come in handy when you're working on a complex project involving a lot of package integration. They help identify whether topics and messages are being published in the right format and are available to the user as desired. Let's look at some commonly used ones.

ROS Visualizer (RViz)

RViz () is one of the 3D visualizers available in ROS that can visualize 2D and 3D values from ROS topics and parameters. RViz helps to visualize data such as robot models, robot 3D transform data (TF), point clouds, laser and image data, and a variety of other sensor data:

The preceding screenshot shows a 3D point cloud scan from a Velodyne sensor placed on an autonomous car.
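The three-stage handshake described earlier can be mimicked in a few lines of plain Python. This is an illustrative toy, not the real ROS API: a registry object stands in for the master's publisher and subscriber tables, and a direct callback stands in for the node-to-node connection the master brokers.

```python
class ToyMaster:
    """A stand-in for the ROS Master's lookup tables (illustrative only)."""

    def __init__(self):
        self.publishers = {}    # topic -> publishing node name
        self.subscribers = {}   # topic -> list of message callbacks

    def register_publisher(self, topic, node_name):
        # Stage (1): the talker announces its topic to the master.
        self.publishers[topic] = node_name

    def register_subscriber(self, topic, callback):
        # Stage (2): the listener announces its subscription.
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        # Stage (3): once publisher and subscriber are matched, data flows
        # node-to-node; the master only brokered the introduction (this toy
        # shortcuts the direct connection by invoking callbacks itself).
        for callback in self.subscribers.get(topic, []):
            callback(message)

master = ToyMaster()
received = []
master.register_publisher("/talker", "talker")
master.register_subscriber("/talker", received.append)
master.publish("/talker", "Hello World")
print(received)   # prints ['Hello World']
```

In real ROS, the publish call would go over a direct TCPROS socket between the two nodes rather than through the master object.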
rqt_plot
The rqt_plot program () is a tool for plotting scalar values that are published as ROS topics. We can provide a topic name in the Topic box:
The preceding screenshot is a plot of a pose from the turtle_sim node.
rqt_graph
The rqt_graph () ROS GUI tool can visualize the graph of interconnections between ROS nodes:
The complete list of ROS tools is available at the following link: .
Since we now have a brief idea of the ROS tools, we can cover different ROS simulators.
ROS simulators
One of the open source robotic simulators tightly integrated with ROS is Gazebo (). Gazebo is a dynamic robotic simulator that has a wide variety of robot models and extensive sensor support. The functionality of Gazebo can be extended via plugins. The sensor values can be accessed by ROS through topics, parameters, and services. Gazebo can be used when your simulation needs full compatibility with ROS. Most robotics simulators are proprietary and expensive; if you can't afford them, you can use Gazebo directly without any issues:
The preceding screenshot shows a PR2 robot model from OSRF. You can find the model, in the description folder, at the following link: .
Now that we know about the simulators of ROS, we can begin installing ROS Melodic on Ubuntu.
Installing ROS Melodic on Ubuntu 18.04 LTS
As we have already discussed, there is a variety of ROS distributions available to download and install, so choosing the exact distribution for our needs may be confusing. The following are the answers to some of the questions that are asked frequently when choosing a distribution:
- Which distribution should I choose to get maximum support? Answer: If you are interested in getting maximum support, choose an LTS release. It will be good if you choose the second-most recent LTS distribution.
- I need the latest features of ROS; which should I choose? Answer: Go for the latest version; you may not get the latest complete packages immediately after the release. You may have to wait a few months after the release.
This is because of the migration period from one distribution to another. In this book, we are dealing with two LTS distributions: ROS Kinetic, which is a stable release, and ROS Melodic, the latest one. Our chapters will use ROS Melodic Morenia.
Getting started with the installation
Go to the ROS installation website (). You will see a screen listing the latest ROS distributions:
You can get the complete installation instructions for each distribution if you click on ROS Kinetic Kame or ROS Melodic Morenia. We'll now step through the instructions to install the latest ROS distribution.
Configuring Ubuntu repositories
We are going to install ROS Melodic on Ubuntu 18.04 from the ROS package repository. The repository contains prebuilt binaries of ROS in .deb format. To be able to use packages from the ROS repository, we have to configure the repository options of Ubuntu first. The details of the different kinds of Ubuntu repositories can be found at .
To configure the repositories, perform the following steps:
- First, search for Software & Updates in the Ubuntu search bar:
- Click on Software & Updates and enable all of the Ubuntu repositories, as shown in the following screenshot:
Now that we've enabled the repositories, we can move on to the next step.
Setting up sources.list
The next step is to allow apt to fetch ROS packages from the ROS repository server, called packages.ros.org. The ROS repository server details have to be fed into sources.list, which is in /etc/apt/. The following command will do this for ROS Melodic:
$ sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/ros-latest.list'
Setting up keys
Next, add the GPG key of the ROS repository so that apt can verify the packages it downloads:
$ sudo apt-key adv --keyserver 'hkp://keyserver.ubuntu.com:80' --recv-key 421C365BD9FF1F717815A3895523BAEEB01FA116
Now, we are sure that we are downloading from an authorized server.
Installing ROS Melodic
Now, we are ready to install ROS packages on Ubuntu. Follow these steps to do so:
- The first step is to update the list of packages on Ubuntu.
You can use the following command to update the list:
$ sudo apt-get update
This will fetch all the packages from the servers that are listed in sources.list.
- After getting the package list, we have to install the entire ROS package suite using the following command:
$ sudo apt-get install ros-melodic-desktop-full
- After the installation, we have to initialize rosdep, which enables us to easily install the system dependencies of the packages that we are going to compile. This tool is also necessary for some core components of ROS. The following commands launch rosdep:
$ sudo rosdep init
$ rosdep update
Here, when the first command is called, a file called 20-default.list is created in /etc/ros/rosdep/sources.list.d/, with a list of links that connect to the respective ros-distros.
Setting up the ROS environment
To access ROS commands and packages, we have to source the /opt/ros/<ros_version>/setup.bash file. Here's the command to do so:
$ source /opt/ros/melodic/setup.bash
But in order to get the ROS environment in multiple Terminals, we should add the command to the .bashrc script, which is in the home folder. The .bashrc script will be sourced whenever a new Terminal opens:
$ echo "source /opt/ros/melodic/setup.bash" >> ~/.bashrc
$ source ~/.bashrc
You can check whether the installation was successful by running the following commands:
- Open a Terminal window and run the roscore command:
$ roscore
- Run a turtlesim node in another Terminal:
$ rosrun turtlesim turtlesim_node
If everything is correct, you will see the following output:
If you respawn the turtlesim node a couple of times, you should see the turtle change.
We have now successfully installed ROS on Ubuntu. Now, let's learn how to set up ROS on VirtualBox.
Setting up ROS on VirtualBox
As you know, complete ROS support is only present on Ubuntu. So, what about Windows and macOS? One option is to run Ubuntu as a guest OS in VirtualBox. You can download VirtualBox for popular OSes from the following link: .
The complete installation procedure for Ubuntu on VirtualBox is shown in the following tutorial video on YouTube: .
The following is a screenshot of the VirtualBox GUI. You can see the virtual OS list on the left-hand side and the virtual PC configuration on the right-hand side.
The buttons for creating a new virtual OS and starting the existing VirtualBox are on the top panel. The optimal virtual PC configuration is shown in the following screenshot:
Here are the main specifications of the virtual PC:
- Number of CPUs: 1
- RAM: 4 GB
- Display | Video Memory: 128 MB
- Acceleration: 3D
- Storage: 20 GB to 30 GB
- Network adapter: NAT
In order to have hardware acceleration, you should install drivers from the VirtualBox Guest Additions disc. After booting into the Ubuntu desktop, navigate to Devices | Insert Guest Additions CD Image. This will mount the CD image in Ubuntu and ask the user to run the script to install drivers. If we allow it, it will automatically install all the drivers. We now have ROS set up on VirtualBox. The next section is an introduction to Docker.
Introduction to Docker
Docker is a piece of free software, and also the name of the company that introduced it to the open source community. You might have heard of virtual environments in Python, where you can create isolated environments for your projects and install dedicated dependencies that do not cause any trouble with other projects in other environments. Docker is similar: we can create isolated environments for our projects called containers. Containers behave like virtual machines but are architecturally quite different. While virtual machines need a separate OS on top of the hardware layer, containers do not; they work independently on top of the hardware layer, sharing the resources of the host machine. This helps us consume less memory, and containers are often speedier than virtual machines. The difference between the two is best shown here:
Now that we know the difference between a virtual machine and Docker, let's understand why we use Docker.
Why Docker?
In ROS, a project may consist of several metapackages that contain subpackages, and those need dependencies to work.
It can be quite annoying for a developer to set up packages in ROS, as it is quite common for different packages to depend on different versions of the same dependencies, which can lead to compilation issues. The best example would be when we want to use OpenCV3 with ROS Indigo while working with vision algorithms, or gazebo_ros_controller packages with different plugin versions, causing the famous gravity error (). By the time the developer tries to rectify these issues, he/she might end up losing other working projects due to replaced packages or dependency version changes. While there might be different ways to handle this problem, a practical way to go about it in ROS is to use Docker containers. Containers are fast and can start or stop, like any process in an OS, in a matter of seconds. Any upgrades or updates to the OS or its packages would not affect the containers inside, or other containers in place.
Installing Docker
Docker can be installed in two ways: using the Ubuntu repositories or using the official Docker repository:
- If you would just like to explore and save a couple of minutes with a single-line installation, go ahead and install from the Ubuntu repository.
- If you would like to explore more options with Docker other than what is intended in this book, I would suggest that you go ahead and install from the official Docker repositories, as they will have the most stable and bug-fixed packages with added features.
Installing from the Ubuntu repository
To install Docker from the Ubuntu repository, use the following command:
$ sudo apt-get install docker.io
If you've changed your mind and would like to try installing from the Docker repository, or if you wish to remove the Docker version you installed via the preceding step, move on to the next step.
Removing Docker
If you're not interested in the old Docker version and want to install the latest stable version, remove Docker using the following command and install it from the Docker repository:
$ sudo apt-get remove docker docker-engine docker.io containerd runc
Installing from the Docker repository
To install Docker from the official repository, follow these steps:
- First, install the packages that allow apt to use a repository over HTTPS:
$ sudo apt-get install apt-transport-https ca-certificates curl gnupg-agent software-properties-common
- Then, add the official GPG key from Docker:
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
- Set up the Docker repository using the following command:
$ sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable"
- Update the apt package index once again:
$ sudo apt-get update
- Now, install the Docker package using the following command:
$ sudo apt install docker-ce
- After installing via either method, you can check the version of Docker for both types of installation using the following command:
$ docker --version
The current version that is available in the Ubuntu repository is 17.12, while the latest release version at the time of writing this book is 18.09 (stable version).
Docker can only be run as a root user by default. Hence, add your username to the docker group using the following command:
$ sudo usermod -aG docker ${USER}
Ensure you reboot the system for the preceding change to take effect; otherwise, you will face a permission denied error, as shown here:
A quick fix to the preceding error would be to use sudo before any Docker commands.
Working with Docker
Containers are built from Docker images, and these images can be pulled from Docker Hub (). We can pull ROS containers from the ros repository using the following command:
$ sudo docker pull ros:melodic-ros-core
If everything is successful, you will see the output shown here:
You can choose the specific version of ROS you want to work with.
The best suggestion for any application is to start with melodic-ros-core, where you would continue to work on and update the container related to your project goal, and not have other unnecessary components installed. You can view Docker images using this command:
$ sudo docker images
By default, all of the containers are saved in /var/lib/docker. Using the preceding command, you can identify the repository name and tag. In my case, for the ros repository name, my tag was melodic-ros-core; hence, you could run the ros container using the following command:
$ sudo docker run -it ros:melodic-ros-core
Other information that the $ docker images command gives is the image ID, which is 7c5d1e1e5096 in my case. You will need it when you want to remove the image. Once you're inside Docker, you can check the ROS packages that are available using the following command:
$ rospack list
When you run and exit Docker, you will have created another container, so for beginners, it's quite common to create a list of containers unknowingly. You can use $ docker ps -a or $ docker ps -l to view all active/inactive containers or the latest container, and remove containers using $ docker rm <docker_name>. To continue working in the same container, you can use the following command:
$ sudo docker start -a -i silly_volhard
Here, silly_volhard is the default name created by Docker. Now that you've opened the same container, let's install a ROS package and commit the changes to the Docker image. Let's install the actionlib_tutorials package using the following commands:
$ apt-get update
$ apt-get install ros-melodic-actionlib-tutorials
Now, when you check the ROS packages list once again, you should be able to view a few extra packages. Since you have modified the container, you will need to commit it in order to keep the modifications when reopening the Docker image.
Exit the container and commit it using the following command:
$ sudo docker commit 7c5d1e1e5096 ros:melodic-ros-core
Now that we have installed ROS on Ubuntu, VirtualBox, and Docker, let's learn how to set up the ROS workspace.
Setting up the ROS workspace
After setting up ROS on a real PC, VirtualBox, or Docker, the next step is to create a workspace in which to keep and build ROS packages:
- The first step is to create an empty workspace folder and another folder called src to store the ROS packages in. The following command will do this for us. The workspace folder name here is catkin_ws:
$ mkdir -p catkin_ws/src
- Switch to the src folder and execute the catkin_init_workspace command. This command will initialize a catkin workspace in the current src folder. We can now start creating packages inside the src folder:
$ cd ~/catkin_ws/src
$ catkin_init_workspace
- After initializing the catkin workspace, we can build the packages inside the workspace using the catkin_make command. We can also build the workspace without any packages:
$ cd ~/catkin_ws/
$ catkin_make
- This will create additional folders called build and devel inside the ROS workspace:
- Once you've built the workspace, to access packages inside the workspace, we should add the workspace environment to our .bashrc file using the following commands:
$ echo "source ~/catkin_ws/devel/setup.bash" >> ~/.bashrc
$ source ~/.bashrc
- When everything is done, you can verify that everything is correct by executing the following command:
$ echo $ROS_PACKAGE_PATH
This command will print the entire ROS package path. If your workspace path is in the output, you are done:
You will see that two locations are sourced in ROS_PACKAGE_PATH. The former is the recent addition we made in step 5, and the latter is the folder of the actual installed ROS packages.
With this, we have set up the ROS workspace. We will now look at the different opportunities for ROS in industries and research.
Opportunities for ROS in industries and research
ROS provides a solid framework for programming all kinds of robots. So, robots in universities and industries mainly use ROS.
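Returning to the workspace setup above: the effect of sourcing the two setup.bash files is simply to prepend the workspace's source folder to ROS_PACKAGE_PATH, ahead of the system-wide packages. The following is a conceptual Python sketch of that layering (the paths are illustrative, and the function is hypothetical — in reality this is done by the shell scripts themselves):

```python
import os

# Conceptual sketch: sourcing a workspace's setup.bash prepends its src
# folder to ROS_PACKAGE_PATH, so workspace packages shadow system ones.
def source_workspace(env, workspace_src):
    current = env.get("ROS_PACKAGE_PATH", "")
    env["ROS_PACKAGE_PATH"] = workspace_src + ((":" + current) if current else "")
    return env

# State after sourcing /opt/ros/melodic/setup.bash.
env = {"ROS_PACKAGE_PATH": "/opt/ros/melodic/share"}

# State after additionally sourcing ~/catkin_ws/devel/setup.bash.
env = source_workspace(env, os.path.expanduser("~/catkin_ws/src"))
print(env["ROS_PACKAGE_PATH"])
# e.g. /home/user/catkin_ws/src:/opt/ros/melodic/share
```

This is why `echo $ROS_PACKAGE_PATH` prints two locations: the workspace addition first, then the installed ROS packages folder.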
Here are some famous robotics companies using ROS for their robots:
- Fetch Robotics:
- Clearpath Robotics:
- PAL Robotics:
- Yujin Robot:
- DJI:
- ROBOTIS:
Knowledge of ROS will help you land a robotics application engineering job easily. If you go through the skillset of any job related to robotics, you're bound to find ROS in it. There are independent courses and workshops in universities and industries that teach ROS development for robots. Knowing ROS will help you to get internships as well as MS, Ph.D., and postdoc opportunities at prestigious robotics institutions such as CMU's Robotics Institute () and UPenn's GRASP lab ().
The following chapters will help you build a practical foundation and core skills in ROS.
Summary
This chapter was an introductory chapter to get you started with robotics application development using ROS. The main aim of this chapter was to get you up and running with ROS. We also saw that a lot of companies and universities are looking for ROS developers for different robotics applications. From the next chapter onward, we will discuss ROS-2 and its capabilities.
https://www.packtpub.com/product/ros-robotics-projects-second-edition/9781838649326
Dask on Ray
Dask is a Python parallel computing library geared towards scaling analytics and scientific computing workloads. It provides big data collections that mimic the APIs of the familiar NumPy and Pandas libraries, allowing those abstractions to represent larger-than-memory data and/or allowing operations on that data to be run on a multi-machine cluster, while also providing automatic data parallelism, smart scheduling, and optimized operations. Operations on these collections create a task graph, which is executed by a scheduler. Ray provides a scheduler for Dask (dask_on_ray) which allows you to build data analyses using Dask's collections and execute the underlying tasks on a Ray cluster.
dask_on_ray uses Dask's scheduler API, which allows you to specify any callable as the scheduler that you would like Dask to use to execute your workload. Using the Dask-on-Ray scheduler, the entire Dask ecosystem can be executed on top of Ray.
Note: We always ensure that the latest Dask versions are compatible with Ray nightly. The table below shows the latest Dask versions that are tested with Ray versions.
Scheduler
The Dask-on-Ray scheduler can execute any valid Dask graph, and can be used with any Dask .compute() call. Here's an example:

import ray
from ray.util.dask import ray_dask_get, enable_dask_on_ray, disable_dask_on_ray
import dask.array as da
import dask.dataframe as dd
import numpy as np
import pandas as pd

# Start Ray.
# Tip: If connecting to an existing cluster, use ray.init(address="auto").
ray.init()

d_arr = da.from_array(np.random.randint(0, 1000, size=(256, 256)))

# The Dask scheduler submits the underlying task graph to Ray.
d_arr.mean().compute(scheduler=ray_dask_get)

# Use our Dask config helper to set the scheduler to ray_dask_get globally,
# without having to specify it on each compute call.
enable_dask_on_ray()

df = dd.from_pandas(
    pd.DataFrame(
        np.random.randint(0, 100, size=(1024, 2)), columns=["age", "grade"]),
    npartitions=2)
df.groupby(["age"]).mean().compute()

disable_dask_on_ray()

# The Dask config helper can be used as a context manager, limiting the scope
# of the Dask-on-Ray scheduler to the context.
with enable_dask_on_ray():
    d_arr.mean().compute()

ray.shutdown()

Note: For execution on a Ray cluster, you should not use the Dask.distributed client; simply use plain Dask and its collections, and pass ray_dask_get to .compute() calls, set the scheduler in one of the other ways detailed here, or use our enable_dask_on_ray configuration helper. Follow the instructions for using Ray on a cluster to modify the ray.init() call.
Why use Dask on Ray?
- To take advantage of Ray-specific features such as launching cloud clusters and the shared-memory store.
- If you'd like to use Dask and Ray libraries in the same application without having two different clusters.
- If you'd like to create data analyses using the familiar NumPy and Pandas APIs provided by Dask and execute them on a fast, fault-tolerant distributed task execution system geared towards production, like Ray.
Dask-on-Ray is an ongoing project and is not expected to achieve the same performance as using Ray directly. All Dask abstractions should run seamlessly on top of Ray using this scheduler, so if you find that one of these abstractions doesn't run on Ray, please open an issue.
Best practice for large-scale workloads
In Ray 1.3, the default scheduling policy is to pack tasks onto the same node as much as possible. It is more desirable to spread tasks if you run a large-scale / memory-intensive Dask-on-Ray workload. In this case, there are two recommended settings:
- Reduce the config flag scheduler_spread_threshold to tell the scheduler to prefer spreading tasks across the cluster instead of packing.
- Set the head node's num-cpus to 0 so that tasks are not scheduled on the head node.
# Head node. Set `num_cpus=0` to avoid tasks being scheduled on the head node.
RAY_scheduler_spread_threshold=0.0 ray start --head --num-cpus=0

# Worker node.
RAY_scheduler_spread_threshold=0.0 ray start --address=[head-node-address]

Out-of-Core Data Processing
Processing datasets larger than cluster memory is supported via Ray's object spilling: if the in-memory object store is full, objects will be spilled to external storage (local disk by default). This feature is available but off by default in Ray 1.2, and is on by default in Ray 1.3+. Please see your Ray version's object spilling documentation for steps to enable and/or configure object spilling.
Persist
Dask-on-Ray patches dask.persist() in order to match Dask Distributed's persist semantics; namely, calling dask.persist() with a Dask-on-Ray scheduler will submit the tasks to the Ray cluster and return Ray futures inlined in the Dask collection. This is nice if you wish to compute some base collection (such as a Dask array), followed by multiple different downstream computations (such as aggregations): those downstream computations will be faster since that base collection computation was kicked off early and referenced by all downstream computations, often via shared memory.

import ray
from ray.util.dask import ray_dask_get
import dask
import dask.array as da

# Start Ray.
# Tip: If connecting to an existing cluster, use ray.init(address="auto").
ray.init()

# Set the scheduler to ray_dask_get in your config so you don't
# have to specify it on each compute call.
dask.config.set(scheduler=ray_dask_get)

d_arr = da.ones(100)
print(dask.base.collections_to_dsk([d_arr]))
# {('ones-c345e6f8436ff9bcd68ddf25287d27f3',
#   0): (functools.partial(<function _broadcast_trick_inner at 0x7f27f1a71f80>,
#   dtype=dtype('float64')), (5,))}

# This submits all underlying Ray tasks to the cluster and returns
# a Dask array with the Ray futures inlined.
d_arr_p = d_arr.persist()

# Notice that the Ray ObjectRef is inlined.
# The dask.ones() task has been submitted to and is running on the Ray cluster.
dask.base.collections_to_dsk([d_arr_p])
# {('ones-c345e6f8436ff9bcd68ddf25287d27f3',
#   0): ObjectRef(8b4e50dc1ddac855ffffffffffffffffffffffff0100000001000000)}

# Future computations on this persisted Dask Array will be fast since we
# already started computing d_arr_p in the background.
d_arr_p.sum().compute()
d_arr_p.min().compute()
d_arr_p.max().compute()

ray.shutdown()

Custom optimization for Dask DataFrame shuffling
Dask on Ray provides a Dask DataFrame optimizer that leverages Ray's ability to execute multiple-return tasks in order to speed up shuffling by as much as 4x on Ray. Simply set the dataframe_optimize configuration option to our optimizer function, similar to how you specify the Dask-on-Ray scheduler:

import ray
from ray.util.dask import dataframe_optimize, ray_dask_get
import dask
import dask.dataframe as dd
import numpy as np
import pandas as pd

# Start Ray.
# Tip: If connecting to an existing cluster, use ray.init(address="auto").
ray.init()

# Set the Dask DataFrame optimizer to our custom optimization function,
# this time using the config setter as a context manager.
with dask.config.set(
        scheduler=ray_dask_get, dataframe_optimize=dataframe_optimize):
    npartitions = 100
    df = dd.from_pandas(
        pd.DataFrame(
            np.random.randint(0, 100, size=(10000, 2)),
            columns=["age", "grade"]),
        npartitions=npartitions)

    # We set max_branch to infinity in order to ensure that the task-based
    # shuffle happens in a single stage, which is required in order for our
    # optimization to work.
    df.set_index(
        ["age"], shuffle="tasks", max_branch=float("inf")).head(
            10, npartitions=-1)

ray.shutdown()

The existing Dask scheduler callbacks (start, start_state, pretask, posttask, finish) are also available, which can be used to introspect the Dask task to Ray task conversion process. Note, however, that the pretask and posttask hooks are executed before and after the Ray task is submitted, not executed, and that finish is executed after all Ray tasks have been submitted, not executed. This callback API is currently unstable and subject to change.
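To make the "any callable can be a scheduler" idea and the pretask/posttask hooks above concrete, here is a minimal pure-Python sketch of a Dask-style get(dsk, key) scheduler. This is a conceptual stand-in, not Ray's actual ray_dask_get implementation, and the function names are hypothetical:

```python
from operator import add, mul

# A Dask task graph is a dict mapping keys to either literal values or
# (callable, *arg_keys) tuples. A scheduler is any callable with the
# signature get(dsk, key) -> result — which is why Ray can plug in as one.
def simple_get(dsk, key, pretask=None, posttask=None):
    def rec(k):
        task = dsk[k]
        if isinstance(task, tuple) and callable(task[0]):
            # Resolve dependencies first (keys recurse, literals pass through).
            args = [rec(a) if (isinstance(a, str) and a in dsk) else a
                    for a in task[1:]]
            if pretask:
                pretask(k)           # fires just before the task runs
            result = task[0](*args)
            if posttask:
                posttask(k, result)  # fires just after the task runs
            return result
        return task                  # literal value

    return rec(key)

# w = (x + y) * 10
dsk = {"x": 1, "y": 2, "z": (add, "x", "y"), "w": (mul, "z", 10)}
trace = []
result = simple_get(dsk, "w", pretask=trace.append)
print(result)  # 30
print(trace)   # ['z', 'w']
```

A real scheduler like ray_dask_get submits each task asynchronously instead of evaluating it recursively in-process, which is exactly why its pretask/posttask hooks fire around task submission rather than execution, as noted above.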
https://docs.ray.io/en/master/data/dask-on-ray.html
For more information, see.

Abstract

Managing recipients in Microsoft® Exchange 2000 Server is significantly different from managing recipients in previous versions of Exchange. This release introduces new features, and integration with Microsoft Windows® Active Directory™ service means considerable changes. Windows user accounts and Exchange recipients are merged conceptually and managed through the same interface. The establishment of recipient policies and improvements in address lists provide further administrative flexibility. However, as with other components in Exchange, functionality is restricted in recipient management in mixed environments where previous versions of Exchange Server are present. This document helps you perform familiar tasks, leverage new features, and successfully maintain coexistence of Exchange 2000 Server and Exchange Server version 5.5.

Contents
- Introduction
- Definition of Exchange Recipients
- Managing Recipients
- Recipient Settings
- Recipient Policies
- Address Lists

Introduction

The recommendations made in this documentation are based on the assumption that you use the Microsoft Windows 2000 Active Directory Users and Computers snap-in to manage your recipients, and the System Manager snap-in in Microsoft Exchange 2000 to manage your Exchange servers. We discuss both native and mixed environments because many customers support Exchange Server version 5.5 in their organizations. However, we do not refer to versions older than 5.5.

Definition of Exchange Recipients

Exchange recipients are objects that have e-mail capabilities. This section explains the four types of Exchange 2000 recipients: users, contacts, groups, and public folders. Active Directory user accounts enable users to log on to computers and domains with identities that can be authenticated and authorized for access to domain resources. Users who log on to the network must have their own unique user accounts and passwords. User accounts also can be used as service accounts for some applications.
Users can be added to groups and appear in the global address list (GAL). Note the difference between mail-enabled and mailbox-enabled users. A mailbox-enabled user is equivalent to an Exchange 5.5 user, while a mail-enabled user is equivalent to a custom recipient. If a user account is mail-enabled but not mailbox-enabled, the user can receive e-mail at an external e-mail address but cannot store messages on your Exchange server. Only recipients with Active Directory accounts can be mailbox-enabled to send and receive e-mail. You must install Exchange to mailbox-enable a user. You then can specify the location of the user's mailbox on the Exchange store. However, the Windows 2000 operating system has native Simple Mail Transfer Protocol (SMTP) service capabilities that enable you to specify external e-mail addresses for users even before Exchange is installed. A contact is an Active Directory object that does not have permissions to access domain resources. A contact usually represents someone outside your Exchange organization, such as a partner or a customer. Contacts cannot be given mailboxes on your Exchange server. However, you can specify external e-mail addresses for contacts and add them to groups and GAL. A contact is equivalent to a custom recipient in Exchange 5.5. A group is an Active Directory object that can contain users, contacts, public folders, and other groups. There are two main types of groups: security groups and distribution groups. Security groups are used to collect objects into a manageable unit for controlling access to resources; they can be mail-enabled. Distribution groups are used only as e-mail distribution lists. You can break down groups further into domain local, global, and universal groups. Each group represents a different scope for membership criteria and access permissions. When the forest is in native Windows 2000 mode, you can freely convert between security and distribution group types, and change the scope of the groups. 
When the forest is in mixed mode, however, conversion is not allowed and only distribution groups can have universal scope. It is not necessary to have a detailed understanding of all the group types. However, it is recommended that you read the Windows 2000 documentation to become familiar with all the different varieties.
Note: Only the membership of a universal group is replicated across domains. This design minimizes replication traffic, but it means you must configure your distribution lists as mail-enabled groups of universal scope. Otherwise, if you use a domain local or global group, you must specify an expansion server in the same domain as this group. This ensures proper membership expansion for delivery of e-mail sent to the group. When Exchange 5.5 distribution lists are replicated to Active Directory, they are converted to universal security groups.
A public folder is an Exchange-specific object that stores messages or information that can be shared among users in your organization. Unlike users and contacts, which are native Windows objects, public folders only appear in Active Directory if you mail-enable them. In a mixed environment, all the Messaging Application Programming Interface (MAPI) folders are mail-enabled by default. This is to make them compatible with Exchange 5.5, in which all the public folders had an associated directory object. However, Exchange 2000 provides the capability for unlimited creation of general-purpose public folder trees. Folders under these trees are not mail-enabled by default. In addition, in a native environment, no public folders are mail-enabled by default. Mail-enabled public folders can be displayed in GAL and can be added to groups. For example, you may want to include a public folder in a group to archive all the messages sent to that group.
Key concepts of this section are these:
- Mailboxes are equivalent to Exchange 5.5 users.
- Mail-enabled users and contacts are equivalent to Exchange 5.5 custom recipients.
- MAPI public folders are mail-enabled by default in a mixed environment; general-purpose folders are not.
- In a pure Exchange 2000 environment, public folders are not mail-enabled by default, thus there are no directory objects for them.
- Only universal group membership will be replicated across domains.

Managing Recipients

When Exchange 2000 Server is installed, it extends the Active Directory Users and Computers snap-in with additional tabs on the Properties page and with right-click menu items. Although you can manage recipients from previous versions of Exchange, it is highly recommended that you only use Active Directory Users and Computers for consistency and simplicity. Using Active Directory Users and Computers enables you to combine the administration of Windows user accounts with that of Exchange recipients. However, note that system administration tasks are performed by using a completely separate interface. This section identifies some Active Directory concepts that are essential to organizing and managing Exchange recipients. This documentation reviews the process of mail-enabling and mailbox-enabling users, and discusses administrative roles and boundaries.
A domain is a grouping of network objects such as users, groups, and computers. All objects in a domain are stored in Active Directory. Domains can be nested in a hierarchical structure called a domain tree. Multiple domain trees form a forest. Each domain is an administrative boundary in that security policies and settings - such as administrative rights, security policies, and access control lists (ACLs) - do not cross from one domain to another. The administrator of a particular domain has rights to set policies in that domain only. Different administrators can create and manage different domains in an organization. Note, however, that a domain in Active Directory is not a security boundary that guarantees isolation from other domains or domain owners within the same forest.
Only a forest constitutes such a security boundary. Each domain may have one or more domain controllers, and changes made to directory objects are written to a domain controller. Your organization may have multiple domains that make up a forest. Active Directory replicates across domains in order to keep all the information synchronized. A special type of domain controller hosts the Global Catalog, which contains a full replica of all objects in the directory for its own domain and a partial replica of all objects contained in the directory of every other domain in the forest. For more information about domains, forests, domain controllers, and global catalog servers, refer to the Windows documentation. Organizational units are Active Directory containers in which you can place users, groups, computers, and other organizational units. An organizational unit cannot contain objects from other domains. An organizational unit is the smallest scope or unit to which you can delegate administrative authority. Organizational units can be used to create hierarchies in a domain that follow the structure of your organization, providing a logical and intuitive administrative model. A user can be granted administrative authority for all organizational units in a domain or for a single organizational unit. An administrator of an organizational unit does not need to have administrative authority for any other organizational units in the domain. Now that you have learned about the basic containers for organizing recipients, it is time to consider the role of recipient manager. There are many permissions (access to objects) and rights (granted to users) that can be configured to implement a security model. Fortunately, the management of Exchange recipients only deals with a relatively small subset of these permissions and rights. 
Depending on the size and structure of your organization, the person who manages user accounts may or may not be the same person who manages your Exchange servers. In the latter case, the user manager needs to have the appropriate permissions to Exchange as well. This ensures that recipients can be associated with an Exchange server and mailbox store for proper mail delivery. At a minimum, the recipient manager is required to have view-only permission on the administrative group container in order to create e-mail addresses or mailboxes. For more information about the Exchange permission model, see the Exchange 2000 Server documentation. You must put administrators into security groups and assign proper permissions to each group. As mentioned previously, security groups are used to collect users and other groups into manageable units. Each account added to a group receives the rights and permissions defined for that group. The permissions are assigned once to the group, instead of several times to each individual user. For example, suppose that the recipients are organized into organizational units that correspond to the various departments in your organization: Marketing, Sales, and Development. You would like Joe to manage the Marketing and Sales departments. The Development team is larger, so both Mary and Alice manage it. You can create three security groups: Marketing Admin, Sales Admin, and Dev Admin. You should include Joe in the membership of the first two, and Mary and Alice in the last one. Use the delegation tool in Active Directory Users and Computers to give each of these groups rights on the appropriate organizational unit. Next suppose that the Sales team suddenly triples in size and you decide to hire Bob and John to manage the new larger Sales staff, and let Joe concentrate on managing the Marketing team.
Instead of modifying permissions on John, Bob, and Joe to accommodate their new roles, simply add John and Bob to the Sales Admin group and remove Joe from it.

[Figure: Group permissions]

One of the most common and basic tasks you will perform is the creation of mailboxes. In the right-hand pane of the Active Directory Users and Computers snap-in, right-click the recipient. Then click Exchange Tasks. This launches a simple wizard that enables you to perform almost all of the Exchange-related tasks applicable to that recipient. Choose Create Mailbox and proceed until completion.

[Figure: Exchange Tasks]

Creating a mailbox automatically establishes e-mail proxies for the recipient. However, you will not see the proxies in the results pane right away. This is because proxies are generated by the Recipient Update Service (RUS), which runs at customized intervals. Even if the service is set to Always Run, there is still a short latency, usually less than a minute, before proxy generation. The actual mailbox is not created on the Exchange store database until the user logs on for the first time. Note that a user cannot log on without e-mail addresses (that is, SMTP, X.400, and so forth). Therefore, even after being mailbox-enabled, a user must wait until RUS has processed the account before trying to log on. In addition, after Exchange is installed, any new user is created with a mailbox by default. You can clear the Create An Exchange Mailbox check box in the wizard if you do not want a user to have a mailbox. You can use Exchange Task Wizard to delete a mailbox. Deleting a mailbox will permanently remove all the messages contained in that mailbox. Note that when you delete a user account without explicitly deleting the mailbox first, the mailbox will be marked for deletion. It will be deleted when the mailbox cleanup procedure is performed, either manually or when the deletion setting limit has been reached for that database.
Until then, the mailbox remains on the Exchange store and can be recovered if needed. You may need to move a user's mailbox from one server or mailbox store to another on occasion. For example, if you replace hardware, you must move all users' mailboxes from the old server to the new server. To do this, make sure that the proper mailbox stores have been created on the new server and that you have permissions to access these stores. You then can use Exchange Task Wizard to move mailboxes. You can use Exchange Task Wizard to assign external e-mail addresses to users, groups, and contacts. This allows the recipients to be seen in GAL, from which their properties are easily viewable and they can be easily contacted. As mentioned in previous sections, users, contacts, and groups that have only e-mail addresses cannot store their e-mail messages on one of your Exchange servers. To create mailbox-enabled or mail-enabled users, contacts, or groups, you must have at least view-only permissions on administrative groups. If you use groups to assign permissions to an object, controlling access is as simple as maintaining group membership. A newly created user does not have a proxy until the Recipient Update Service has processed it; therefore, the user cannot log on right away. The physical mailbox for the user is only created after he or she logs on for the first time. You can edit various Exchange-related settings on the recipient's Properties page. The Exchange General, Exchange Features, and E-mail Addresses tabs are shown by default. In Advanced mode, you also can view and modify the Exchange Advanced tab. In addition, many of these settings can be configured on a larger scale by using System Manager. You can configure recipient properties individually but will probably only occasionally need to do so. The following list describes some of the Exchange recipient settings you may want to configure: E-mail addresses. 
These are the various proxy addresses at which a recipient can receive e-mail. The primary e-mail address, which is displayed in bold, is the one that appears in the From field of outgoing messages. If there is more than one proxy of the same type, you should specify which one is used as the primary e-mail address. You cannot delete a primary e-mail address. By default, you must have at least one SMTP address and one X.400 address. Typically, e-mail addresses are defined through recipient policies, but you also can modify them on an individual basis. Message format. This is the format of SMTP e-mail messages sent by users. This is set at an organization-wide level in Internet Message Formats under the Global Settings node in System Manager. You may want to configure specific formats for certain domains if, for example, some of your company's partners use older messaging clients that do not support Multipurpose Internet Mail Extensions (MIME). From the same location, you can control which automatic replies can be sent to that SMTP domain. For example, you may not want out-of-office replies to be sent outside your organization, because you may not want external parties to know that a user is out of town. Delivery restrictions. These restrictions define outgoing and incoming message size limit, in addition to which users in your organization can send e-mail to a particular user. You can set delivery restrictions at an organization-wide level in Message Delivery under the Global Settings node in System Manager, where you can also block inbound messages from external sources. Delivery options. These options allow others to send e-mail on behalf of the user and to automatically forward the e-mail messages to another recipient's account. Storage limits. These are the limits at which users receive warnings or become unable to compose or receive new messages. You can set these at a higher level as properties of the mailbox store. 
The recipient settings will default to that of the database on which the user's mailbox is located. You may want to change the default for a special-case user. Protocol settings. These settings enable various protocols for the recipient (HTTP, IMAP4, and POP3). For protocols such as IMAP4 and POP3, you can also specify message formats. Hide from Address Lists. This option hides the recipient from GAL so that other users in the organization cannot see the recipient or the recipient's properties. Note that Exchange Administrator accounts and Exchange Server accounts can access the hidden properties. Hidden Group Membership. This option hides the membership of a mail-enabled group so that e-mail can be sent to the group but users cannot view the membership. You also can configure this property in Exchange Task Wizard. Again, Exchange Administrators and Exchange Server accounts can access the hidden membership. These are the formats you can use: Display Name Format. This is the format in which the user's name is displayed in GAL. By default, Windows has set the form of <first name> <last name>. It is possible to change this default by using Active Directory Services Interface (ADSI) edit. E-mail Address Format. This is the format of the user's e-mail address. By default, the SMTP address is alias@yourcompany.com. If you want to arrange the format differently, you can create a company-wide recipient policy and use a combination of %g(first name), %s(last name), %i(middle initial), and so on, to specify the e-mail address format. Recipient settings should be configured at a high level - under Global Settings in System Manager, or through recipient or system policies. Exchange Server and Administrator accounts have access to objects hidden from address lists and hidden memberships. A recipient policy is a collection of settings that can be applied to a select set of Exchange recipients. 
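The address-format tokens described above (%g for first name, %s for last name, %i for middle initial) behave like simple string substitutions. As a rough illustration only - the function below is hypothetical and not part of Exchange - a recipient-policy template could be expanded like this:

```python
# Hypothetical illustration of recipient-policy address templates:
# %g = given (first) name, %s = surname (last name), %i = middle initial.
# Naive sequential substitution; real Exchange policy processing is richer.
def expand_template(template, first, last, initial=""):
    return (template.replace("%g", first)
                    .replace("%s", last)
                    .replace("%i", initial))

# %s.%g@yourcompany.com -> <last name>.<first name>@yourcompany.com
addr = expand_template("%s.%g@yourcompany.com", "Jane", "Doe")
print(addr)  # Doe.Jane@yourcompany.com
```

This makes it easy to see why %s.%g produces addresses of the form <last name>.<first name> in the example given in the text.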
With Exchange 2000, you can use recipient policies to set up e-mail addresses and allow your Exchange system to accept inbound messages targeted to all these addresses. In particular, SMTP addresses on recipient policies are used to specify the set of SMTP domains (for example, company1.com, company2.com) for which your Exchange organization will accept e-mail. The Exchange 2000 routing engine uses these SMTP addresses to define the local domains. This is a useful feature if your organization needs more than just one default e-mail domain - for example, if you want to distinguish multiple divisions or branch offices in different geographical regions. RUS processes each policy and recipient, and automatically generates e-mail addresses for recipients defined in each policy. However, recipients in a mixed site that also has Exchange 5.5 servers cannot take advantage of this feature because this legacy version of Exchange cannot handle multiple e-mail addresses. There are two types of Recipient Update Service. One is responsible for processing Exchange system objects in the Configuration container, and the other is responsible for processing recipients in each domain. There is only one of the first type in the entire Windows forest because there is only one Configuration container. The second type of RUS can have multiple instances, even in the same domain. This section focuses on the second type of RUS, which is responsible for processing recipients in each domain, because it applies recipient policies. You must have at least one RUS for each domain in your organization, and it must be run from an Exchange 2000 server. For domains that do not contain any Exchange 2000 servers, the RUS you create needs to be run from an Exchange 2000 server outside the domain. Each RUS must read from and write to a unique domain controller. If there are multiple domain controllers in a domain, then there can be multiple RUSs running in the same domain.
Setting up multiple RUSs is desirable in order to overcome network latency issues when the domain spans several sites and the connection between sites is slow. For example, if you have a site in Seattle and another site in Beijing, there may be a long delay before a mailbox created in Beijing is processed by the RUS in Seattle. As a result, e-mail address generation will be slow and the user must wait a long time before being able to log on to the mailbox (recall that a user cannot log on without a proxy). Ideally, you set up a RUS in Beijing that can use the domain controller in the local site. You can schedule appropriate intervals to run this service. Make sure that the interval is long enough to allow the update task to complete before the next update begins. You may need to run a few tests to find out how long the update service takes to complete its task. True to the term "update," RUS processes only changes made since the last time it was run, which makes it efficient. The changes could have come from policies or recipients.

[Figure: RUS update interval]

You can manually force a RUS to run by choosing Update Now or Rebuild from the right-click options. Update Now shortens the interval before the next update, and RUS still updates only the changes. Rebuild forces a complete processing through all the recipient policies and recipients, even those that have not changed. For this reason, rebuilding usually takes significantly longer than updating and should not be used often. Default policies cannot be modified or deleted. When you are in a mixed-mode environment, there is one default policy for each Exchange 5.5 site. These default policies are compatible with Exchange 5.5 site addressing and are the only policies that apply to recipients whose mailboxes are on servers in a mixed site. These recipients cannot receive e-mail at multiple e-mail addresses.
In a pure Exchange 2000 organization, a single default recipient policy automatically generates e-mail addresses for all the recipients in the organization. The default SMTP domain name is the Microsoft Windows NT® domain in which your Exchange organization resides. Although it cannot be modified or deleted, you can easily create new policies to override this default. To create a new recipient policy, you must first define to whom the policy will be applied. Exchange 2000 provides a Find command that allows you to select the desired set of recipients. This interface is based on a Lightweight Directory Access Protocol (LDAP) query, per RFC2254, that allows you to filter by any Active Directory attribute. For example, suppose that you want to give all your support engineers a more specific e-mail address. First, use the filter to specify that you want all users with a Department attribute equal to "Support." Exchange maps the requirements to an LDAP query, which is displayed in raw form on the General tab of a policy. You also can bypass the Find command and use the Custom Search option on the Advanced tab to type the query directly. Alternately, you can copy and paste an existing LDAP query and modify it. After the filter is created, you can define one or more e-mail addresses that apply to the recipients under the policy. In this example, your policy may include two SMTP e-mail addresses: alias@support.yourcompany.com and alias@yourcompany.com. This means your Exchange system will accept e-mail for these recipients at both addresses. In addition, as mentioned earlier, you can change the format of the left portion of the address. For example, %s.%g@yourcompany.com will result in the format of <last name>.<first name>@yourcompany.com. If a recipient falls under more than one policy, the highest priority policy takes effect. You can change the priority of the policies, except for the default policies. 
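The Department equals "Support" example above corresponds to an RFC 2254 filter string such as (&(objectClass=user)(department=Support)). A minimal Python sketch of building and escaping such filter strings follows; it is illustrative only and does not query Active Directory:

```python
# Build a simple RFC 2254 AND-filter like the one Exchange generates for
# a recipient policy. Special characters are escaped per RFC 2254.
def ldap_escape(value):
    for ch, rep in (("\\", r"\5c"), ("*", r"\2a"),
                    ("(", r"\28"), (")", r"\29"), ("\0", r"\00")):
        value = value.replace(ch, rep)
    return value

def and_filter(**criteria):
    # Each keyword argument becomes one (attribute=value) clause.
    parts = "".join(f"({attr}={ldap_escape(val)})"
                    for attr, val in criteria.items())
    return f"(&{parts})"

# All user objects in the Support department:
print(and_filter(objectClass="user", department="Support"))
# (&(objectClass=user)(department=Support))
```

A filter string built this way is what you would see in raw form on the General tab of a policy, or could paste into the Custom Search option on the Advanced tab.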
In addition, you can manually apply a policy instead of waiting for RUS to run at its scheduled interval. This is useful if you change the policy and want the change to take effect as soon as possible.

[Figure: Applying a policy]

Note: If you want to apply a policy to all recipients - both existing recipients and any new recipients that are created - you must manually apply the policy and select the option of applying it to existing users. Otherwise, this policy will affect only newly created users. You can remove proxy addresses so that they are no longer assigned to recipients in one of three ways. However, none of these will remove proxy addresses from recipients that already have them. Disable the Proxy. If you clear the check box on a proxy you have previously defined, new recipients under the associated policy no longer will receive this proxy address. However, existing recipients will continue to receive e-mail at this address. Remove the Proxy. If you delete the proxy, existing users will still have this proxy address, but newly created users will not. Note that you cannot delete any primary proxy addresses. Remove the Policy. If you delete the policy object, the proxies that were previously assigned by this policy will not be removed from existing recipients. As mentioned previously, if you create a new policy that applies to this set of orphaned recipients, you need to apply the new policy manually. If you have configured multiple recipient policies, you will need to perform one of the following steps to accommodate access using the Web client: Create a corresponding HTTP virtual directory for each additional SMTP e-mail domain and point the directory at mailboxes for that domain. This allows Exchange to correctly map out the SMTP address for the user and find his or her mailbox. Add the common SMTP domain as a secondary SMTP address for all recipients.
This allows the original "flat" namespace (the default SMTP domain) to still work for the client, and Exchange will be able to find the mailbox. Exchange 2000 recipient policy allows you to select a set of users based on an LDAP query and to configure multiple e-mail addresses for them. However, you cannot do this for sites that have Exchange 5.5 servers. You must have at least one RUS per domain, and it must run on an Exchange 2000 server. Each RUS must connect to a unique domain controller. RUS processes changes each time it runs. If you force a rebuild, RUS will process all objects, which can take a long time. Deleting a recipient policy or deleting or disabling a particular proxy does not remove the associated proxy from existing users. New policies are not, by default, applied to users previously under another policy. You may need to create an additional virtual directory for each unique SMTP domain defined by a recipient policy. Address lists help you organize the presentation of Exchange recipients to clients such as Microsoft Outlook®. Address lists can be used to address e-mail messages, choose meeting attendees, and look up locations and phone numbers of others in your organization. Exchange 2000 allows you to create address lists based on any attribute in Active Directory. The membership of each address list is maintained by RUS. Users can access GAL for a complete set of Exchange recipients or relevant subsets of it. In addition, offline address lists can be downloaded as a data resource when you do not have access to the corporate network. Exchange 2000 provides much more flexibility in customization of both the global and offline address lists. Exchange 2000 installs a collection of address lists during setup: All Contacts, All Groups, All Users, All Conferencing Resources, Public Folders, and Global Address List. These are available to every user in your organization by default. 
Note that, although the criteria of default address lists cannot be modified, they can be renamed or deleted. You can create customized address lists to meet your users' needs. These custom address lists can be created according to location, department, teams, or any other Active Directory attribute. As with recipient policies, the LDAP query-based filter is used to select the appropriate set of recipients that should be included in the address list. Once created, you can preview the membership of an address list. It may be a good idea to name the address list so that the name reflects its filter criteria - for example, North American Employees. You also can create empty address lists as a container for organizing address lists underneath it. For example, you might have an empty address list named Full-Time Employees, with address lists such as Engineers, Marketing Staff, Human Resource Staff, and so forth placed under it. When RUS runs, it processes changes made in address lists as well as changes in the directory. This is how the membership of the address list is kept current. In particular, when a new recipient is created, RUS sets the showInAddressBook attribute when it processes the recipient. This attribute points to all the address lists in which the recipient is included. Thus, RUS must finish updating before a new recipient will appear in address lists. In addition, when you hide a recipient from address lists, the ms-Exchange-Hide-From-Address-Lists flag is set on the recipient object. RUS will clear the showInAddressBook attribute, thus preventing the recipient from being viewed in address lists. When you unhide it, RUS reevaluates the attribute based on the filters of the current address lists. You may need to create multiple address lists if your organization has numerous locations, departments, product teams, and so forth. 
Exchange 2000 allows you to control which users can access these lists by exposing the access control editor on each address list object. On the Security tab of the address list, you can explicitly deny the Open Address List right to anyone who should not be able to access this particular address list. Exchange 2000 also provides the capability of configuring more than one GAL. However, the user only sees one GAL. The GAL that the client sees is determined by ordered evaluation: User has rights to open the GAL. User is a member of the GAL. Largest GAL out of those that remain. Exchange 2000 allows you to create multiple offline address lists, which are composed of any combination of existing address lists. Each mailbox store is associated with an offline address list. When users whose mailboxes are on that store connect to the Exchange server remotely, they can download offline address lists. They also can choose to download only updates that were made since the last download. Offline address lists are generated and stored on a specified server. Only Exchange 2000 servers can generate Exchange 2000 address lists, which means that users connected to servers running Exchange 5.5 must use the legacy version of the offline address book. You can schedule when offline address lists are updated. Therefore, if there are frequent changes, you can schedule updates to be made more often to keep the data current. You also can manually update the address list if, for example, you just made a change but your next scheduled update is not for some time. Exchange 2000 allows you to create address lists based on an LDAP query, and to build customized global and offline address lists. However, only Exchange 2000 servers can generate these new address lists. The showInAddressBook attribute on a recipient contains links to address lists to which the recipient belongs. You set and maintain this attribute through RUS. 
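The ordered GAL evaluation described above (rights to open, then membership, then size) can be sketched as follows; the record layout and sample data here are hypothetical, not an Exchange data structure:

```python
# Pick the GAL a client sees: first restrict to GALs the user has rights
# to open, then prefer those the user is a member of, then take the
# largest of whatever remains.
def visible_gal(gals, user):
    openable = [g for g in gals if user in g["can_open"]]
    member_of = [g for g in openable if user in g["members"]]
    candidates = member_of or openable
    return max(candidates, key=lambda g: len(g["members"]), default=None)

gals = [
    {"name": "Default GAL", "can_open": {"alice", "bob"},
     "members": {"alice", "bob", "carol"}},
    {"name": "Sales GAL", "can_open": {"alice"}, "members": {"alice"}},
]
print(visible_gal(gals, "alice")["name"])  # Default GAL (largest of the matches)
```

The sketch also shows why a user who is denied the Open Address List right on every GAL simply sees none (the function returns None).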
Control access to the address list by setting the Open Address List right on the Security tab.
/* integer.h */
#ifndef __INTEGER_H__
#define __INTEGER_H__
#include
/*
 * If DEBUG is defined in the Makefile, the nett change in global variable
 * nettbytes is calculated at the end of the program. nettbytes should be
 * zero at program end if proper allocation and deallocation have been done.
 * If investigating a program crash due to suspected misfreeing, turn on the
 * Michael Forbes debugging device in int/i3.c by defining FORBES in Makefile.
 * (Should only be done on mainframes as it requires a large pointer table.)
 * Banking procedures are used to save on the mallocing, reallocing and freeing.
 * If DEBUG is not switched on, to save a little time, malloc, realloc and free
 * are used on the mainframe version, as space considerations are unnecessary.
 * However mmalloc and realloc1 are used in the PC version, as space is at a
 * premium and we would like an exit(1) to occur when no space can be allocated.
 */
#define MIN_ARRAY_SIZE 11 /* enough for slots 0 to 10 */
#define MAX_ARRAY_SIZE 65536

#ifdef DEBUG
#define ffree(x,y) sfree(x,y,__FILE__,__LINE__)
#define SAVEPI(x) ffree((char *)x, sizeof(struct TERMI))
#define SAVEPR(x) ffree((char *)x, sizeof(struct TERMR))
#define SAVEPCI(x) ffree((char *)x, sizeof(struct TERMCI))
#define SAVEPCR(x) ffree((char *)x, sizeof(struct TERMCR))
#define SAVEPm(x) ffree((char *)x, sizeof(struct TERMm))
#define rrealloc(x,y,z) realloc1(x,y,z)
#define mmalloc(x) malloc1(x)
#define ccalloc(x,y) calloc1(x,y)
#else
#define SAVEPI SAVEI
#define SAVEPR SAVER
#define SAVEPCI SAVECI
#define SAVEPCR SAVECR
#define SAVEPm SAVEm
#define ffree(x,y) free(x)
#ifdef _WIN32
#define rrealloc(x,y,z) realloc1(x,y,z)
#define mmalloc(x) malloc1(x)
#define ccalloc(x,y) calloc1(x,y)
#else
#define rrealloc(x,y,z) realloc(x,y)
#define mmalloc(x) malloc(x)
#define ccalloc(x,y) calloc(x,y)
#endif
#endif

#define R0 65536 /* base = 2^16 = 65536 */
#define T0 16    /* used in int/i1.c and int/i2.c */
#define Z0 98    /* number of rows in array reciprocal in primes.h */
#define Y0 2048  /* number of primes in the array PRIME in primes.h */
#define O5 5
#define O32 32
#define O31 31
#define USL unsigned long
#define USI unsigned int
#define get_element(M, row, col) (M[row][col>>5] & ((USL)1<<(31-(col&31)))?1:0)
#define elt(m, r, c) ((m)->V[(r) * (m)->C + (c)])

typedef struct _MPI {
    unsigned long *V;
    int S;             /* S = -1,0,1, with S=0 corresponding to D=0 and V[0]=0. */
    unsigned int D;    /* length of array V - 1. */
    struct _MPI *NEXT; /* only used for BANKING, when it's = NULL. */
} MPI;

typedef struct {
    MPI *N;
    MPI *D;
} MPR; /* Here gcd(*N, *D) = 1 and *D > 0. */

typedef struct {
    MPI *R;
    MPI *I;
} MPCI;

typedef struct {
    MPR *R;
    MPR *I;
} MPCR;

typedef struct {
    MPI ***V;
    unsigned int R;
    unsigned int C;
} MPMATI;

typedef struct {
    MPR **V; /* the vector way of presenting a matrix - used in CMAT */
    unsigned int R;
    unsigned int C;
} MPMATR;

struct TERMI {
    int DEG;
    MPI *COEF;
    struct TERMI *NEXT;
};

/* THE MPI array. Added by Sean Seefried */
typedef struct _MPIA {
    MPI **A;
    unsigned size;  /* A defacto standard is 0 <= size <= 2^16-1 */
    unsigned slots; /* The number of slots allocated. Sometimes different to
                     * size. eg. when array built slots = MIN_ARRAY_SIZE
                     * while size = 0 */
} *MPIA;

typedef struct TERMI *POLYI;
/* A variable of type POLYI is thus the head pointer to a linear linked-list
 * which represents a polynomial with MPI coefficients.
 * each structure in the linked list corresponds to a component monomial.
 * the zero polynomial is represented by the null pointer NULL.
 * Our constructions are based on the account in Scientific Pascal by
 * H. Flanders, pp. 175-189.
 */
#endif
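For readers following the bit-packed matrix macros above, here is a Python mirror of get_element's addressing - 32 columns per unsigned long, most significant bit first. This is a sketch for illustration, not part of the original calc sources:

```python
# Mirror of the C get_element macro: bit (row, col) of a matrix whose
# rows are lists of 32-bit words, most significant bit first.
def get_element(m, row, col):
    word = m[row][col >> 5]                 # 32 columns per word
    return (word >> (31 - (col & 31))) & 1  # MSB-first within each word

def set_element(m, row, col):
    m[row][col >> 5] |= 1 << (31 - (col & 31))

# usage: a 3 x 64 bit matrix (two 32-bit words per row)
m = [[0] * 2 for _ in range(3)]
set_element(m, 1, 40)
print(get_element(m, 1, 40))  # -> 1
print(get_element(m, 0, 0))   # -> 0
```

Column 40 lands in word index 40 >> 5 = 1 at bit position 31 - (40 & 31) = 23, matching the macro's arithmetic.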
Sunday’s local elections were important for two reasons. First, because the poll was held in the overwhelmingly Serbian-inhabited north of Kosovo, whose population mostly rejects having anything to do with the overwhelmingly Albanian-populated rest of the country. And second, it gives a snapshot of the political landscape. The poll in the north was held amid threats and intimidation. The Serbian government pulled out all the stops to encourage people to vote. This was a purely utilitarian move: Serbia needs to be seen to be cooperative to secure a green light from EU members to open accession talks by January. Directors of schools and other public institutions phoned teachers and employees telling them to vote if they wanted to be sure of their jobs. Boycott supporters played a far rougher game though. Gangs of men hung around outside polling stations shouting at those who went to vote and filming them. By late afternoon it seemed as if boycott supporters in north Mitrovica had secured a big victory. After masked men attacked the polling station, Marko Jaksic, a leading boycott campaigner, said that his people had nothing to do with the attack and blamed the Serbian police for being behind it. Whoever was behind the attack, it certainly saved north Mitrovica from producing an unworkable result. If the polling had not stopped then the north would have had an Albanian mayor, thanks to the votes of the small number of local Albanians. Before election day, Hashim Thaci, the prime minister of Kosovo, and Ivica Dacic, the prime minister of Serbia, had been invited to go to Brussels on November 6th. Yet in spite of the outbreak of violence during the vote, it seems unlikely that the Brussels Agreement they signed in April on regulating relations between the two countries and within Kosovo will be derailed. Exactly how they will resolve the problem of the polls in the north, and which of them should be repeated, is unclear.
In the rest of the country, however, the situation could not be more different. In Serbian-inhabited areas turnout was extraordinarily high. The party that had the best night was the opposition Democratic League of Kosovo (LDK), which had been expected to do badly. The ruling Democratic Party of Kosovo (PDK) of Hashim Thaci also did better than expected, but lost votes compared to 2010. On Sunday night, as the LDK’s fireworks lit up the sky over Pristina, its pundits pored over the results in television studios. They showed, according to Florina Duli, director of the Kosovar Stability Initiative, a think-tank, that Kosovo Albanians are “not that much interested in what Serbs are doing”. And the PDK had not been punished for doing a deal with Serbia.

Readers' comments

Kosovo was split from Serbia because its population wanted to separate. That makes obvious sense, and will eventually - when cooler heads prevail - also be recognized as just by most Serbs. But then why will the population of a few villages in Northern Kosovo not be allowed to be part of Serbia if that is the wish of an overwhelming majority? I may be naive, but I cannot discern any convincing reason. "The concern is that it may start a chain reaction all over the Balkans since there's ethnic minorities in every country." And there is no concern that the Albanian minority splitting from Serbia (still illegal, by the way) would cause a chain reaction, "since there's ethnic minorities in every country"? "Wake up and stop being delusional, it's 2013 already!" So much for the principles, right? But who am I to question anything written by the 'Living legend'... You are right about what you are saying; however, the reason you are naive is that you need to be more informed when talking about these sensitive cases.
Mitrovica is a town in Kosovo which was caused to split into two ethnic parts because of Serbian militants and gangs. There are a lot of Kosovars living in that small portion of Kosovo, frightened and helpless. There are also a lot of other Serbians who are willing to be helped by the Kosovar government in order to live a proper life. The reason why things are as bad as they seem is because of some gangs, who are willingly causing this kind of violence just to remain uncontrolled by a government and lawless. The leaders of these gangs are involved in a lot of drug trafficking and prostitution and they don't want to lose business that easily. Moreover, the Serbian authorities play a two-faced role, where on the one hand they try to please the EU by offering help and making people vote, but on the other hand they still continue paying criminals to cause disruption. The ones who are actually suffering from this are the normal people, be they Serbians or Kosovars, who just want peace and prosperity in their lives. The concern is that it may start a chain reaction all over the Balkans since there are ethnic minorities in every country.

"C'mon, think about this a bit rationally. You are saying that the Albanians, a small minority in Yugoslavia, cleansed the Serbs, the main ethnic group of the country?! This makes no sense." Really? Maybe some figures will help you get another perspective. Sure - I agree with you, it did not make sense, but it actually happened. And it explains the roots of the Kosovo war.

Ethnic groups in Kosovo
Year    Albanians   Serbs   Others
1939    60 %        34 %    5 %
1991    82.2 %      9.9 %   7.9 %
2000    88 %        7 %     5 %
2007    92 %        5 %     3 %

"When Tito was in power Albanians wanted Kosovo to become the 7th Republic within Yugoslavia," Sure they did - by cleansing the Serb population from Kosovo.

Wake up and stop being delusional, it's 2013 already! Your country is about to die economically. Even Montenegro didn't want to do anything with you.
This 5-year-old kid knows more: And by the way - 'technicality' you say... ...perhaps. Just explain why you think the Court felt there was a need to clearly reconfirm the territorial integrity of Serbia in its opinion? A technicality...

Things are much more complex than you put them in two sentences. Serbs being cleansed now are not able to return, unlike those Albanians 'cleansed' by Milosevic, who were. And no, Tito did not give any autonomy, unlike Milosevic - to start with, Tito was a Croat. Albanians had full powers, which they used rather well to cleanse the Serbs from Kosovo during that time. Just compare the Serb figures in Kosovo before and after WWII. And no, the ICJ did not rule that Kosovo's independence was legal, so pls don't bullshit anymore.

Great read, hopefully more interesting articles like this to come!

"Can you technically explain this mf?" Not sure what mf stands for, but in any case - true, almost 1,000,000 refugees fled AFTER the NATO air strikes on Serbia started. Not before. The numbers of displaced Serbs and Albanians before the NATO strikes were more or less equal. "Violence during 1998 forced about 350,000 persons into internal displacement, including 180,000 Kosovo Albanians." With one important thing to note - these 1,000,000 Albanians returned to their homes within a couple of weeks, unlike the 200,000 Serbs who remain displaced up until the present day. Any more facts you would like to challenge? Or will you remain a self-proclaimed... (I just love it:) ...'living legend'...

"Yes, it did. Quite explicitly actually:" Not really - and it certainly did not 'explicitly' say it was legal; on the contrary, as even your article quotes, it stated that 'it was not illegal' - which does not make it legal though. See the difference between your comment and the headline of the article you quoted?
What your article does not explain, though, is that the ICJ opined that this issue is not a matter of international law, hence such a document cannot be illegal under the scope of international law. Or in other words - when you cross the street on a red light, you do not violate any international law, hence your action under the scope of international law is not illegal. However, the court quite explicitly confirms the territorial sovereignty of the then Yugoslavia, present-day Serbia, in the same opinion - as these issues are regulated by international law: ICJ'

Off your meds again, huh?

C'mon, think about this a bit rationally. You are saying that the Albanians, a small minority in Yugoslavia, cleansed the Serbs, the main ethnic group of the country?! This makes no sense. "Albanians had full powers..." No they didn't. They weren't a republic, were they? And their autonomy decreased even more when Milosevic came to power. As for Serbs leaving Kosovo after the 1998/9 war, I agree, many of them did leave out of fear or were forced to leave. And they should return (if they want to); actually many have returned. They should regain their properties too. And it's the responsibility of Kosovo's government and Eulex to make sure this happens. "And no, ICJ did not rule that Kosovo independence was legal" Yes, it did. Quite explicitly actually:

"It's different because they (Albanians in Kosovo) were being cleansed by Milosevic. That's why the secession didn't happen when Tito was in power, because he, unlike Milosevic, gave the minorities a lot of autonomy." When Tito was in power Albanians wanted Kosovo to become the 7th Republic within Yugoslavia, and the heavy-handed treatment of Kosovo after the death of Tito by the police and army paved the way to unfortunate events.

Indeed, 3 principal observations from the Kosovo elections are: 1) They were free and fair - after the 2009 electoral debacle, Kosovo had to prove it can deliver clean and fair polls.
By and large, these were smooth and regular elections, closely monitored and declared regular by parties, observers, the opposition and civil society. This is an important element that will propel Kosovo in the SAA process and will also help Kosovo's rating in the international community a bit. 2) High turnout. High among Albanians, and high in 3/4 of the Serbian areas. Even in the north, in 3 municipalities there was a solid showing of 20% of voters at the voting places. This is not bad for a first-ever vote, in the midst of violent suppression of the vote and mixed signals from Serbia. In 3 places there was violence, as seen on TV, but this location is also the epicentre of the criminal networks, and the desire could have been to provoke an even greater backlash. 3) Constitutional integration of Kosovo is rapidly unfolding. New municipalities will enjoy the high degree of autonomy that is guaranteed by Ahtisaari, but the final arbiter in any legal dispute in the future is the Constitutional Court of Kosovo. The sooner the creation of the municipalities and the subsequent Association, the better for the beginning of the end of Serbia's resistance to recognition of reality in Kosovo first, and of Kosovo itself later.

Yeah, maybe the population of the southern villages, or even the neighborhoods in whichever city of Kosova where ethnic Serbs form a majority, should be allowed to declare independence; hell yeah, while we're at it, let them declare their own states, why even bother to become part of Serbia!

Spot on.
On 2/24/07, Paul D. Fernhout <pdfernhout at kurtz-fernhout.com> wrote:
> kirby urner wrote:
> > On 2/24/07, Paul D. Fernhout <pdfernhout at kurtz-fernhout.com> wrote:
> >
> >>There may be one major semantical issue, in terms of the meaning of side
> >>effects when loading a module (e.g. defining singletons, opening files,
> >>etc.) which is hard to deal with generically with Python. You can deal
> >>with [it] specifically in how you write your own code, but that is not a
> >>general solution.
> >
> >
> > Not sure I follow yet. A module loads top to bottom, with lower defs
> > premised on those previously mentioned. Is that what you mean? Once
> > everything is loaded, it's more like a __dict__, i.e. the namespace of
> > the module is accessible, either via dot notation, or directly if the
> > names are top level.
>
> To step back for a minute, the fundamental problem here is that for
> whatever reason a programmer wants to modify just one method of an already
> loaded Python class (which came from a textual module which was loaded
> already), save the change somewhere so it can be reloaded later
> (overwriting part of the textual module?), and also have the program start
> using the new behavior for existing instances without any other side
> effects arising from recompiling this one change. In practice, this is
> trivial to do in almost any Smalltalk system; it is hard if not impossible
> to do in any widely used Python IDE or program (even when a Python shell
> is embedded).
>
> Unfortunately, the paradigm used by every Python IDE I've tried is to
> reload an entire textual module (or more typically, entire program) at a
> time for even the slightest change to one function. Embedded Python shells
> generally allow you to redefine a function if you have a copy of the code,
> but they offer no way to save the code.
> Most Smalltalks use a different
> paradigm, where code is presented to the user one function at a time in a
> browser and is compiled one function at a time. Yes, there are cases where
> people "filein" Smalltalk code defining a complex program, but such
> fileins are generally considered an *interchange* format, not a preferred
> program representation for editing, unlike what is usually the case with
> Python.
>
> Consider the meaning of an arbitrary piece of Python code near the bottom
> of a textual module. Essentially, you have no idea what it means if the
> original author has used some Python bells and whistles. For example, he
> or she could have defined a metaclass where every nested "def" under a
> class was converted to, say, an uppercase string and stored under a key
> that was the numerical hash of the function name (with no functions
> actually defined for that class perhaps). The specific metaclass behavior
> may even hinge on the current state of a global which has been modified
> several times during the course of loading the module. So essentially, you
> have no way of knowing for sure what any apparent Python code really means
> by isolated inspection. And because any module can run any arbitrary
> Python code, without actually running the Python program (or doing the
> equivalent analysis), you can never be sure what side effects loading a
> module has. Now, Smalltalk has metaclasses too, but in practice, because
> of the way code is presented to the user and edited and recompiled one
> method/function at a time, the context makes fairly clear what is going to
> happen when that snippet of code you just changed is compiled. The big
> difference is really the effective unit of compilation -- the complex
> module in Python or the simple method/function in Smalltalk.
>
> Now, this is rarely a problem the *first* time a module is loaded, but it
> generally becomes a problem when a module is *reloaded*.
> If you only
> treated a module as an *interchange* format, and then modified the live
> classes using a tool which only works on regular classes (like PataPata
> does), there is no need to reload the module, so this potential problem
> related to parsing a module's meaning via an IDE tool remains only
> potential, and also avoided is the possibility reloading a module might
> have side effects. (In practice, everything still depends on mapping from
> a function back to its source text, and this may go wrong for various
> reasons... :-)
>
> Naturally, this kind of major redefinition is rarely done, and it would
> create lots of confusion, but it is possible, so IDE tools that do not
> support it are incomplete. This is a perennial problem with, say, C, where
> you can make all sorts of macros and so never know just exactly what
> arbitrary C code out of context does (see the obfuscated code
> contests...). And it means that you can't get a simple one-to-one mapping
> of a section of a file that looks like it defines a function and an actual
> function reliably without analyzing the entire program. Yes, 99.99% of the
> time Python code does the obvious thing, but it is not 100% certain. The
> same is true for Forth -- in theory any isolated snippet of Forth can mean
> anything, since it is trivially easy to modify how the compiler interprets
> text -- something that makes Forth very powerful but at the same time
> potentially very confusing for a code maintainer. I don't have the link
> offhand, but a while back I came across a blog post suggesting you tend to
> either have a powerful language or powerful tools -- but not at the same
> time (except perhaps for Smalltalk :-). That is because if the language is
> very flexible, it becomes almost impossible to write IDE tools that can
> keep up with it in all its generality.
>
> Now, since almost all Python code is written in a straightforward manner,
> one can still make such tools and find them useful.
> But likely there will
> always be gotchas in such systems as long as they tie their operation
> closely to the notion of compiling one module at a time, compared to
> Smalltalk which ties itself to compiling one method/function at a time.
>
> One of the things PataPata tried to do, and succeeded to some extent, was
> breaking the link between reloading a textual module and modifying a
> running Python program, yet it was still able to use a textual Python
> module as both an interchange format and also an image format (something
> even no Smalltalk has done to my knowledge, as all Smalltalk images I know
> of are binary, not human editable text).
>
> One idea I have wanted to try for Python but never got around to it is to
> create a Smalltalk-like browser and build and modify classes on the fly by
> changing their objects and compiling only individual functions as they are
> changed; I could store the textual representation of functions in a
> repository with version control. Then, I could also still use Python
> modules as an interchange format, sort of like PataPata did but without
> prototypes. You would lose some of the generality of coding in Python
> (setting globals in a module and such) but you would essentially have a
> somewhat Smalltalk like environment to mess with (ignoring restarting from
> exceptions, which is very important in Smalltalk development, where much
> code ends up being written in the debugger as often as not; I'm not sure
> whether that part could be simulated with plain Python or whether it would
> require a VM change).
>
> --Paul Fernhout

# xreload.py

"""Alternative to reload().

This works by executing the module in a scratch namespace, and then
patching classes, methods and functions. This avoids the need to patch
instances. New objects are copied into the target namespace.
"""

import imp
import marshal
import sys
import types


def xreload(mod):
    """Reload a module in place, updating classes, methods and functions.
    Args:
      mod: a module object

    Returns:
      The (updated) input object itself.
    """
    # Get the module name, e.g. 'foo.bar.whatever'
    modname = mod.__name__
    # Get the module namespace (dict) early; this is part of the type check
    modns = mod.__dict__
    # Parse it into package name and module name, e.g. 'foo.bar' and 'whatever'
    i = modname.rfind(".")
    if i >= 0:
        pkgname, modname = modname[:i], modname[i+1:]
    else:
        pkgname = None
    # Compute the search path
    if pkgname:
        # We're not reloading the package, only the module in it
        pkg = sys.modules[pkgname]
        path = pkg.__path__  # Search inside the package
    else:
        # Search the top-level module path
        pkg = None
        path = None  # Make find_module() use the default search path
    # Find the module; may raise ImportError
    (stream, filename, (suffix, mode, kind)) = imp.find_module(modname, path)
    # Turn it into a code object
    try:
        # Is it Python source code or byte code read from a file?
        # XXX Could handle frozen modules, zip-import modules
        if kind not in (imp.PY_COMPILED, imp.PY_SOURCE):
            # Fall back to built-in reload()
            return reload(mod)
        if kind == imp.PY_SOURCE:
            source = stream.read()
            code = compile(source, filename, "exec")
        else:
            code = marshal.load(stream)
    finally:
        if stream:
            stream.close()
    # Execute the code in a temporary namespace; if this fails, no changes
    tmpns = {}
    exec(code, tmpns)
    # Now we get to the hard part
    oldnames = set(modns)
    newnames = set(tmpns)
    # Add newly introduced names
    for name in newnames - oldnames:
        modns[name] = tmpns[name]
    # Delete names that are no longer current
    for name in oldnames - newnames - set(["__name__"]):
        del modns[name]
    # Now update the rest in place
    for name in oldnames & newnames:
        modns[name] = _update(modns[name], tmpns[name])
    # Done!
    return mod


def _update(oldobj, newobj):
    """Update oldobj, if possible in place, with newobj.

    If oldobj is immutable, this simply returns newobj.
    Args:
      oldobj: the object to be updated
      newobj: the object used as the source for the update

    Returns:
      either oldobj, updated in place, or newobj.
    """
    if type(oldobj) is not type(newobj):
        # Cop-out: if the type changed, give up
        return newobj
    if hasattr(newobj, "__reload_update__"):
        # Provide a hook for updating
        return newobj.__reload_update__(oldobj)
    if isinstance(newobj, types.ClassType):
        return _update_class(oldobj, newobj)
    if isinstance(newobj, types.FunctionType):
        return _update_function(oldobj, newobj)
    if isinstance(newobj, types.MethodType):
        return _update_method(oldobj, newobj)
    # XXX Support class methods, static methods, other decorators
    # Not something we recognize, just give up
    return newobj


def _update_function(oldfunc, newfunc):
    """Update a function object."""
    oldfunc.__doc__ = newfunc.__doc__
    oldfunc.__dict__.update(newfunc.__dict__)
    oldfunc.func_code = newfunc.func_code
    oldfunc.func_defaults = newfunc.func_defaults
    # XXX What else?
    return oldfunc


def _update_method(oldmeth, newmeth):
    """Update a method object."""
    # XXX What if im_func is not a function?
    _update_function(oldmeth.im_func, newmeth.im_func)
    return oldmeth


def _update_class(oldclass, newclass):
    """Update a class object."""
    # XXX What about __slots__?
    olddict = oldclass.__dict__
    newdict = newclass.__dict__
    oldnames = set(olddict)
    newnames = set(newdict)
    for name in newnames - oldnames:
        setattr(oldclass, name, newdict[name])
    for name in oldnames - newnames:
        delattr(oldclass, name)
    for name in oldnames & newnames - set(["__dict__", "__doc__"]):
        setattr(oldclass, name, newdict[name])
    return oldclass

--
--Guido van Rossum (home page:)
- Harvey Harrison authored

A few general categories:

1) ieee80211_has_* tests if particular fctl bits are set; the helpers are defined in the same order as the fctl defines. A combined _has_a4 was also added to test when both FROMDS and TODS are set.

2) ieee80211_is_* is meant to test whether the frame control is of a certain ftype - data, mgmt, ctl - and two special helpers, _is_data_qos and _is_data_pres, which also test a subset of the stype space. When testing for a particular stype applicable only to one ftype, functions like ieee80211_is_ack have been added. Note that the ftype is also being checked in these helpers. They have been added for all mgmt and ctl stypes in the same order as the STYPE defines.

3) ieee80211_get_* is meant to take a struct ieee80211_hdr * and return a pointer to somewhere in the struct; see get_SA, get_DA, get_qos_ctl.

The Intel wireless drivers had helpers that used this namespace; convert them all to use the new helpers and remove the byteshifting, as they were defined in cpu-order rather than little-endian.

Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>

fd7c8a40
Moment.js + TypeScript. Cannot find name moment

I develop a web application with Knockout in Visual Studio. I installed Knockout via Bower, included the d.ts file in the project, included the script on the HTML page, and now I can access ko. Now I try to use moment.js in the same way as Knockout: install, include the d.ts, include the script on the page, and I get the error "cannot find name 'moment'". Adding a reference to the d.ts does not help, and import * as moment from 'moment' gives the error "cannot find module 'moment'". I know it's a stupid problem, but I can't fix it. What am I doing wrong?

What I'd recommend is using some tool for managing your definitions. Some popular options (you don't need both, just pick one):

- tsd: npm i -g tsd
- typings: npm i -g typings

These work in a similar fashion to package managers. You can install your definitions like npm/bower installs your dependencies. Once you have one of these installed, go to your project and install moment plus its definition:

npm install moment --save

And one of these:

tsd install moment --save
typings install moment --save --ambient

Both of these will create a folder with your definitions in it (both call it typings), and both have an "umbrella" definition file in it, which you should reference in the entry point of your application (first is for tsd, second for typings):

/// <reference path="typings/tsd.d.ts" />
/// <reference path="typings/index.d.ts" />

After this is done, you can use moment (or any other module) as you would:

import * as moment from 'moment'
moment.isDate("I'm not a date")

I suggest checking out these:

Moment.js + TypeScript. Cannot find name moment - What I'd recommend is using some tool for managing your definitions. Some popular options (you don't need both, just pick one): tsd - npm i -g

$ echo '
import * as moment from "moment";
const now = moment();
' > test2.ts
$ tsc --module commonjs test2.ts

Unfortunately, my project does not use a bundler or module system, so I'm stuck with "module": "none" for now.
In my case, I finally got this error solved by doing the following:

- Adding the option "allowSyntheticDefaultImports": true in the compilerOptions section of the tsconfig.json file. (EDITED: As the moment docs say in the note: If you have trouble importing moment, try adding "allowSyntheticDefaultImports": true in compilerOptions in your tsconfig.json file.)
- Also adding "moduleResolution": "node" in the same compilerOptions section. (Found this option looking around the web.)
- Importing the moment module like this: import * as moment from 'moment';

Using Moment with Angular and TypeScript: ts(6,30): error TS2304: Cannot find name 'moment'. Try using a /// compiler directive. To fix this, I tried adding the following /// compiler directive:

This seems to solve the problem for me (the other solutions in this thread didn't work):

import * as moment from 'moment/moment.js';

I still seem to be getting the correct TypeScript definitions even referencing the js file directly. I suspect the reason is that the 'moment' folder under node_modules does not have an index.js file, but I'm not a TypeScript typings/compiler expert.
able to use it unless you override moment using Angular's dependency injection See Resolved Issue timezone: 'Name of Timezone' // e.g. 'Europe/London'. As of version 2.13.0, Moment includes a typescript definition file. So if you are using this version you can do following: import * as moment from 'moment'; let now = moment().format('LLLL'); Note: If you have trouble importing moment, try add "allowSyntheticDefaultImports": true in compilerOptions in your tsconfig.json file. angular2-moment, npm install --save angular2-moment. If you use typescript 1.8, and typings, you may also need to install typings for moment.js: typings install Moment typescript library uses `void` type as parameter input Breaking change Discussion TypeScript #4073 opened Jul 18, 2017 by chrisleck 8 Microsoft/TypeScript, hmm nothing change error TS2304: Cannot find name 'moment'. I am needing typescript bindings for browsers like in lib.d.ts and related typescript: 2.1.4 moment: 2.17.1. Notes: Declaration file from the typings works well, and moment it's the only package why I'm still using typings in my project. I checked what the difference is and difference is how they export moment. Instead of export moment they use declare module "moment" { export moment; } 👍 Cannot find name 'Moment', To unsubscribe from this group and stop receiving emails from it, send an email to jhipster-dev@googlegroups.com. To post to this group, send TypeScript supported modules from 1.5+ (besides is anyone is even still using older versions?). For those working on older projects that cannot target ES6 or use module loaders this would be most useful.
My question shouldn't be too hard to answer. The problem I'm having is that I'm not sure how to scrape a website for specific keywords. I'm quite new to Python, so I know I need to add in some more details. Firstly, what I don't want to do is use Beautiful Soup or any of those libs; I'm using lxml and requests. What I do want to do is ask the user for an input for a website and, once it's provided, send a request to the provided URL. Once the request is made I want it to grab all the HTML, which I believe I've done using html.fromstring(site.content). So all that's been done; the problem I'm having is that I want it to find any link or text ending in '.swf' and print it below that. Anyone know any way of doing this?

def ScrapeSwf():
    flashSite = raw_input('Please Provide Web URL : ')
    print 'Sending Requests...'
    flashReq = requests.get(flashSite)
    print 'Scraping...'
    flashTree = html.fromstring(flashReq.content)
    print ' Now i want to search the html for the swf link in the html'
    print ' And Display them using print probablly with a while condition'

Here goes my attempt:

import requests                          [1]

response = requests.get(flashSite)       [2]
myPage = response.content                [3]

for line in myPage.splitlines():         [4]
    if '.swf' in line:                   [5]
        start = line.find('http')        [6]
        end = line.find('.swf') + 4      [7]
        print line[start:end]            [8]

Explanation:

1: Import the requests module. I couldn't really figure out a way to get what I needed out of lxml, so I just stuck with this.
2: Send an HTTP GET method to whatever site has the Flash file.
3: Save its contents to a variable.

Yes, I realize you could condense lines 2 and 3; I just did it this way because I felt it makes a bit more sense to me.

4: Now iterating through the page, going line by line.
5: Check to see if '.swf' is in that line.

Lines 6 through 8 demonstrate the string slicing method that @GazDavidson mentioned in his answer. The reason I add 4 in line 7 is because '.swf' is 4 characters long.
You should be able to (roughly) get the result that provides a link to the SWF file.
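The line-slicing approach above misses links when markup spans lines or when an attribute isn't preceded by "http" on the same line. Here is a sketch that parses the markup properly using only the standard library (Python 3 shown; on the Python 2 used in the question the import would be `from HTMLParser import HTMLParser`). The sample HTML string is made up for illustration; in the real script you would feed it the downloaded page instead:

```python
from html.parser import HTMLParser

class SwfLinkFinder(HTMLParser):
    """Collect attribute values ending in '.swf' from any start tag."""

    def __init__(self):
        HTMLParser.__init__(self)
        self.swf_links = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs, e.g. ('href', 'x.swf')
        for _name, value in attrs:
            if value and value.lower().endswith('.swf'):
                self.swf_links.append(value)

# Illustrative page; in ScrapeSwf you would feed flashReq.content instead.
page = '<a href="http://example.com/game.swf">play</a><embed src="/intro.swf">'
finder = SwfLinkFinder()
finder.feed(page)
print(finder.swf_links)  # ['http://example.com/game.swf', '/intro.swf']
```

If you want to stay with lxml as in ScrapeSwf, the equivalent idea is an XPath query over the parsed tree rather than string slicing.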
This is a page from my book, Functional Programming, Simplified:

scala> for {
     |   i <- ints
     | } yield i*2
<console>:15: error: value map is not a member of Sequence[Int]
         i <- ints
              ^

Sadly, Sequence won’t currently work with for/yield, but again the REPL tells us why:

error: value map is not a member of Sequence[Int]

That error tells us that Sequence needs a map method for this to work. Great — let’s create one.

Adding a map method to Sequence

Again I’m going to cheat to create a simple solution, this time using ArrayBuffer’s map method inside Sequence’s map method:

def map[B](f: A => B): Sequence[B] = {
    val abMap: ArrayBuffer[B] = elems.map(f)
    Sequence(abMap: _*)
}

This map method does the following:

- It takes a function input parameter that transforms a type A to a type B.
- When it’s finished, map returns a Sequence[B].
- In the first line of the function I show abMap: ArrayBuffer[B] to be clear that elems.map(f) returns an ArrayBuffer. As usual, showing the type isn’t necessary, but I think it helps to make this step clear.
- In the second line inside the function I use the :_* syntax to create a new Sequence and return it.

About the :_* syntax

If you haven’t seen the abMap: _* syntax before, the :_* part of the code is a way to adapt a collection to work with a varargs parameter. Recall that the Sequence constructor is defined to take a varargs parameter:

case class Sequence[A](initialElems: A*)

For more information on this syntax, see my information on Scala’s missing splat operator.

The complete Sequence class

This is what the Sequence class looks like when I add the map method to it:

case class Sequence[A](initialElems: A*) {

    private val elems = scala.collection.mutable.ArrayBuffer[A]()

    // initialize
    elems ++= initialElems

    def map[B](f: A => B): Sequence[B] = {
        val abMap = elems.map(f)
        new Sequence(abMap: _*)
    }

    def foreach(block: A => Unit): Unit = {
        elems.foreach(block)
    }
}

Does for/yield work now?
Now when I go back and try to use the for/yield expression I showed earlier, I find that it compiles and runs just fine:

scala> val ints = Sequence(1,2,3)
ints: Sequence[Int] = Sequence(WrappedArray(1, 2, 3))

scala> for {
     |   i <- ints
     | } yield i*2
res0: Sequence[Int] = Sequence(ArrayBuffer(2, 4, 6))

An important point

One point I need to make clear is that this for/yield expression works solely because of the map method; it has nothing to do with the foreach method. You can demonstrate this in at least two ways. First, if you remove the foreach method from the Sequence class you’ll see that this for expression still works. Second, if you create a little test class with this code in it, and then compile it with scalac -Xprint:parse, you’ll see that the Scala compiler converts this for expression:

for { i <- ints } yield i*2

into this map expression:

ints.map(((i) => i.$times(2)))

To be very clear, creating a foreach in Sequence enables this for loop:

for (i <- ints) println(i)

and defining a map method in Sequence enables this for expression:

for { i <- ints } yield i*2

Summary

I can summarize what I accomplished in this lesson and the previous lesson with these lines of code:

// (1) works because `foreach` is defined
for (p <- peeps) println(p)

// (2) `yield` works because `map` is defined
val res: Sequence[Int] = for {
    i <- ints
} yield i * 2

res.foreach(println)  // verify the result

What’s next?

This is a good start. Next up, I’ll modify Sequence so I can use it with filtering clauses in for expressions.
counting_set 0.1.1

counting_set: ^0.1.1

Counting Set

A Dart package providing a set that requires an element to be removed as many times as it was added. When an item that is already in the set is added, it replaces the existing value. This is useful for sets with custom equality functions.

Usage

import 'package:counting_set/counting_set.dart';

void main() {
  final set = CountingHashSet<String>();
  set.add('example');
  set.add('example');
  set.remove('example');
  print(set.contains('example')); // true
  set.remove('example');
  print(set.contains('example')); // false
}

Notes

- CountingHashSet implements Set, but that does not mean it can be used with all code that expects a regular Set. Be careful.
- toSet will generate a normal Set if need be.

License

MIT License

Copyright (c) 2021 hacker1024.
https://pub.dev/packages/counting_set
/** + <Enter> in the open editor as the first step. Of course, the Javadoc Search and the Show Javadoc features were adapted to the new model as well. As the next step we are going to implement code completion inside javadoc, a javadoc formatter, import-management integration, and some kind of batch processing. The last-mentioned feature is the most wanted, according to the comments and votes on issue #111463. The idea is to create an analyzer tool that would take a scope (project/package) in which to run and a set of editor hints (not just javadoc) involved in the analysis. You should be able to keep several sets of involved hints. As a consequence you get an overview not only of broken Javadocs, but also of unused imports, wrong code patterns, and so on. It is not yet clear whether this will be a new tool or some kind of Task List integration.
http://wiki.netbeans.org/ACTool
Extension methods are a feature of C# 3.0 that can easily be overlooked. A lot of .NET old-timers like myself, kept busy on a daily basis trying to meet a project deadline or build the next biggest, baddest line-of-business application (wow... that just sounds so sexy, doesn't it?), will often fall back on old practices. .NET has trained me to create a number of helper classes, such as a StringValidationHelper, in order to do things like validating user-entered data. So for instance, I could have a class that looks like the following: And here's how you use it: Nothing too exciting here so far... But watch out! Here come extension methods! With extension methods, I can add a method onto the string class itself. And it doesn't matter if the class is sealed! Here's my adjusted StringHelper class: Notice that all I had to do was add the keyword this before the string data argument. Now I can use the IsValidPhoneNumber method directly from my string variable: Ah... that's more like it. This code now looks a lot less cluttered. Of course this is a contrived example, but it nevertheless illustrates how easy it is to use this very cool feature of C#. Some of you who have been using LINQ may not realize that LINQ itself heavily uses extension methods, particularly on the IEnumerable interface. This happens when you import the System.Linq namespace. Try it out: look at the IEnumerable IntelliSense before and after you import the System.Linq namespace.
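The post's C# snippets didn't survive here, but the "before" picture it describes, a static helper class validating a string, looks roughly like the following Java sketch (the class name, method name, and validation rule are my guesses at the original's intent):

```java
// The helper-class style the post starts from: the validation logic lives
// in a separate utility class, so every call site has to name that class.
final class StringValidationHelper {

    private StringValidationHelper() {} // no instances; static helpers only

    // Deliberately loose rule for illustration: at least 10 digits
    // once common separators are stripped.
    static boolean isValidPhoneNumber(String data) {
        String digits = data.replaceAll("[^0-9]", "");
        return digits.length() >= 10;
    }
}

class PhoneDemo {
    public static void main(String[] args) {
        String phone = "425-555-0123";
        // The call site the post finds cluttered:
        System.out.println(StringValidationHelper.isValidPhoneNumber(phone)); // prints "true"
    }
}
```

C#'s extension methods let that same call read phone.IsValidPhoneNumber() instead; Java has no direct equivalent, which is exactly the convenience the post is highlighting.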
http://blogs.msdn.com/b/mpapas/archive/2009/05/15/extension-methods-are-wonderful.aspx
Archives

XHTML and Accessibility in ASP.NET Whidbey
Folks.

Performance Tuning and ASP.NET Whidbey
We are now in the final mini-milestone of Whidbey feature development -- which is the time when we start to really push hard on performance, even if it means cutting features to hit the necessary performance goals (which is one of the hardest things to go through when shipping a product -- and thankfully one that we haven't had to do yet with Whidbey).

ASP.NET Cross Page Postback to Different Applications Now Implemented
Paul raised some concerns about the inability to use the new ASP.NET Whidbey cross-page postback functionality with pages that live on other servers and in other applications. The good news is that it will work in the beta, thanks to Ting's checkin yesterday.

Legomation and Stop Motion Videos built with Web Cams
I read an article in the November edition of Wired magazine on how you can build stop-motion animation films using Legos and fairly cheap web cams (many of which -- including the Logitech QuickCam -- come with built-in stop-motion animation software).

ASP.NET Whidbey Hands On Labs Online for Everyone
I just saw this post about the Hands On Labs (HOL) for PDC:

My PDC Keynote Demo
I was forwarded a few pictures from my PDC keynote demo today. Keynote demos are always "fun" events, and are great at creating a lot of stress in a compressed period of time.

Fun ASP.NET Whidbey Tip/Trick: DataBinding to Generics
One of the great new features in Whidbey is "Generics" -- which basically provides a mechanism that enables developers to build classes whose signature and internal datatypes can be templatized.
For example, rather than use an ArrayList (which is a collection of type Object), or force developers to create their own strongly typed list collection class (ie: the OrderCollection class) -- developers using Whidbey can use the new List class implemented within the System.Collections.Generic namespace, and specifically specify the type of the collection when using or referencing it. For example:

```csharp
// Use the built-in "List" collection within the System.Collections.Generic namespace
// to create a collection of type "Order"
List<Order> orders = new List<Order>();

// Add Order objects into the list
orders.Add(new Order(123, "Dell"));
orders.Add(new Order(345, "Toshiba"));
orders.Add(new Order(567, "Compaq"));

// Lookup the "OrderId" of the first item in the list -- note that there is no cast below,
// because the collection items are each an "Order" object (as opposed to "Object",
// which they would be with an ArrayList)
int orderId = orders[0].OrderId;

// The below statement will generate a compile error, but would have
// compiled (and generated a runtime exception) if the collection was
// an ArrayList
orders.Add("This will not work because it isn't an order object");
```

Below is a more complete sample of how to use Generics with the new ASP.NET ASP:ObjectDataSource control, and then bind the list to a GridView control.
First is the "OrderSystem.cs" file, which should be saved within the "Code" directory immediately underneath the application vroot:

```csharp
// OrderSystem.cs: Save within "code" directory

using System;
using System.Collections.Generic;

public class Order {

    private int _orderId;
    private string _productName;

    public Order(int orderId, string productName) {
        _orderId = orderId;
        _productName = productName;
    }

    public string ProductName {
        get { return _productName; }
    }

    public int OrderId {
        get { return _orderId; }
    }
}

public class OrderSystem {

    public List<Order> GetOrders() {

        List<Order> orders = new List<Order>();

        orders.Add(new Order(123, "Dell"));
        orders.Add(new Order(345, "Toshiba"));
        orders.Add(new Order(567, "Compaq"));

        return orders;
    }
}
```

I can then write a simple .aspx page that uses the ObjectDataSource control to bind against the "GetOrders" method to retrieve a List of Order objects. I can then point the GridView at the ObjectDataSource control:

```aspx
<%@ page language="C#" %>

<html>
<body>
    <form runat="server">

        <asp:gridview id="GridView1" datasourceid="ObjectDataSource1" runat="server">
            <headerstyle forecolor="#FFFFCC" backcolor="#990000">
            </headerstyle>
        </asp:gridview>

        <asp:objectdatasource id="ObjectDataSource1" typename="OrderSystem"
                              selectmethod="GetOrders" runat="server">
        </asp:objectdatasource>

    </form>
</body>
</html>
```

The PDC was a Blast...
I just returned to Seattle from the PDC in LA (I ended up staying an extra day to present to the local .NET User Group on Whidbey Thursday night).
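Java 5 generics give the same compile-time guarantee the C# sample demonstrates. Here is a minimal Java analogue of the Order list (my own sketch, mirroring the sample's data):

```java
import java.util.ArrayList;
import java.util.List;

// Same idea as the C# sample: a strongly typed list of Order objects.
class Order {
    private final int orderId;
    private final String productName;

    Order(int orderId, String productName) {
        this.orderId = orderId;
        this.productName = productName;
    }

    int getOrderId() { return orderId; }
    String getProductName() { return productName; }
}

class OrderSystemDemo {
    static List<Order> getOrders() {
        List<Order> orders = new ArrayList<>();
        orders.add(new Order(123, "Dell"));
        orders.add(new Order(345, "Toshiba"));
        orders.add(new Order(567, "Compaq"));
        return orders;
    }

    public static void main(String[] args) {
        List<Order> orders = getOrders();
        // No cast needed: the element type is Order, not Object.
        int orderId = orders.get(0).getOrderId();
        System.out.println(orderId); // prints 123
        // orders.add("not an order");   // compile error, exactly as in the C# sample
    }
}
```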
http://weblogs.asp.net/scottgu/archive/2003/11
This continues last week's post, The making of a shiny mauc, based on Greg McNulty's mauc blog. It utilizes the RStudio interface to R, Desktop version. Recall, the goal is to make an online shiny app that will run Greg's code using his data, all supplied in his post. Today we will address what modifications are necessary to show his first plot (below) on a web page. In a subsequent post we will see how to display all Greg's plots. After that we will see how to upload the app to an online server so anyone can run it in a browser, even if you don't have R.

A shiny app essentially consists of two files:

- ui.r: defines the user-interface (UI), i.e., the "look" of the web page
- server.r: the set of instructions that "services" the UI, i.e., takes input from the user (there is none in Greg's paper) and generates the output

In our case, all servicing instructions are contained in Greg's R script, "R code CAS blog v2.R". As long as those two files are contained in your working directory, and are well-formed, RStudio will be able to create your shiny app. In step 1 we see how to define a UI that displays a plot named "copula_data_plot" (indeed, any ol' plot named "copula_data_plot"). In step 2 we see where to insert Greg's code and where to add two new lines so that the copula_data_plot above is served up specifically.

Like a painter, we first envision what objects we are going to place on our canvas and where. This is called the "layout." Since our "painting" will be of a single plot, our layout can be very simple. What do we put into shinyUI so as to define a simple, one-plot layout? The shiny Application layout guide discusses some of the basic layouts. Let's use the first one there, the Sidebar Layout, which happens to display a plot in the Main Area. We currently have no inputs to receive from the Sidebar, so we will adopt the "# Inputs excluded for brevity" technique (from the second Sidebar Layout example on that page).
Here is our ui.r file:

```r
shinyUI(fluidPage(
  titlePanel("Shiny mauc: Version 1"),
  sidebarLayout(
    sidebarPanel(
      # Inputs excluded for brevity
    ),
    mainPanel(
      plotOutput("copula_data_plot")
    )
  )
))
```

All this says is that our "canvas" consists of an (empty) area on the left called the sidebarPanel and an area to its right called the mainPanel. The only thing in the mainPanel will be the output of a plot. In fact, that plot must be named "copula_data_plot" or it won't show. Put another way, whenever ui.r comes across a plot named "copula_data_plot", then no matter what that plot looks like, it will display that plot right there in the mainPanel. Oh, and there's a title at the top.

The point of most shiny apps is to "react to" user input, so in the shiny tutorial and elsewhere you will see lots of talk about your shiny app being "reactive." Beh... not every app has to be "reactive." Ours certainly won't be: it's simply going to take Greg's input data and display copula_data_plot. We're not going to make any changes whatsoever. Not change the input data, not fit a different distribution to the ALAE, not choose a different copula than the Gumbel... Nada! There will be nothing "going on" to which server.r must react. Therefore, per Step 2 of the shiny tutorial, when we "Provide R code to build the object" all we have to do is insert the entirety of Greg's script, "R code CAS blog v2.R", into server.r and that's it! ... Well, and add two small lines of code. But then, really, that's it! ☺

First, we start off with an almost-blank server.r file that looks like this:
"Painting onto a canvas" and "rendering onto a web page" mean the same thing, so to "render" copula_data_plot onto the mainPanel, we wrap those two lines within the shiny renderPlot function. As soon as that function's result is assigned to an output object named copula_data_plot (via R's "$" operator on the output object), ui.r will see it and recognize it as the plot to display in the mainPanel. So place the two new lines you see below around Greg's code, save the server.r file, hit the "Run App" button, and you will eventually see copula_data_plot.

Notes:

- Close that browser window to return to RStudio, or in RStudio hit escape.
- The UI's title panel is at the top, the empty sidebar panel is to the left, the main panel is to the right, and the only thing there is a plot.
- This app turns out to be "reactive" after all, in the sense that if you resize the window the objects magically reposition themselves. Thank you, Shiny!!
- The program's remaining plots can be found in RStudio's plot window, printed output can be found in RStudio's console window, and the output csv files can be found in your working directory (getwd()).

All those things happen because you are running the desktop version of the app through RStudio. What happens when you run the online version of the app through a browser? Stay tuned and let's see! That's it for this week. Next week we will see how to display all Greg's plots with our shiny mauc.
```
sessionInfo()
R version 3.2.3 (2015-12-10)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows >= 8 x64 (build 9200)

# Or when on my mac:
Platform: x86_64-apple-darwin13.4.0 (64-bit)
Running under: OS X 10.10.5 (Yosemite)

locale:
[1] LC_COLLATE=English_United States.1252  LC_CTYPE=English_United States.1252
[3] LC_MONETARY=English_United States.1252 LC_NUMERIC=C
[5] LC_TIME=English_United States.1252

# Or when on my mac:
locale:
[1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8

attached base packages:
[1] stats4  stats  graphics  grDevices  utils  datasets  methods  base

other attached packages:
[1] distr_2.5.3    SweaveListingUtils_0.6.2    sfsmisc_1.0-29
[4] startupmsg_0.9 copula_0.999-14             actuar_1.1-10
[7] shiny_0.12.1

loaded via a namespace (and not attached):
 [1] Rcpp_0.12.3      rstudioapi_0.5   ADGofTest_0.3    xtable_1.7-4
 [5] pspline_1.0-17   lattice_0.20-33  R6_2.0.1         tools_3.2.3
 [9] grid_3.2.3       htmltools_0.2.6  digest_0.6.9     Matrix_1.2-3
[13] mime_0.3         gsl_1.9-10.1     stabledist_0.7-0 jsonlite_0.9.16
[17] mvtnorm_1.0-4    httpuv_1.3.2     rstudio_0.98.110...
```
https://www.r-bloggers.com/the-making-of-a-shiny-mauc-chapter-2/
Basics of Python: Calculate Factorial

Python is a popular and easy-to-use language. It is widely used in data science, machine learning, the Internet of Things (IoT), and automation. However, it is necessary to learn the basics in order to manipulate statements and customize programs. In this post, I will write a simple factorial program which illustrates the concepts of loops and functions in Python.

The program to calculate a factorial is given below:

```python
import numpy as np

def factorial(n):
    fact = 1
    for i in np.arange(1, n + 1):
        fact = fact * i
    return fact

fact = factorial(5)
print("Factorial of the number is", fact)
```

The output of the program would be:

Factorial of the number is 120

The statement "import numpy as np" imports the numpy library of Python for the arange function. The statement "def factorial(n):" defines a function named factorial which has a single parameter n. The statement "fact = 1" initializes the variable fact to 1. The statement "for i in np.arange(1, n + 1):" uses the arange(start, stop) function of numpy, which generates the numbers from 1 to n. The statement "fact = fact * i" is the most important statement for calculating the factorial, and "return fact" returns the result. See the execution of the statement for n = 5:

if i=1 then fact = 1*1, so fact=1
if i=2 then fact = 1*2, so fact=2
if i=3 then fact = 2*3, so fact=6
if i=4 then fact = 6*4, so fact=24
if i=5 then fact = 24*5, so fact=120

Important Note about Indentation

Indentation (horizontal space before a statement) is very important in Python; using indentation, Python determines the scope of a statement. You can observe in the program that after the "def factorial(n):" statement, "fact = 1" is declared with horizontal space. Without it, the Python interpreter will throw an indentation error pointing to the offending line. So, be careful about indentation when writing programs.

Conclusion: Learning the basics of Python is the key to customizing and making your own programs. You have to understand decision-making statements, loops, functions, arrays, and other basic concepts.
In this post, I have illustrated how to make a factorial program using a for loop and a function. I hope you now understand the basic concepts of loops and functions.
https://www.postnetwork.co/basics-of-python-calculate-factorial/
Introduction

Integrating existing systems made up of COBOL/CICS®-based applications with systems running under Web application servers using Java™ is a common requirement. For those facing such a task, this article uses an extended example to show you how to code Java classes that connect to an existing COBOL/CICS program using the J2EE Connector (J2C) tools included in IBM® Rational® Application Developer Version 6.0.0.1. The article then shows you how to test your solution using CICS Transaction Gateway (CTG) and CICS Transaction Server for Windows®, which are included in WebSphere® Developer for zSeries® Version 6.0. The objective of this article is to create and test the code accessing CICS, using WebSphere Developer for zSeries on a workstation, without being required to deploy to the mainframe. You can also create the connectors using only Rational Application Developer, but you would not be able to test the completed connectors. Since WebSphere Developer for zSeries is installed on top of Rational Application Developer, you can use all of the capabilities of both products. This article uses viewlets to present several brief animated examples. When you see the animation symbol, double-click on it, and an animation sequence will demonstrate a specific development task for you.

Connecting to an existing COBOL/CICS program

You can connect to COBOL/CICS using many different technologies. This tutorial uses the J2EE Connector Architecture (JCA), which is an open architecture fully compliant with the J2EE specifications. The J2EE Connector (J2C) tools, resource adapters, and file importers let you create J2EE Connector artifacts, which you can use to create enterprise applications. These tools also enable you to create J2EE applications running on WebSphere Application Server that access operations and data on an Enterprise Information System (EIS) such as CICS or IMS.
Running on a J2EE application server provides several advantages:

- Security credential management
- Connection pooling
- Transaction management

These advantages are provided by system-level contracts between a resource adapter provided by the connector (CICS Transaction Gateway or IMS Connect, for example) and the application server. You do not need to provide any extra program code, freeing you to concentrate on writing the business code without being concerned with providing quality of service. JCA defines a programming interface, called the Common Client Interface (CCI), and you can use it, with minor changes, to communicate with any EIS. The following diagram illustrates the architecture of the J2C tools within the development environment:

Figure 1. J2C wizards and JEE Connector Architecture

Resource adapters

A resource adapter is a system-level software driver that is used by a Java application to connect to an enterprise information system (EIS). Resource adapters reside on the application server and provide connectivity between the EIS, the application server, and the enterprise application. Applications deployed on the application server communicate with the resource adapter using the CCI. The resource adapters (RAR files) contain all the information necessary for installing, configuring, and running a JCA resource adapter. These adapters can be provided and used by any vendor, as long as they comply with the JCA specification. In order for your application to communicate with an EIS like CICS or IMS, you need a resource adapter to create a communication link. The J2C tools include a number of resource adapters that let you create and test J2C enterprise applications in a unit test environment. These RAR files can be imported into the workbench and used to create enterprise applications. Two CICS and two IMS resource adapters are shipped with Rational Application Developer V6.0.0.1 and with WebSphere Developer for zSeries:

- CICS ECI Adapter 5.1.
This version of the CICS ECI resource adapter is based on J2EE Connector Architecture 1.0 (JCA 1.0). Because CICS ECI resource adapter for Java V5.1 is a JCA 1.0 resource adapter, it will only run in a JCA 1.0 application server or WebSphere Application Server V5.0.2 (or later).

- CICS ECI Adapter 6.0.0. This version of the CICS ECI resource adapter is based on J2EE Connector Architecture 1.5 (JCA 1.5). This version of the CICS ECI resource adapter for Java runs with WebSphere Application Server V6.0 or later.
- IMS resource adapter 9.1.0.1.1
- IMS resource adapter 9.1.0.2

Importers

In order for your application to process source files from CICS or IMS, the data must be imported and mapped to Java data structures. Two importers are available in Rational Application Developer: the C Importer and the COBOL Importer. These tools let you import C or COBOL programs into your application through a process of data type transformation. The importers map the data types in the source file so that your application can access the source material. In other words, if you are coding Java applications that use J2C resource adapters to access transaction programs written in COBOL or C in CICS or IMS, the Java applications must:

- Serialize values from Java to the COBOL or C byte buffer that the IMS or CICS program expects.
- Deserialize the returned value from the COBOL or C buffer for processing in the Java application.

J2C wizard

The J2C wizard enables you to create J2C applications, either as standalone programs or as added functionality to existing applications. The wizard dynamically imports your selected resource adapter, lets you set the connection properties to connect to the EIS servers, guides you through the file importing and data mapping steps, and helps you create Java classes and methods to access the transformed source data.
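The serialize/execute/deserialize flow described above runs through the CCI. In real code the interfaces come from javax.resource.cci; the minimal stand-ins below are my own simplification (no InteractionSpec types, no checked ResourceException) so that the call pattern, factory to connection to interaction to execute, is visible and compiles on its own:

```java
// Minimal stand-ins for the javax.resource.cci interfaces. These are
// assumptions/simplifications, present only so the call pattern compiles
// without a JCA jar on the classpath.
interface CciRecord {}                    // stands in for javax.resource.cci.Record

interface Interaction {
    boolean execute(Object spec, CciRecord input, CciRecord output);
}

interface Connection {
    Interaction createInteraction();
    void close();
}

interface ConnectionFactory {
    Connection getConnection();
}

class EciCall {
    // Typical CCI flow: factory -> connection -> interaction -> execute.
    static boolean runTransaction(ConnectionFactory cf, Object spec,
                                  CciRecord input, CciRecord output) {
        Connection conn = cf.getConnection();
        try {
            Interaction ix = conn.createInteraction();
            // input/output play the role of the COBOL commarea byte buffers;
            // the wizard-generated data bindings do the serialize/deserialize work.
            return ix.execute(spec, input, output);
        } finally {
            conn.close();
        }
    }
}
```

In a deployed application, the ConnectionFactory would come from a JNDI lookup of the J2EE resource the wizard creates, so the application server can supply the pooling, security, and transaction services mentioned earlier.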
The following diagram illustrates the flow of the J2C Java bean wizard through the creation of a J2C Java bean, a data bean, and an optional deployment artifact:

Figure 2. Creating J2C artifacts with Rational Application Developer

The process of using the J2C wizard to build a Java application that runs an EIS transaction is summarized by these steps:

- The J2C wizard imports C or COBOL definitions of the EIS transaction input and output messages into the Java Data Binding wizard to map to Java data structures. This wizard creates Java data bindings for the input and output messages.
- The J2C wizard provides Java data bindings to the J2C Java bean wizard. This wizard creates a J2C Java bean with methods that can be used to run EIS transactions on the host.
- The J2C wizard creates a J2EE resource that you can associate with the J2C Java bean. This J2EE resource can be deployed to WebSphere Application Server and used to run your EIS transactions. The types of J2EE resources that can be created from a J2C Java bean are:
  - JSP
  - Web
  - EJB

The wizard exports the J2EE resource, packaged as an EAR file, so that it can be deployed to and run on a standalone WebSphere Application Server.

Sample application

This tutorial will show you how to create an application that connects to a COBOL/CICS program and returns some information. To keep the example and its testing simple, we will use an existing COBOL/CICS program that does not have any file or database I/O. Before you start using the wizards to create the adapters, you need to prepare your workstation so you can test the created connectors without having to go to CICS on the mainframe.

I. Preparing your workstation for this tutorial

To prepare your workstation for testing, you will need to complete these steps, each of which is detailed in the sections below.

- Install WebSphere Developer for zSeries V6.0.
- Install CICS Transaction Server (CICS TS) for WebSphere Developer.
- Install CICS Transaction Gateway (CTG).
- Configure CICS TS to support CTG.
- Configure CTG to communicate with CICS TS.
- Test whether CTG is communicating with CICS TS.
- Verify that the workspace has the capability to use J2C.
- Load the sample COBOL program into WebSphere Developer for zSeries, compile it, and create the DLL.
- Install the generated DLL into CICS TS.

1. Install WebSphere Developer for zSeries V6.0

For detailed installation instructions, see the product documentation. Remember that Rational Application Developer V6 must be installed first. You can install WebSphere Developer for zSeries from Disk 1 by executing launchpad.exe. The first screen is shown below. You will need to install CICS TS later.

Figure 3. WebSphere Developer for zSeries install dialog

2. Install CICS Transaction Server for WebSphere Developer (CICS TS)

CICS TS for WebSphere Developer is an advanced transaction processing solution for Windows. It provides a development environment for CICS applications. Installing it is optional when installing WebSphere Developer for zSeries, but we will need it for this tutorial. You can use CTG and CICS Universal Clients with CICS TS for Windows. These clients allow access to CICS TS for Windows transactions and programs from separate machines running Windows or UNIX. However, you cannot install CICS TS for WebSphere Developer on a machine on which CICS Universal Clients is already installed. Figure 3 above shows the install dialog. The default CICS install directory is C:\CICSWIN, and we will use this default in this tutorial.

3. Install CICS Transaction Gateway (CTG)

The key CICS component located outside the application is the CTG, which is used during application development, after deployment, and at run time. The gateway supports many programming paradigms, and includes a JCA interface that lets you develop applications that take advantage of the many facilities of compliant J2EE servers.
CTG provides both a local (direct-to-CICS) protocol and networked protocols such as TCP and SSL for communication to a CTG server, which in turn directs calls to CICS servers. For more information on the various aspects of CTG, such as setup, administration, and program development, see the product documentation. CTG is delivered with WebSphere Developer for zSeries V6 for application testing.

Before installing CTG, you will need to use the Windows Add/Remove Programs wizard to remove the CICS Client that is installed with CICS TS. Otherwise, you will receive an error message when trying to install CTG. Use a simple Windows path for the CTG installation, such as IBM/CTG. Otherwise it will have a long path name (C:\Program Files\IBM\IBM CICS Transaction Gateway\) that may cause problems due to Windows path length limits. Select CUSTOM installation and click Change to modify the path. Do not install CTG as a Windows service unless you will use CTG frequently. For details, see the CTG documentation. CTG can be found on the CDs packaged and shipped with WebSphere Developer for zSeries. You can use the CTG Configuration Task guide during the installation, as shown below in Section 5.

4. Configure CICS TS to support CTG

After you have installed CTG and CICS TS, your next step is to set up the communication links between those two products. For communications, we will use TCP/IP, which should already be correctly configured on your local machine. To configure CICS TS:

4.1. Start CICS: Click Start => IBM CICS Transaction Server for WebSphere => Start CICS => OK. Tip: The Enter key for CICS is the Ctrl key on the right side of your keyboard.

4.2. Log on to CICS using the default userid sysad and the default password sysad. You should get the message FAA2250I (CICSNT) Signon completed successfully.

4.3. Press Esc (Clear), type the transaction CEDA, and press Ctrl.

4.4. Place a / next to the SIT table and press Ctrl.

4.5.
Place a / next to FAASYS and press Ctrl. When the FAASYS group is shown, press F8 to advance and F10 to apply an action. F10 positions the cursor in the appropriate place in the window (by Update). Press Ctrl to enter update mode.

4.6. Change the TCP/IP support category. You can add a TCP/IP Local Host Port (2345 in our example), which is the port on which the CICS server TCP/IP service listens. You must also change Maximum TCP/IP Systems to some value greater than 0, such as 5. Press Ctrl twice to save the update and exit update mode:

Figure 4. Changing the SIT FAASYS group

5. Configure CTG to communicate with CICS TS

This tutorial uses TCP/IP and assumes that CICS TS and CTG are installed on the same machine. Here are the steps to configure CTG:

5.1. Start CTG: Click Programs => IBM CICS Transaction Gateway => Configuration Tool.

5.2. Create a CICS server definition. This task must be done for each CICS server that you want to connect to. From the IBM CICS Configuration Tool window, select Tools => New Server.

5.3. When the dialog opens, type the server name DEMOTCP, the server IP address localhost, and the server port 2345, as shown below:

Figure 5. Configuring the CTG client

5.4. Select the first icon on the left, TCP. Select the Enable protocol handler option. Use the default port number 2006, which is the port that must be used in programs that will communicate with CTG, which in turn communicates with CICS TS. Here are the correct TCP settings:

Figure 6. Configuring the TCP Java gateway

5.5. Select File => Save, which saves the values into ctg.ini, located in the bin directory under the CTG installation directory (C:\IBM\CTG).

5.6. Close the configuration tool. At this point CTG is ready to communicate with CICS TS.

6. Test whether CTG is communicating with CICS TS

6.1. Start CTG: Click Programs => IBM CICS Transaction Gateway => IBM CICS Transaction Gateway. The program will start and the window shown below will open.
This window must remain open in order for CTG to function.

Figure 7. The CTG window after it has started

6.2. Be sure that CICS Transaction Server and CTG are started. Start the CICS terminal: click Programs => IBM CICS Transaction Gateway => CICS Terminal. This command starts the 3270 terminal emulation. If your configuration is correct, you will see an emulation window that is connected to CICS TS via CTG.

6.3. Type xxx just to be sure that you are communicating with CICS. You should see a screen similar to Figure 8 below. If CTG is not correctly configured, you will receive a message when the emulator is opened saying that CICS TS is unavailable. You would also receive this message if CICS TS is not running, so verify that CICS TS is running.

Figure 8. Emulating a 3270 terminal using CTG

6.4. To stop CTG, enter Ctrl + C in the gateway console session. If you try to close the CTG console window by using either the "X" close-window icon or the Close menu item, a full thread dump is produced. Do not stop CICS TS, since we will need it later.

Show me an animation of the CTG talking to CICS TS and ready for the tests.

7. Verify that the workspace has the capability to use J2C

You can now start WebSphere Developer for zSeries. It is a good idea to use a clean workspace, so it will be easy for you to find the generated code. In this example we will use a workspace located in C:\WDz\J2C_Tutorial.

7.1. Using WebSphere Developer for zSeries, from any perspective, go to the menu bar and click Window => Preferences.

7.2. On the left side of the Preferences window, expand Workbench (the first option in the list).

7.3. Click Capabilities. The Capabilities pane opens. If you would like to receive a prompt when you first use a feature that requires an enabled capability, select Prompt when enabling capabilities.

7.4. Expand Enterprise Java and make sure that Enterprise Java is selected, which is required when using J2C.

7.5.
If you made any changes, click Apply, and then click OK to close the Preferences window. Enabling the Enterprise Java capability will automatically enable any other capabilities that are required to develop and debug J2C applications.

8. Load the sample COBOL program into WebSphere Developer for zSeries, compile it, and create the DLL

WebSphere Developer for zSeries should already be running. At this point, you must load the sample COBOL program that is provided with this tutorial. Download COBOL_JCA.zip and unzip it to a temporary directory. (For simplicity's sake, the steps below assume that you have unzipped the downloaded file in C:\temp.) The DLL is delivered with the sample code, so if you want, you can bypass the rest of Section 8 and jump to Section 9 to install the DLL in CICS TS. (A second download file, COBOL_JCA_SOLUTION.zip, contains the complete solution for this tutorial. If you just want to see the results of the tutorial without following the steps, unzip this file and load it into your workspace: select File => Import Project Interchange and then select all of the projects.)

8.1. Create a Local COBOL project to hold our source and executable code, and import the required properties. Using the z/OS Projects perspective from the Workbench, select File => New => Other => Workstation COBOL or PL/I => Local project. Click Next and type Demo_JCA_COBOL_Source as the project name. We need to load the required project properties to build the code, so click Import Project definitions:

Figure 9. Creating a Local COBOL project

You must import the properties in the file C:\temp\Demo_JCA_COBOL_Source_properties.xml. Click Next twice and be sure that the Link options are like those shown in Figure 10 below. These options are required to compile and link a COBOL/CICS program. Click Finish to create the project. If you get the Confirm Perspective Switch dialog, click Yes.

Figure 10. Local Link Options

8.2. Import the COBOL source code.
There are many ways to import the supplied COBOL code into the project. One easy way is to use drag and drop: - Using the z/OS Projects perspective, go to the Remote Systems view that by default is on the right side of the perspective. - Expand the Local node. - Drag and drop WBCSCUST.cbl to the folder Demo_JCA_COBOL_Source: Figure 11. Drag and drop the COBOL to the Local project. 8.3. Check the COBOL syntax and build the program to generate a DLL - In the z/OS Projects view, expand Demo_JCA_COBOL_Source and right-click on WBCSCUST.CBL. - Select Nominate as Entry Point. If you don't perform this step, you will have extra DLLs generated with the same name as the project. - Before building the executables, you can use the Local Syntax Check capability of WebSphere Developer for zSeries. Using the z/OS Projects view, right-click on WBCSCUST.CBL and select Local Syntax Check. The Problems view, in the lower part of the WebSphere Developer for zSeries window, must be empty. A folder named BuildOutput is created. - To generate the DLL, right-click on the project Demo_JCA_COBOL_Source and select Rebuild Project. Since you have no errors, one DLL is generated under the BuildOutput folder. Figure 12 below shows the contents of the BuildOutput folder after the Rebuild Project. You will need to export WBCSCUST.dll to be used by CICS TS. Figure 12. COBOL Build output. Select WBCSCUST.dll under the BuildOutput folder. From the WebSphere Developer for zSeries menus, select File => Export, select File system from the Export dialog, and then click Next. Be sure that you are exporting to the directory C:\CICSWIN\WSEntDev, which is the default location to deploy CICS TS applications. There are other ways to make this DLL visible to CICS using CICS classpath variables. For more information about installing DLLs, see the CICS TS documentation. 9. Install the generated DLL into CICS TS After the DLL is created and exported to the file system, install it into CICS TS: 9.1.
If CICS was not shut down, skip to instruction 3. Otherwise, start CICS TS: select Programs => IBM CICS Transaction Server for Windows => Start CICS, and then click OK. 9.2. Log on to CICS using the default userid sysad and the default password sysad. You should get the message FAA2250I (CICSNT) Sign on completed successfully. The Enter key for CICS is the Ctrl key on the right side of your keyboard. 9.3. You will need to add an entry to the CICS table named PPT (Processing Program Table). The transaction CEDA must be used to modify the PPT. In the CICS TS window, enter CEDA and press Ctrl. Select the PPT using /, then press Ctrl. Then move the cursor under the action Add, press Ctrl, and type WBCSCUST as Program Name, WDZ as Group Name, and press Ctrl again. You should see a window like this: Figure 13. Adding a resource to the CICS PPT. 9.4. You can use the Install action to install this resource. Stop CICS: select Programs => IBM CICS Transaction Server for Windows => Stop CICS. Show me an animation of the CICS PPT configuration. II. Building a J2C application to connect to a COBOL/CICS program Before you can start Section II, you must have first completed Section I, which establishes the environment you will use in this part of the tutorial. The steps in this tutorial must be completed in sequence for it to work properly. Section II will show you how to use the J2C Java bean wizard to connect to a CICS ECI server. While completing the exercises, you will: - Use the J2C Java bean wizard to create a J2C application that interfaces with a CICS transaction using the External Call Interface (ECI) APIs of CICS. - Create a Java method, customerDetail, which accepts a customer number and then calls the COBOL program, which returns the customer detail. - Test the generated J2C application to connect to CICS using CTG. - Read about the various deploy options. 1.
Start WebSphere Developer for zSeries This example assumes that the workspace is located in C:\WDz\J2C_Tutorial. 2. Launch the J2C Java bean cheat sheet To make this sequence of steps easier to follow and include explanations of each activity, we will use an Eclipse feature called cheat sheets, which guide you through the required wizards to perform certain development tasks. Each cheat sheet lists the sequence of steps required to help you complete the task and explains the purpose of each step. As you progress from one step to the next, the cheat sheet automatically launches the required tools for you. If there is a manual step in the process, the step will tell you to perform the task and click a button in the cheat sheet to move on to the next step. To view relevant help information to guide you through a task, click on the icon to the right of each step in the cheat sheet. These icons eliminate lengthy documentation searches. To launch a cheat sheet from the Workbench, select Help => Cheat Sheets => J2C Java Bean => Creating a Java bean using J2C Java Bean wizard for a CICS or IMS COBOL program and click OK. The cheat sheet window opens. You can resize it so you have more space and can read the instructions more easily, as shown in Figure 14: Figure 14. J2C Java Bean cheat sheet. You can expand the + sign next to each step. Click on the icon shown in Figure 15 below to start the sequence. We will use our sample COBOL program instead of the product-provided COBOL code. Figure 15. J2C Java Bean Wizard Introduction cheat sheet. 3. Review importer preferences This is an optional task. You can change the preferences later, and they matter only if you use C programs. For now, skip this step by clicking the step's skip icon. 4. Open the J2EE perspective Start by opening a J2EE perspective: click on the step's icon. When the J2EE Perspective opens, the cheat sheet window is resized: Figure 16. Cheat sheets: Open the J2EE Perspective. 5.
Select the back-end system This tutorial uses CICS as the back-end system (actually, the CICS TS product that is part of WebSphere Developer for zSeries). Click on the step's icon, as shown in Figure 17. When the dialog asks for the back-end system, select CICS and click OK. Figure 17. Cheat sheets: Select the back-end system. 6. Select a sample COBOL program Skip this step, since we have our own COBOL source. Click on the step's icon to skip, as shown below: Figure 18. Cheat sheets: Select a sample COBOL program. 7. Your COBOL sample location Our COBOL program is in the project folder Demo_JCA_COBOL_Source, which we created earlier. Under the J2EE perspective, this folder is in the Other Projects folder. Click on the step's icon: Figure 19. Cheat sheets: Your COBOL sample program. 8. Create Java mapping with the standalone CICS/IMS Java Binding wizard The J2C Java Data Binding wizard helps you create specialized Java classes representing the input and output messages of a CICS transaction. These messages correspond to the COBOL data structures of the CICS application program. The resulting specialized Java classes are called data binding classes, which are used as input or output types for invoking CICS functions. After you create the data binding classes, you can use the J2C wizard to create a Java bean containing a method that uses the binding classes. The cheat sheet will guide you through this sequence: when you click on the step's icon, it starts the wizard: Figure 20. Cheat sheets: Create Java mapping. Since we are using an existing COBOL program, click COBOL to Java as the mapping type. The other COBOL option (COBOL MPO to Java) is for output data bindings only. Use Browse to locate the COBOL source code in the folder Demo_JCA_COBOL_Source inside your workspace. Click Open to close the Open dialog and then click Next in the Import dialog window: Figure 21. Importing the COBOL code. The platform we are using is Win32, but you could use other options such as AIX, z/OS, or Not Specified.
Click Query and you will see the data structures of your COBOL program. You must select the DFHCOMMAREA, since this is the COBOL area we will use for communication: Figure 22. Choose the COBOL communication area. The connector code may be used by a number of different projects. For this reason, we place the generated code in a separate project. In the Project Name field, type a name like Demo_JCA_Connectors and click New. Another dialog opens, where you select the type of project for this source. Select Java project, click Next, and then click Finish. Back in the Import dialog box, type demo.cobol.cics for Package name. The class name is DFHCOMMAREA (default). Click Finish to create the data bean. Sample Import dialog: Figure 23. Creating the project, package, and Java class name. 9. Launching the J2C Java bean wizard You can create a J2C Java bean using the J2C Java bean wizard. After you have created the Java bean and the wizard is closed, you can add additional data binding classes using the J2C Data Binding wizard, and then add methods to the Java bean through the Snippets view. Alternatively, you can use the wizard to create the data bean and add methods to use the data bean. In both cases, you can use the wizard to create a J2EE artifact (JSP, EJB, or Web service) to use to deploy your J2C Java bean. Click on the step's icon to continue: Figure 24. Cheat sheets: Launching the J2C Java Bean wizard. In the Resource Adapters Selection page, select the type of resource adapter you want to use. We will use the CICS ECI resource adapter V5.1. After you have selected the appropriate resource adapter, click Next: Figure 25. Selecting the ECI Resource adapter. In the Connection Properties page, select Non-managed Connection and uncheck Managed Connection. For this tutorial, you will use the non-managed connection to directly access the CICS server, so you do not need to provide the JNDI name. In a "real world" application, a managed connection is recommended.
Accept the default Connection class name of com.ibm.connector2.cics.ECIManagedConnectionFactory. We will provide additional information in order to connect to the CICS server. The Connection URL field is required; the others are optional. - Connection URL -- Server address of the CICS ECI server. Since we are on the same workstation as CICS, use localhost. - Server name -- Name of the CTG server. Use DEMOTCP, defined above in Section I, Step 5.3. - Port number -- Number of the port used to communicate with the CTG. The default port is 2006. - User name -- User name for the connection. Type SYSAD (uppercase). - Password -- Password for the connection. Type SYSAD (uppercase). When you have provided the required connection information, click Next. See the values in Figure 26 below: Figure 26. Defining the Connection Properties. As the project name, type in the one that we created above: Demo_JCA_Connectors. Alternatively, you can click Browse and select the project name. For the package name, type demo.cobol.cics. As the interface name, type CallCICSCOBOL, as shown in Figure 27 below. Click Next to define the Java methods: Figure 27. Defining the J2C Java Bean Output properties. You will now create a Java method that will use the COBOL importer to map the data types between the COBOL source and the data in your Java method: - In the Java Method page, click Add. - In the Java method name field, type customerDetail for the name of the operation. Click Next. - When the Add Java Method page opens, click the first Browse button, select DFHCOMMAREA, and click OK. - Also in this Add Java Method dialog, check Use the input type for output and click Finish: Figure 28. Adding a Java Method. You are not quite finished -- you still need to specify the function name (which equates to the CICS program name) that will be called inside of CICS. In an earlier step, we used WBCSCUST when adding the DLL to the PPT.
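Once the wizard finishes, its output boils down to a data binding bean plus an interface whose method maps to the CICS program. The sketch below is hypothetical and only shows the surface shape — the real generated classes also implement the CCI record interfaces and marshal the Java fields to and from the COBOL DFHCOMMAREA layout; the names are taken from the steps above:

```java
// Hypothetical sketch of the wizard's output surface; the real generated
// code also implements javax.resource.cci interfaces and handles COBOL
// data marshalling. Additional fields (address, city, state, country)
// are omitted here for brevity.
class DFHCOMMAREA {
    private int custNo;
    private String firstName;
    private String lastName;

    public int getCustNo() { return custNo; }
    public void setCustNo(int custNo) { this.custNo = custNo; }
    public String getFirstName() { return firstName; }
    public void setFirstName(String firstName) { this.firstName = firstName; }
    public String getLastName() { return lastName; }
    public void setLastName(String lastName) { this.lastName = lastName; }
}

interface CallCICSCOBOL {
    // Maps to the CICS program WBCSCUST; the same record type is used
    // for input and output ("Use the input type for output").
    DFHCOMMAREA customerDetail(DFHCOMMAREA input) throws Exception;
}
```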
Optionally, if you click Show Advanced, you can specify other CICS parameters, such as the transaction name. Click Finish: Figure 29. Cheat sheets: Your COBOL sample program. The Java connector code is now generated. Using the J2EE Perspective, you will see that code under Other Projects in the Demo_JCA_Connectors folder: Figure 30. Java code generated. 10. Editing generated code with the Java snippet editor No updates are required to use this generated Java code (CallCICSCOBOL and CallCICSCOBOLImpl), though updates may be needed for more complex applications. Figure 31 shows this step. Click on the step's icon to skip this step, since we are not making changes. Figure 31. Cheat sheets: Editing generated code. 11. Import the sample test program into your project At this point, you can test the generated connectors using a simple Java program named TestClient.java that has been provided in the download file. Figure 32. Cheat sheets: Import the test program. Using the same technique that you used before to import the COBOL source code, import this Java program: using the z/OS Projects perspective, go to the Remote Systems view that by default is located at the right, expand the Local node, and find the directory where you unzipped the download code. Then just drag and drop TestClient.java, the test program, to the package demo.cobol.cics in the Demo_JCA_Connectors project: Figure 33. TestClient.java imported and ready to run. This test class instantiates the class CallCICSCOBOLImpl and uses the Java bean DFHCOMMAREA. For test purposes, this class requests that customer detail be returned for customer number 2 by invoking the CallCICSCOBOLImpl class. The results will be printed to the WebSphere Developer for zSeries console view.
Source code for this class:

package demo.cobol.cics;

public class TestClient {
    public static void main(String[] args) {
        try {
            CallCICSCOBOLImpl proxy = new CallCICSCOBOLImpl();
            DFHCOMMAREA record = new DFHCOMMAREA();
            int cust_num = 2;
            record.setCustNo(cust_num);
            System.out.println("Invoking CICS Service for customer number " + cust_num);
            record = proxy.customerDetail(record);
            System.out.println("CICS Results");
            System.out.println("First " + record.getFirstName());
            System.out.println("Last " + record.getLastName());
            System.out.println("Address " + record.getAddress1());
            System.out.println("City " + record.getCity());
            System.out.println("State " + record.getState());
            System.out.println("Country " + record.getCountry());
        } catch (Exception exc) {
            System.out.println(exc);
            exc.printStackTrace();
        }
    }
}

Figure 34. Java class TestClient. 12. Test your application using the test program Before you test your code, start CICS TS and CTG if they are not already started. For details, see the instructions in Section I. Figure 35. Cheat sheets: Test your application. Using the J2EE perspective, expand the folder Other Projects and the package demo.cobol.cics. Right-click on TestClient.java and select Run => Java Application. The Java class will call the COBOL/CICS program. You can see the results in the Java Console view at the bottom of the WebSphere Developer for zSeries workbench: Figure 36. Test results. 13. Launching the standalone J2EE Resource from J2C Java Bean wizard Using the J2EE Perspective and Project Explorer view, right-click on CallCICSCOBOLImpl, select New => Other => J2C => Web Page, Web Service, or EJB from J2C Java Bean, and click Next. As you see in Figure 37 below, from here you can use the J2C Java bean in your implementation. The wizard provided by WebSphere Developer for zSeries can create a JSP, EJB, or Web service: Figure 37. Deployment capabilities.
In this article we will not build any of those implementations, since they are all documented. Just press the icon in the cheat sheet to find out more about creating these resources. You can skip this task, so click on the step's icon to continue: Figure 38. Cheat sheets: Launching the standalone J2EE Resource. 14. Deploying the J2C application Application deployment depends on your implementation and is well documented, so it will not be covered here. WebSphere Developer for zSeries can help you to deploy the J2C application, and gives you three main choices: - Deploy the J2C application to an EJB. Once you have created your J2C application, you can create an EJB to wrap it. For more information on creating EJBs, see the EJB documentation in the help. - Deploy the J2C application as a Web service. Once you have created your J2C application, you can create a Web service for it. - Create a Web page, EJB, or Web service to deploy your J2C Java bean. The wizard lets you create a JSP or Faces JSP, an EJB, or a Web service that wraps the functionality provided by a J2C Java bean. Figure 39. Cheat sheets: Deploying the J2C application. Remember that we used the cheat sheet dialogs to illustrate the sequence of operations. As you saw, some operations are optional, but cheat sheets are a good way to learn when you have no idea how to start the process of creating the Java connectors. After you become familiar with the tools, you will not need to use these dialogs anymore, and you can just call the wizards directly. All 14 steps above can be replaced by the sequence File => New => Other => J2C and the options listed under J2C, from any perspective. Show me an animation of the J2C Application Build using J2C wizards (no cheat sheets). Conclusion WebSphere Developer for zSeries can help developers who want to test J2C applications and do not want to use the mainframe for the development tasks.
All of the wizards shown here are part of Rational Application Developer (except for the capability to generate a DLL from COBOL code and test the code using CICS TS and CTG). So with WebSphere Developer for zSeries, you can work with this type of application end-to-end on your workstation. Acknowledgements The author wishes to thank Brian S. Colbert, Veronique Quiblier, and Wilbert Kho of IBM for reviewing this article. Downloads Resources - WebSphere Developer for zSeries documentation - IBM WebSphere Developer Technical Journal: Using WebSphere Studio V5 to Connect to COBOL/CICS - IBM Redbook: Exploring WebSphere Studio Enterprise Developer V5.
http://www.ibm.com/developerworks/websphere/library/techarticles/0509_barosa/0509_barosa.html
Define a set of one or more 2D primitives. More... #include <VertexArray.hpp> Define a set of one or more 2D primitives. sf::VertexArray is a very simple wrapper around a dynamic array of vertices and a primitives type. It inherits sf::Drawable, but unlike other drawables it is not transformable. Example: Definition at line 45 of file VertexArray.hpp. Default constructor. Creates an empty vertex array. Construct the vertex array with a type and an initial number of vertices. Add a vertex to the array. Clear the vertex array. This function removes all the vertices from the array. It doesn't deallocate the corresponding memory, so that adding new vertices after clearing doesn't involve reallocating all the memory. Compute the bounding rectangle of the vertex array. This function returns the minimal axis-aligned rectangle that contains all the vertices of the array. Get the type of primitives drawn by the vertex array. Return the vertex count. Get a read-write access to a vertex by its index. This function doesn't check index, it must be in range [0, getVertexCount() - 1]. The behavior is undefined otherwise. Get a read-only access to a vertex by its index. This function doesn't check index, it must be in range [0, getVertexCount() - 1]. The behavior is undefined otherwise. Resize the vertex array. If vertexCount is greater than the current size, the previous vertices are kept and new (default-constructed) vertices are added. If vertexCount is less than the current size, existing vertices are removed from the array. Set the type of primitives to draw. This function defines how the vertices must be interpreted when it's time to draw them:
https://www.sfml-dev.org/documentation/2.3/classsf_1_1VertexArray.php
Synopsis Contributed CIAO routines for Sherpa. Syntax import sherpa_contrib.all from sherpa_contrib.all import * Description The sherpa_contrib package provides a number of useful routines for Sherpa users, and is provided as part of the CIAO contributed scripts package. The module can be loaded into Sherpa by using either of: from sherpa_contrib.all import * import sherpa_contrib.all This will load in all the modules that are part of this package; this currently includes: See the ahelp files for the individual packages and the contributed scripts page for further information. X-Spec routines This module does not load in the X-Spec convolution models provided by the sherpa_contrib.xspec.xsconvolve module. Changes in the scripts 4.8.2 (January 2016) release New routine The sherpa_contrib.utils.renorm() function was added in this release. ChART version 2 support The sherpa_contrib.chart module has been updated to support version 2 of ChART. Fixed routine The sherpa_contrib.utils.estimate_weighted_expmap() function has been updated to work in CIAO 4.8. Changes in the December 2010 Release Module name change To load in all the Sherpa contributed modules you now have to say one of import sherpa_contrib.all from sherpa_contrib.all import * rather than import sherpa_contrib from sherpa_contrib import * which now no longer loads in any code. Removed modules The sherpa_contrib.primini and sherpa_contrib.flux_dist modules have been removed since their functionality is now included in Sherpa. Bugs See the bugs pages for an up-to-date listing of known bugs.
http://cxc.cfa.harvard.edu/sherpa4.11/ahelp/sherpa_contrib.html
I've got a problem handling a self-referencing model in my application using AutoMapper projections. This is what my models look like:

public class Letter
{
    public int? ParentId { get; set; }
    public Letter ParentLetter { get; set; }
    public int Id { get; set; }
    public string Title { get; set; }
    public string Content { get; set; }
    public DateTime? ReceivedTime { get; set; }
    public DateTime? SendingTime { get; set; }
    public List<Destination> Destinations { get; set; }
    public List<Letter> Responses { get; set; }
}

public class LetterView
{
    public int? ParentId { get; set; }
    public int Id { get; set; }
    public string Title { get; set; }
    public string Content { get; set; }
    public DateTime? ReceivedTime { get; set; }
    public DateTime? SendingTime { get; set; }
    public List<DestinationView> Destinations { get; set; }
    public List<LetterView> Responses { get; set; }
}

public class Destination
{
    public int Id { get; set; }
    public string Name { get; set; }
    // ...
}

public class DestinationView
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// The mapping:
CreateMap<Destination, DestinationView>();
CreateMap<Letter, LetterView>();

My problem is with mapping Letter to LetterView. The problem is somewhere with the Responses; I just can't figure out what should be changed. Whenever I run unit tests and assert the mapping configuration, everything works, as does mapping a letter with multiple responses to a view model. However, whenever I get a letter with its responses from the database (Entity Framework 6), the projection to LetterView throws a StackOverflowException. Can anyone please explain why this happens only on projection? What should I change?

A couple of options here, but usually the best choice is to set a max depth on the Responses. AutoMapper will try to spider the properties, and you've got a self-referencing DTO. First try this:

CreateMap<Letter, LetterView>()
    .MaxDepth(3);

Another option is to pre-wire your DTOs with a specific depth.
You'd create a LetterView and a ChildLetterView. Your ChildLetterView would not have a "Responses" property, giving you exactly two levels of depth on the DTO side. You can make this as deep as you want, but be very explicit in the DTO types about where they are in the hierarchy, with Parent/Child/Grandchild/Great-grandchild type names.
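The pre-wired DTO option can be sketched as follows (hypothetical — ChildLetterView is a type you would add yourself, and the CreateMap calls go inside an AutoMapper Profile as in the question):

```csharp
using System;
using System.Collections.Generic;

// Hypothetical two-level DTO hierarchy: the child view has no Responses
// property, so projection cannot recurse past the second level.
public class ChildLetterView
{
    public int? ParentId { get; set; }
    public int Id { get; set; }
    public string Title { get; set; }
    public string Content { get; set; }
    public DateTime? ReceivedTime { get; set; }
    public DateTime? SendingTime { get; set; }
}

public class LetterView
{
    public int? ParentId { get; set; }
    public int Id { get; set; }
    public string Title { get; set; }
    public string Content { get; set; }
    public List<DestinationView> Destinations { get; set; }
    public List<ChildLetterView> Responses { get; set; }  // children only
}

// Inside the Profile constructor: Letter maps to both view types.
CreateMap<Letter, LetterView>();
CreateMap<Letter, ChildLetterView>();
```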
https://entityframeworkcore.com/knowledge-base/36935719/automapper-self-referencing-model-projection
Let’s get started by doing “the simplest thing that could possibly work”.

public class DynamicDataTable : DynamicObject
{
    private readonly DataTable _table;

    public DynamicDataTable(DataTable table)
    {
        _table = table;
    }
}

For now, we’ll use a DataTable for the actual storage and a DynamicObject to provide an implementation of IDynamicMetaObjectProvider. What can we accomplish with this? Well, quite a lot, actually – in a very real sense, we’re only limited by our imagination.

GetMember

The first ability we want is to be able to extract a column out of the data table; given a DynamicDataTable “foo”, the expression “foo.Bar” should give us something enumerable that represents the data in the column. The DLR describes this operation as “get member”, and DLR-based languages implement a GetMemberBinder in order to bind a dynamic “get member” operation. DynamicObject makes it very easy for us to handle the GetMemberBinder. We simply override the virtual method TryGetMember and implement the behavior that we want. The binder has two properties: Name, which indicates the name of the member that is being bound, and IgnoreCase. You can reasonably expect that case-sensitive languages like C#, Ruby and Python will set IgnoreCase to false, while VB will set it to true.
private DataColumn GetColumn(string name, bool ignoreCase)
{
    if (!ignoreCase)
    {
        return _table.Columns[name];
    }
    for (int i = 0; i < _table.Columns.Count; i++)
    {
        if (_table.Columns[i].ColumnName.Equals(name, StringComparison.InvariantCultureIgnoreCase))
        {
            return _table.Columns[i];
        }
    }
    return null;
}

public override bool TryGetMember(GetMemberBinder binder, out object result)
{
    var c = GetColumn(binder.Name, binder.IgnoreCase);
    if (c == null)
    {
        return base.TryGetMember(binder, out result);
    }
    var a = Array.CreateInstance(c.DataType, _table.Rows.Count);
    for (int i = 0; i < _table.Rows.Count; i++)
    {
        // Copy the column's value from each row into the typed array
        a.SetValue(_table.Rows[i][c], i);
    }
    result = a;
    return true;
}

Here I’ve chosen to return an Array whose elements are typed identically to the column’s original data type. That’s because it’s very easy to create an Array of a particular type and to set its individual elements from the System.Objects that we can get from the DataRow. By factoring out GetColumn into a separate method, I’ve made it easy to change just this logic. We might want, for instance, to allow a symbol name like “hello_world” to match the column named “hello world”.

Non-dynamic members

What if I want to directly access other properties of the DataTable like the “Rows” DataRowCollection? The design of the DLR makes this easy. If you don’t handle a binding operation yourself, it’s possible to fall back to a default behavior implemented by the language-provided binder. And for VB, C#, Python and Ruby, the fallback behavior is to treat the object like a normal .NET object and to access its features via Reflection. That’s why it’s useful to call base.TryGetMember instead of throwing an exception when the column name can’t be found. So if we implement a trivial “Rows” property, a reference to DynamicDataTable.Rows will return DataTable.Rows even when the GetMember is performed dynamically at runtime (unless there actually is a column named “Rows”…).
public DataRowCollection Rows
{
    get { return _table.Rows; }
}

SetMember

The next interesting thing we want is to set a column on the DataTable whether or not it already exists. The DLR describes this operation as “set member”, and defines a corresponding SetMemberBinder to perform the binding operation. Like the GetMemberBinder, this class has two properties: Name and IgnoreCase. We want to be able to set the column either to a single repeated constant value or to a list of values. But there are lots of different lists we might like to support: for instance, lists, collections or even plain IEnumerables. Let’s make some decisions about the semantics of the SetMember operation on our type: - If the object’s type implements IEnumerable and the object isn’t a System.String, then we’ll treat it like an enumeration. Otherwise, we’ll treat it like a single value. - If it’s an IEnumerable<T> we’ll use the generic type as our DataType. For a plain IEnumerable, the DataType will be System.Object. - If the object does not implement IEnumerable (or the object is a System.String) then the DataType will be the object’s actual RuntimeType. For an enumeration, we’ll read items into a temporary array until we reach the number of rows in the table. If the enumeration ends before then, we’ll raise an error. If at that point, there are still additional items remaining in the enumeration, then we’ll also raise an error. The specific behavior of our implementation for each of these types isn’t very important. What is important is that we’ve identified all the types that we expect we might get, and have identified the logic we’re going to implement for those types. Now, on to the code!

public override bool TrySetMember(SetMemberBinder binder, object value)
{
    Type dataType;
    IEnumerable values = (value is string) ? null : (value as IEnumerable);
    bool rangeCheck = (values != null);
    if (values != null)
    {
        dataType = GetGenericTypeOfArityOne(value.GetType(), typeof(IEnumerable<>)) ?? typeof(object);
    }
    else
    {
        values = ConstantEnumerator(value);
        dataType = (value != null) ? value.GetType() : typeof(object);
    }
    object[] data = new object[_table.Rows.Count];
    var nc = values.GetEnumerator();
    int rc = _table.Rows.Count;
    for (int i = 0; i < rc; i++)
    {
        if (!nc.MoveNext())
        {
            throw new ArgumentException(String.Format("Only {0} values found ({1} needed)", i, rc));
        }
        data[i] = nc.Current;
    }
    if (rangeCheck && nc.MoveNext())
    {
        throw new ArgumentException(String.Format("More than {0} values found", rc));
    }
    var c = GetColumn(binder.Name, binder.IgnoreCase);
    if (c != null && c.DataType != dataType)
    {
        _table.Columns.Remove(c);
        c = null;
    }
    if (c == null)
    {
        c = _table.Columns.Add(binder.Name, dataType);
    }
    for (int i = 0; i < rc; i++)
    {
        _table.Rows[i][c] = data[i];
    }
    return true;
}

(GetGenericTypeOfArityOne and ConstantEnumerator are methods whose names are pretty self-explanatory – and whose implementations can be found in the downloadable source code). Armed with these two methods, our type now supports all of the operations we need to implement the sample program described in Part 0 of this series. A version of the complete source code can be downloaded from this location. In Part 2, we’ll add the ability to perform numerical operations between columns. See you then!
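For reference, here is one possible implementation of the two helper methods mentioned above, GetGenericTypeOfArityOne and ConstantEnumerator. This is a sketch under the names and call sites used in TrySetMember; the actual downloadable source may differ:

```csharp
using System;
using System.Collections;

// Hypothetical implementations of the helpers referenced above.

// Walks the type's implemented interfaces looking for a closed generic
// such as IEnumerable<T>, and returns its single type argument (T),
// or null if the open generic is not implemented.
private static Type GetGenericTypeOfArityOne(Type type, Type openGeneric)
{
    foreach (var i in type.GetInterfaces())
    {
        if (i.IsGenericType && i.GetGenericTypeDefinition() == openGeneric)
        {
            return i.GetGenericArguments()[0];
        }
    }
    return null;
}

// Yields the same value forever, so a scalar can be consumed by the
// same loop that consumes a real enumeration of column values.
private static IEnumerable ConstantEnumerator(object value)
{
    while (true)
    {
        yield return value;
    }
}
```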
https://blogs.msdn.microsoft.com/curth/2009/05/24/dynamicdatatable-part-1/
# IncrediBuild: How to Speed up Your Project's Build and Analysis

*"How much longer are you going to build it?"* - a phrase that every developer has uttered at least once in the middle of the night. Yes, a build can be long and there is no escaping it. One does not simply redistribute the whole thing among 100+ cores instead of some pathetic 8-12 ones. Or is it possible?

![](https://habrastorage.org/r/w1560/getpro/habr/upload_files/08a/29b/953/08a29b953e69252d23570c54bd1fe4b9.png)

I need more cores!
------------------

As you may have noticed, today's article is about how to speed up compilation as well as static analysis. But what does speeding up compilation have to do with static analysis? It is simple - what boosts compilation also speeds up the analysis. And no, this time we will not talk about any specific solutions, but will instead focus on the most common parallelization. Well, everything here seems to be simple – we specify the physically available number of processor cores, click on the build command and go to drink the proverbial tea. But with the growth of the code base, the compilation time gradually increases. Therefore, one day it will become so large that only the nighttime will remain suitable for building an entire project. That's why we need to think about how to speed up all this. And now imagine - you're sitting surrounded by satisfied colleagues who are busy with their little programming chores. Their machines display some text on their screens, quietly, without any strain on their hardware... *"I wish I could take the cores from these idlers..."* you might think. It would be the right thing to do, as it's rather easy. Please don't take my words to heart and go arm yourself with a baseball bat! However, this is at your discretion :)

Give it to me!
--------------

Since it is unlikely that anyone will allow us to commandeer our colleagues' machines, you'll have to go for workarounds.
Even if you managed to convince your colleagues to share their hardware, you would still not benefit from the extra processors, except that you could pick the one that is faster. As for us, we need a solution that will somehow allow us to run additional processes on remote machines. Fortunately, among thousands of software categories, the [distributed build system](https://en.wikipedia.org/wiki/Build_automation) that we need has also squeezed in. Programs like these do exactly what we need: they can deliver us the idling cores of our colleagues and, at the same time, do it ~~without their knowledge~~ in automatic mode. Granted, you first need to install all of this on their machines, but more on that later...

Who will we test on?
--------------------

In order to make sure that everything is functioning really well, I had to find a high-quality test subject. So I resorted to open source games. Where else would I find large projects? And as you will see below, I really regretted this decision.

However, I easily found a large project. I was lucky to stumble upon an open source project on Unreal Engine. Fortunately, IncrediBuild does a great job parallelizing projects on UnrealBuildSystem. So, welcome the main character of this article: [Unreal Tournament](https://github.com/EpicGames/UnrealTournament). But there is no need to hurry and click on the link immediately. You may need a couple of additional clicks; see the details [here](https://www.unrealengine.com/en-US/ue4-on-github).

![](https://habrastorage.org/r/w1560/getpro/habr/upload_files/80e/619/297/80e6192973b05b4d2d99f632da5b9f72.png)

Let the 100+ cores build begin!
-------------------------------

As an example of a [distributed build system](https://en.wikipedia.org/wiki/Build_automation), I'll opt for [IncrediBuild](https://www.incredibuild.com/). Not that I had much choice - we already have an IncrediBuild license for 20 machines.
There is also the open source [distcc](https://github.com/distcc/distcc), but it's not that easy to configure. Besides, almost all our machines are on Windows.

So, the first step is to install agents on the machines of other developers. There are two ways:

* ask your colleagues via your local Slack;
* appeal to the powers of the system administrator.

Of course, like any other naive person, I first asked in Slack... After a couple of days, it barely reached 12 machines out of 20. After that, I appealed to the power of the system admin. Lo and behold! I got the coveted twenty! So, at that point I had about 145 cores (+/- 10) :)

What I had to do was to install the agents (a couple of clicks in the installer) and a coordinator. The latter is a bit more complicated, so I'll leave a [link to the docs](https://incredibuild.atlassian.net/wiki/spaces/IUM/pages/1498185796/Installing+the+Coordinator+with+an+Agent).

So now we have a distributed build network on steroids, and it's time to get into Visual Studio. Already reaching for the build command?... Not so fast :)

If you'd like to try the whole process yourself, keep in mind that you first need to build the *ShaderCompileWorker* and *UnrealLightmass* projects. Since they are not large, I built them locally. Now you can click on the coveted button:

![](https://habrastorage.org/r/w1560/getpro/habr/upload_files/30a/002/0f1/30a0020f1fe3b593bf19e5200c439b82.png)

So, what's the difference?

![](https://habrastorage.org/r/w1560/getpro/habr/upload_files/6ed/f51/d0f/6edf51d0f4a85c9c4f0a3e9daa4d4ab2.png)

As you can see, we managed to speed up the build from 30 minutes to almost 6! Not bad indeed! By the way, we ran the build in the middle of a working day, so you can expect such figures on a real test as well. However, the difference may vary from project to project.

What else are we going to speed up?
-----------------------------------

In addition to the build, you can feed IncrediBuild any tool that produces a lot of subprocesses. I myself work at PVS-Studio, where we develop a static analyzer called [PVS-Studio](https://www.viva64.com/en/pvs-studio/). Yes, I think you already guessed :) We will pass it to IncrediBuild for parallelization.

A quick analysis is as agile as a quick build: we can get local runs before the commit. It's always tempting to upload all of the files at once to the master. However, your teamlead may not be happy with such actions, especially when night builds crash on the server... Trust me - I went through that :(

The analyzer won't need any specific configuration, except that we can specify the good old 145 analysis threads in the settings:

![](https://habrastorage.org/r/w1560/getpro/habr/upload_files/81f/120/fbd/81f120fbd713e761c6503884a0668155.png)

Well, it is worth showing the local build system who the big analyzer is here:

![](https://habrastorage.org/r/w1560/getpro/habr/upload_files/8c5/ab6/e2c/8c5ab6e2cf1d082ab4ff038a6c62ecec.png)

*Details [here](https://www.viva64.com/en/b/0666)*

So, it's time to click on the build again and enjoy the speed boost:

![](https://habrastorage.org/r/w1560/getpro/habr/upload_files/0ab/f4d/29d/0abf4d29d8a35f8e202f96687b5f0ec2.png)

It took about seven minutes, which is suspiciously similar to the build time... At this point I thought I had probably forgotten to add the flag. But in the Settings screen, nothing was missing... I didn't expect that, so I went to study the manuals.

Attempt to run PVS-Studio #2
----------------------------

After some time, I recalled the Unreal Engine version used in this project:

![](https://habrastorage.org/r/w1560/getpro/habr/upload_files/f06/8cb/987/f068cb9870e17204f56147b62cf26436.png)

Not that this is a bad thing in itself, but support for the -StaticAnalyzer flag appeared much later. Therefore, it is not quite possible to integrate the analyzer directly.
At about this point, I began to think about giving up on the whole thing and having a coffee. After a couple of cups of the refreshing drink, I got the idea to finish reading the [tutorial on integration of the analyzer](https://www.viva64.com/en/b/0666/) to the end. In addition to the above method, there is also the 'compilation monitoring' one. This is the option for when nothing else helps anymore.

First of all, we'll enable the monitoring server:

```
CLMonitor.exe monitor
```

This thing will run in the background, watching for compiler calls. This will give the analyzer all the information it needs to perform the analysis itself. But it can't track what's happening in IncrediBuild (because IncrediBuild distributes processes to different machines, while monitoring works only locally), so we'll have to build once without it. A local rebuild looks very slow in contrast to the previous run:

```
Total build time: 1710,84 seconds (Local executor: 1526,25 seconds)
```

Now let's save what we collected in a separate file:

```
CLMonitor.exe saveDump -d dump.gz
```

We can reuse this dump until we add or remove files from the project. Yes, it's not as convenient as the direct UE integration through the flag, but there's nothing we can do about it - the engine version is too old.

The analysis itself is run by this command:

```
CLMonitor.exe analyzeFromDump -l UE.plog -d dump.gz
```

Just do not run it like that, because we want to run it under IncrediBuild. So, let's add this command to *analyze.bat* and create a *profile.xml* file next to it:

```
<?xml version="1.0" encoding="UTF-8" standalone="no" ?>
```

*Details [here](https://www.viva64.com/en/m/0041/)*

And now we can run everything with our 145 cores:

```
ibconsole /command=analyze.bat /profile=profile.xml
```

This is how it looks in the Build Monitor:

![There are a lot of errors on this chart, aren't there? 
](https://habrastorage.org/r/w1560/getpro/habr/upload_files/754/050/a39/754050a39c7fc57623aa061b7d580f8e.png "There are a lot of errors on this chart, aren't there? ")

There are a lot of errors on this chart, aren't there? As they say, troubles never come singly. This time, it's not about unsupported features. The way the Unreal Tournament build was configured turned out to be somewhat... 'specific'.

Attempt to run PVS-Studio #3
----------------------------

A closer look reveals that these are not errors from the analyzer itself, but rather a [failure of source code preprocessing](https://www.viva64.com/en/w/v008/). The analyzer needs to preprocess your source code first, and to do so it uses the information it 'gathered' from the compiler. Moreover, the reason for the failure was the same for many files:

```
....\Build.h(42): fatal error C1189: #error: Exactly one of [UE_BUILD_DEBUG \
UE_BUILD_DEVELOPMENT UE_BUILD_TEST UE_BUILD_SHIPPING] should be defined to be 1
```

So, what's the problem here? It's pretty simple - the preprocessor requires exactly one of the following macros to have a value of '1':

* UE\_BUILD\_DEBUG;
* UE\_BUILD\_DEVELOPMENT;
* UE\_BUILD\_TEST;
* UE\_BUILD\_SHIPPING.

At the same time, the build itself had completed successfully, so something really odd was going on. I had to dig into the logs, or rather, the compilation dump. That's where I found the problem. The point is that these macros are declared in the local *precompiled header*, whereas we only want to preprocess the file. However, the include header that was used to generate the precompiled header is different from the one that is included into the source file! The file that is used to generate the precompiled header is a 'wrapper' around the original header included into the source, and this wrapper contains all of the required macros.
So, to circumvent this, I had to add all these macros manually:

```
#ifdef PVS_STUDIO
#define _DEBUG
#define UE_BUILD_DEVELOPMENT 1
#define WITH_EDITOR 1
#define WITH_ENGINE 1
#define WITH_UNREAL_DEVELOPER_TOOLS 1
#define WITH_PLUGIN_SUPPORT 1
#define UE_BUILD_MINIMAL 1
#define IS_MONOLITHIC 1
#define IS_PROGRAM 1
#define PLATFORM_WINDOWS 1
#endif
```

*The very beginning of the build.h file*

And with this small solution, we can start the analysis. Moreover, the build will not crash, since we used the special PVS\_STUDIO macro, which is defined only for the analysis.

So, here are the long-awaited analysis results:

![](https://habrastorage.org/r/w1560/getpro/habr/upload_files/2ee/92d/cd0/2ee92dcd062e54a5c412f4eb68d55989.png)

You should agree that almost 15 minutes instead of two and a half hours is a very notable speed boost. And it's really hard to imagine that you could drink coffee for 2 hours straight and everyone would be happy about it. But a 15-minute break does not raise any questions. Well, in most cases...

As you may have noticed, the analysis lent itself very well to a speed-up, but this is far from the limit. Merging the logs into the final one takes a couple of minutes at the end, as is evident on the Build Monitor (look at the final, single process). Frankly speaking, it's not the most optimal way - it all happens in one thread, as currently implemented... So, by optimizing this mechanism in the static analyzer, we could save another couple of minutes. Not that this is critical for local runs, but runs with IncrediBuild could be even more eye-popping...

And what do we end up with?
---------------------------

In a perfect world, increasing the number of threads by *a factor of N* would increase the build speed by the same *N* factor.
But we live in a completely different world, so it is worth considering the local load on the agents (remote machines), the load on and limitations of the network (which must carry the results of the remotely distributed processes), the time needed to organize this whole undertaking, and many more details hidden under the hood.

However, the speed-up is undeniable. In some cases, not only will you be able to run an entire build and analysis once a day, you will be able to do it much more often - for example, after each fix or before commits. And now I suggest reviewing how it all looks in a single table:

![I had five runs and calculated the average for them. You've seen these figures in the charts :)](https://habrastorage.org/r/w1560/getpro/habr/upload_files/587/8a9/cf1/5878a9cf185d53e709c739a00a58b8e9.png "I had five runs and calculated the average for them. You've seen these figures in the charts :)")

I had five runs and calculated the average for them. You've seen these figures in the charts :)
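The gap between the perfect-world N-times speedup and the observed figures can be sketched with Amdahl's law, which caps the speedup by the fraction of work that stays serial (such as the single-threaded log merge mentioned above). A minimal illustration - the 90% figure below is an assumption made for the sake of the example, not a measured value:

```python
# Amdahl's law: only the parallel fraction of the work scales with the number
# of cores, so the serial remainder caps the overall speedup.
def amdahl_speedup(parallel_fraction, cores):
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# If ~90% of the analysis parallelizes and ~10% (log merging, scheduling,
# network transfers) stays serial, 145 cores give roughly a 9-10x speedup --
# the same ballpark as the observed 2.5 h -> 15 min.
print(round(amdahl_speedup(0.90, 145), 1))  # -> 9.4
```

This also shows why shaving the serial log merge would pay off disproportionately: with 145 cores, the serial part dominates the remaining runtime.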
Introduction: Using IR Remote With Raspberry Pi Without LIRC

I wanted to get IR remote input into a Raspberry Pi. I managed to get LIRC installed and tested, and everything was OK except the very last step: when I wanted to pass the IR remote key value to a Python program, it didn't pass it correctly - it passed a null value for any key. I couldn't figure out what was wrong, so I gave up on LIRC and tried to write Python code to capture the IR remote without it.

After some reading about how IR remotes communicate, it turned out the sensor output can be read as UART serial communication. I used an IR remote DIY kit, the HX1838. The IR sensor decodes the IR waves and passes the data serially, so what I did was read the data coming out of the IR sensor serially. This is a crude but simple way of reading an IR remote for simple applications on the Raspberry Pi.

Preparing the Raspberry Pi for UART serial communication:

1. Remove the ttyAMA0 entries in cmdline.txt: 'console=ttyAMA0,115200' and 'kgdboc=ttyAMA0,115200'.

sudo nano /boot/cmdline.txt

The remaining file looks like:

dwc_otg.lpm_enable=0 console=tty1 root=/dev/mmcblk0p6 rootfstype=ext4 elevator=deadline rootwait

Then save and close the editor: save the file with Ctrl + O, close the editor with Ctrl + X.

2. Update the inittab file to mask ttyAMA0:

sudo nano /etc/inittab

Comment out the line 'X:23:respawn:/sbin/getty -L ttyAMA0 115200 vt100':

#X:23:respawn:/sbin/getty -L ttyAMA0 115200 vt100

Then save and close the editor as above.

Step 1: Getting Started

Installing pySerial - to get the serial (UART) communication working, we need to install the serial module:

sudo apt-get install python-serial

Once this is installed, Python code can use it by doing import serial.

- Next, we need to wire GPIO 14 (TX) and GPIO 15 (RX). Since my aim is to receive the IR signals, I wired only GPIO 15 (RX).
- The IR sensor requires 5V and GND connections. The output signal of the IR sensor is then connected to GPIO 15.
The Python code to read the IR signal turned out to be very simple:

import serial
ser = serial.Serial("/dev/ttyAMA0")
ser.baudrate = 2400
for i in range(0, 15):    # usually the IR signal for a key is about 12-16 bytes
    data = ser.read(1)    # read 1 byte at a time
    print ord(data)       # ord converts the character read to its ASCII value

This code reads the IR signal 1 byte at a time and prints out the value. I found the baud rate by trial and error and settled on 2400 bps. Though serial communication supports up to 115 kbps, it is interesting that IR uses a lower speed. My guess is that a lower speed is more reliable, since there is less chance the IR signal loses 1 or 2 bits over the air.

Decode IR remote keys

The next step is to decode the key values. I used a standard Samsung TV IR remote for this effort. The first important point is to figure out how many bytes of data each key sends. It may vary from 12 to 16 bytes (for the ones I tried), but the byte length is usually the same for all keys. Those bytes consist of header bytes, data bytes (to identify the key), and tail bytes. The header bytes carry a signature for the model of the IR remote. I used an Excel sheet to collect the key data values, following Antzy Carmasaic's page-...

Digging into the captured key values shows that bytes 0-5 make up the header, repeated for all keys. Bytes 6 to 11 represent the key value, and there can be some tail values; byte 12 is the tail for the Samsung remote.

Mapping keys

The exact way for this remote would be to store bytes 6-11 in an array and compare it with each new incoming key. Instead, I used a simple algorithm:

keyidentity = byte[6]+2*byte[7]+3*byte[8]+4*byte[9]+5*byte[10]+6*byte[11]

It gives an almost unique value for every key. You can probably figure out a better algorithm than this. I extended the Python code to capture the Samsung remote key information. Once I calculated the mapped key value, I stored it in the Python program itself. The file is attached: ir_serial3samsung.py.
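The weighting scheme above can be sketched as a small Python 3 function. The frame bytes below are made-up values for illustration, not captured Samsung codes:

```python
# Sketch of the key-mapping idea: weight data bytes 6-11 so that each
# remote key collapses to an (almost) unique integer.
def key_identity(frame):
    """frame: raw byte values captured for one key press (at least 12 bytes)."""
    data = frame[6:12]                         # bytes 6-11 carry the key value
    return sum((i + 1) * b for i, b in enumerate(data))

# Two frames sharing the same header but differing in one data byte
# map to different identities.
frame_a = [14, 14, 33, 33, 14, 14, 10, 20, 30, 40, 50, 60, 255]
frame_b = [14, 14, 33, 33, 14, 14, 11, 20, 30, 40, 50, 60, 255]
print(key_identity(frame_a), key_identity(frame_b))  # -> 910 911
```

Note that the weighted sum is not guaranteed unique - e.g. swapping adjacent bytes can collide - which is why comparing the raw bytes 6-11 directly remains the exact approach.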
The Samsung remote sends 2 sets of data, so I capture 24 bytes in order to flush the Raspberry Pi serial capture buffer, but I use only the 1st set to decode. When you run this code, it correctly identifies the keys pressed. You can decode the rest of the keys on the remote by looking at the "keyidentity" value that the program prints out, then extend the program to include them.

Conclusion

This is a very simple and effective way to use a remote control with a Raspberry Pi and Python. You need to figure out how many total bytes make up a key, and how long the header, data, and tail bytes are. Since you would know this process from A to Z, you can easily modify it to suit your application. And since these are small Python programs, it is very easy to debug if you hit any problem.
Using Google Maps Inside of Flash CS3: Part 3

If you've been with me over the past 2 tutorials, we made a basic Google Map application in Flash CS3 in part 1, and in part 2 we created a database full of locations plus made a service using AMFPHP that allows us to access that data. In this final part, we're going to jump back into Flash, build upon that original map from part 1, and hook it up to the database from part 2. Let's jump right in and look at what we'll be building.

Live version:

Files: Download the example FLA. (Updated 12-17-2010)

As I mentioned in part 1, I'm hosting this map on my business's domain, jmx2.com, instead of SuperGeekery.com because SuperGeekery.com uses a security certificate - notice the https in front of the URL instead of the standard http - and Google Maps does not presently work with secure connections. (The only reason SuperGeekery has a security certificate is because I wanted to see if I could make it work. There is no other reason for it.)

After looking at the example, you'll notice some new elements added since part 1:

- A text field component where a user will enter an address or zip code.
- A button component for executing the search.
- A data grid component.
- A couple of text fields, there for testing purposes, displaying longitude and latitude.
- A text field for displaying feedback from the data request - the number of results, or error messages, as needed.

Let me show you around the application. Do some test searches. Type in your zip code. Did you find some locations? Try these:

123456 - This doesn't return any geo lookup from Google. Notice the feedback message shown.

62665 - The database returned no results. Notice the feedback message.

10003 - There are store results. Now the feedback displays how many stores it found.

Now that you've got some store results, they're displayed in the datagrid. Click on one of them. See how the map recenters itself to that location? Nice.
Click on any of the markers and you'll see information on that location. Cool, right?

Since the ActionScript is getting pretty long now, I will only cover some highlights in my post here, but I use many comments in the code that should answer most of your questions. The full ActionScript (which for this example all resides on frame 1 of the example file) is included at the end of this post.

If you downloaded the FLA, which I encourage you to do, you'll see that the AS starts by importing many more chunks of code than before. You might be familiar with many of the normal Flash ones, but the ones from Google might be completely new. The most useful are probably the Geocode ones from Google; they'll help us translate a street address or zip code into longitude and latitude coordinates. The Marker and InfoWindow ones give us control of the identification pieces within the map.

import com.google.maps.services.ClientGeocoder;
import com.google.maps.services.GeocodingEvent;
import com.google.maps.overlays.Marker;
import com.google.maps.overlays.MarkerOptions;
import com.google.maps.InfoWindowOptions;
import com.google.maps.controls.ZoomControl;
import com.google.maps.styles.FillStyle;
import com.google.maps.styles.StrokeStyle;

The next big update in the code is in the onMapReady function, which we trigger, unsurprisingly, when we 'hear' the event of the map becoming ready. In addition to moving the map further down on our stage than before, our updated onMapReady function now gets our geocoder ready. We assign the variable 'geocoder' a new instance of a ClientGeocoder object. Next, we need to activate the search button, which is just a button component, by assigning an event listener that will trigger a doGeocode function. The doGeocode function will make the geocoder return either failure or success, and we tell it what to do in each case. Finally, we wrap up the map being ready by creating the datagrid component and adding it to the stage.
(You might be asking yourself why I create this component with code when I have other components on stage from the beginning. This comes from the final product I was creating, which displayed an instruction screen first and only displayed results in a datagrid when they were found. Basically, I only created the datagrid when needed, and that carried over into this example.)

geocoder = new ClientGeocoder();
searchbutton.addEventListener(MouseEvent.CLICK, doGeocode);
geocoder.addEventListener(GeocodingEvent.GEOCODING_FAILURE, geoFailed);
geocoder.addEventListener(GeocodingEvent.GEOCODING_SUCCESS, geoSuccess);
makeDataGrid();

Let's now check out the doGeocode function. It's pretty simple. It takes the text that the user put in the text field on the screen and calls the geocode function on our geocoder instance. In the feedback text area, we'll tell the user we're searching the database. Lastly, just in case there is data in the datagrid from a previous search, we'll get rid of it by removing all that old data.

function doGeocode(event:Event):void {
	geocoder.geocode(address.text);
	feedbacktext.text = "searching….";
	dg.dataProvider.removeAll();
}

We have to write our functions on what to do when the geocoder responds. The first one is easy: if it fails, just tell the user it failed using the feedback text box.

function geoFailed(event:GeocodingEvent):void {
	feedbacktext.text = "We couldn't find that location in the Google database of addresses.";
}

If the geocoder responds with a success message, we've got some more work to do. First, we need to look inside the event's response to see how many placemarkers were returned from the request to Google's geocode service. If there is more than 1, we'll take Google's first response as the most likely to be correct. Before we place a marker on the map to pinpoint that place, though, we should clear off the map in case this is not the first time the map is being used, by calling the clearOverlays function on the map object.
Now we'll set the map's center point to the first placemarker our event has. In this setCenter function, we also set the zoom level, which I've set to 11 in our example. To set a marker in this center position, we'll use the Marker object from Google. Before we call the addOverlay function, which is what will make the marker visible on the map, we set a listener for MapMouseEvent.CLICK to open a window that will show the content of the variable we've called html, which holds the current text from the address text field on the screen.

We'll use the longitude and latitude text fields on the screen to display that information for our first placemark, using point.lat() and point.lng(). You'll probably want to just use variables for this later; I'm only displaying them here for testing purposes.

Note: On 12-16-2010 I got a note from Amit asking me some questions about this example, and on line 15 below he helped me find that I had swapped mylat and mylng. They have been corrected below and in the FLA you can download from this page. Thanks, Amit!

function geoSuccess(event:GeocodingEvent):void {
	if (event.response.placemarks.length > 0) {
		map.clearOverlays();
		map.setCenter(event.response.placemarks[0].point, 11);
		marker = new Marker(event.response.placemarks[0].point);
		var html:String = "" + "Your location:" + " " + address.text;
		marker.addEventListener(MapMouseEvent.CLICK, function(event:MapMouseEvent):void {
			map.openInfoWindow(event.latLng, new InfoWindowOptions({tailAlign: InfoWindowOptions.ALIGN_CENTER, contentHTML: html}));
		});
		map.addOverlay(marker);
	}
	mylat.text = event.response.placemarks[0].point.lat();
	mylng.text = event.response.placemarks[0].point.lng();
	gw.call("GetLoc.getLocation", res, mylat.text, mylng.text, '25');
}

Before we're done with the geoSuccess function, we have one more very important line of code to write. We need to access the AMFPHP code we wrote in part 2. We do this with a call to the AMFPHP gateway.
In part 2 we wrote a service called GetLoc that had a function called "getLocation". We pass it the longitude and latitude that we received from the call to the geocode function. The 25 here represents the radius, in miles, we're searching within for store locations from our database. The "res" in this line refers to the variable which holds the Responder object, which we'll set up next. If you're checking out the full code from the example file, you'll see you need to update your gateway information with your server's information. This is covered in part 2.

To set up your responder, define the variable and assign the functions to call upon a successful request to the database, onResult, or a failed request, onFault.

var res:Responder = new Responder(onResult, onFault);

In the onResult function definition, we need to deal with the results from the call to the AMFPHP gateway. Figuring out how to do this is a little tricky. Unfortunately for us using CS3, the data is returned as an array collection. This is easy to deal with in Flex, but Flash, for some reason, doesn't have the same easy way of dealing with this type of return. Luckily, Lee Brimlow tells you exactly how to deal with this by using debugging in Flash. If you want the details, be sure to watch his tutorial. If you don't want to know the details, just know that responds.serverInfo.initialData is where you need to look for your results.

Using that information, we see how many results we have, respond accordingly, and update the feedback text. To wrap up this function, we cycle through the results and add a marker to the stage for each one. While we're cycling through the results, we also add each item to the datagrid and add a listener in case the user clicks on that datagrid item.
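For reference, the radius filtering the getLocation service performs on the server is typically done with the haversine formula. Part 2's PHP isn't reproduced here, so the sketch below is an illustrative Python version of the idea, not the actual service code; the store-record field names are made up:

```python
import math

# Great-circle distance in miles between two lat/lng points (haversine formula).
def haversine_miles(lat1, lng1, lat2, lng2):
    earth_radius_miles = 3959.0
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lng2 - lng1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * earth_radius_miles * math.asin(math.sqrt(a))

# A store list like the one in part 2's database could then be filtered by radius.
def stores_within(stores, lat, lng, radius_miles):
    return [s for s in stores
            if haversine_miles(lat, lng, s["lat"], s["lng"]) <= radius_miles]
```

Calling stores_within with a radius of 25 mirrors the '25' argument passed to GetLoc.getLocation above.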
function onResult(responds:Object):void {
	var theresults:Array = responds.serverInfo.initialData;
	// let's show the user how many results we found
	if (theresults.length > 0) {
		if (theresults.length == 1) {
			feedbacktext.text = "There is " + theresults.length + " store in your area.";
		} else {
			feedbacktext.text = "There are " + theresults.length + " stores in your area.";
		}
	} else {
		feedbacktext.text = "Sorry, but there were no stores in your area. Perhaps search for a larger town near where you live. ";
	}
	for (var j:uint = 0; j < theresults.length; j++) {
		var storeName:String = theresults[j][1];
		var storeAddress:String = theresults[j][0];
		// add each item to our datagrid
		dg.addItem({nameCol: storeName, addCol: storeAddress, lngCol: theresults[j][2], latCol: theresults[j][3], distCol: theresults[j][4]});
		// this Event.CHANGE operates like a mouse CLICK event, and calls the gridItemSelected function to recenter the map with the lng/lat of the selected row's address
		dg.addEventListener(Event.CHANGE, gridItemSelected);
		createMarker(new LatLng(theresults[j][2], theresults[j][3]), storeName, storeAddress);
	}
}

function createMarker(latlng:LatLng, name:String, address:String):void {
	// the markers created in this function are for showing store locations, not for showing the initial address Google gave us. So let's change how they appear using the MarkerOptions Google gives us.
	var mylocalmarker = new Marker(latlng, new MarkerOptions({
		strokeStyle: new StrokeStyle({color: 0x987654}),
		fillStyle: new FillStyle({color: 0x223344, alpha: 0.8}),
		radius: 9,
		hasShadow: true
	}));
	// these variables hold formatting information for the info windows
	var html:String = "<b>" + name + "</b> <br/>" + address;
	mylocalmarker.addEventListener(MapMouseEvent.CLICK, function(e:MapMouseEvent):void {
		mylocalmarker.openInfoWindow(new InfoWindowOptions({fillStyle: {color: 0xFFFFFF, alpha: 0.9}, tailAlign: InfoWindowOptions.ALIGN_CENTER, contentHTML: html}));
	});
	map.addOverlay(mylocalmarker);
}

function onFault(responds:Object):void {
	map.setSize(new Point(445, 500));
	for (var i in responds) {
		feedbacktext.text = "Error: Responder didn't work from your gateway and returned an error.";
	}
}

function gridItemSelected(e:Event) {
	mylng.text = "long: " + e.target.selectedItem.lngCol;
	mylat.text = "lat: " + e.target.selectedItem.latCol;
	// pan to center the map on the selected location
	map.panTo(new LatLng(e.target.selectedItem.lngCol, e.target.selectedItem.latCol));
}

startMyMap();

Update: I've now posted all 3 tutorials for this series on SuperGeekery, plus an update about Google releasing the component with official support for the Flash IDE. Here are the links for all parts: Part 1, Part 2, Part 3, and the Update Post.

On Dec. 16, 2010, I got an email from Amit. He helped spot an error I mentioned above about some swapped variables, and those have been updated above. Another issue he had was the error:

Error #2044: Unhandled NetStatusEvent:. level=error, code=NetConnection.Call.BadVersion

Servers are not all set up the same way. I hadn't seen this error before, but in an exchange of several emails, he found a solution that might help someone else following along here. In his gateway.php file he added:

setLooseMode(true)

That line solved the problem. If you're having a similar error, give that a try. Cheers.

This is great!
Can't wait to start using it. I'm trying to add a listener to the grid items so when a user clicks an item in the grid, not only does it center that item, it also shows the info window for that item. Any help would be greatly appreciated.

COMMENT: Hey Bart, When I was building the store locator, that was on my "to do" list too. It fell off my list as my deadline approached though. If I dig back into it, I'll post it here. If you get it working, please post a comment with your solution. I'm glad you're excited by building your own. I hope this is helpful to you. Best, John

COMMENT: John, I am having trouble using the map on some computers. Any idea what might be causing this issue? All computers are XP SP3 with IE7 and all have the latest edition of Flash Player installed. It works fine on most computers, but on some computers, when I enter a zip code to search, it just says "Searching" and never loads the data. Like I say, using the same zip code to search on most computers works fine. Thanks for your help in advance. Bart

COMMENT: Can you see what version of the Flash Player you have on all those machines? Are they all the same? Here's Adobe's link that will show you what Flash version a machine has: -John

COMMENT: Yes, they are all version WIN 10,0,12,36

COMMENT: Here is a link where I have it set up. Even though I am still in the testing stage, it is live.

COMMENT: I have a hunch. On the computers that you didn't have it working on, did you type in the URL? Maybe: Instead of: I notice that both URLs are working, although they are technically different. If I use the www-less URL, I get the map, but "Searching…" never results in a result generated from the database. When I use the URL *with* the www, it works fine.

COMMENT: Also, NICE WORK! I was very excited to see it working for you! That makes my day.

COMMENT: John, you are brilliant! That was the problem. I didn't even notice that because I was clicking on shortcuts to get to the page.
I still have a lot to do to get it to do exactly what I want it to do, but this is excellent work you’ve done here. I wish you would dig that back out and get back to work on it. Sure would make my life easier. Thanks so much for your help! Bart COMMENT: Bart, glad I could help. One thing I see you accomplished was to make the info window pop up when you select the appropriate item in the data grid. What did you do there? COMMENT: Didn’t really accomplish anything yet. I’m still working on it. It’s really sloppy. If you click on a store location in the datagrid and then click back on the home location marker you’ll see what I mean. Also I’m trying to get the info window above the marker when clicking on the datagrid, like it is when you click the marker. If you want, I can email the fla to you if you’d like to take a look and see if you can figure out a better way of doing it. Oh, and how did you round the miles down to 2 decimal places? Thanks again for your help, Bart COMMENT: I wrote a function called “twoDec” which will take any number and return it to just 2 decimal places. Here that is:

function twoDec(aLongNum:Number):Number {
    var times100:Number = aLongNum * 100;
    times100 = Math.round(times100);
    var div100:Number = times100 / 100;
    return div100;
}
http://supergeekery.com/geekblog/comments/using_google_maps_inside_of_flash_cs3_part_3
and here, but I think they deserve a little more discussion. Adding USDT probes (as described in the DTrace manual) requires creating a file defining the probes, modifying the source code to identify those probe sites, and modifying the build process to invoke dtrace(1M) with the -G option, which causes it to emit an object file which is then linked into the final binary. Bart wrote up a nice example of how to do this. The mechanisms are mostly the same, but have been tweaked a bit. One of the biggest impediments to using USDT was its (entirely understandable) enmity toward C++. Briefly, the problem was that the modifications to the source code used a structure that was incompatible with C++ (it turns out you can only extern "C" symbols at the file scope -- go figure). To address this, I added a new -h option that creates a header file based on the probe definitions. Here's what the new way looks like:

provider.d

provider database {
    probe query__start(char *);
    probe query__done(char *);
};

src.c or src.cxx

...
#include "provider.h"
...
static int
db_query(char *query, char *result, size_t size)
{
    ...
    DATABASE_QUERY_START(query);
    ...
    DATABASE_QUERY_DONE(result);
    ...
}

Here's how you compile it:

$ dtrace -h -s provider.d
$ gcc -c src.c
$ dtrace -G -s provider.d src.o
$ gcc -o db provider.o src.o ...

If you've looked at the old USDT usage, the big differences are the creation and use of provider.h, and that we use the PROVIDER_PROBE() macro rather than the generic DTRACE_PROBE1() macro. In addition to working with C++, this has the added benefit that it engages the compiler's type checking since the macros in the generated header file require the types specified in the provider definition. One of the tenets of DTrace is that the mere presence of probes can never slow down the system. We achieve this for USDT probes by only adding the overhead of a few no-op instructions.
And while it's mostly true that USDT probes induce no overhead, there are some cases where the overhead can actually be substantial. The actual probe site is as cheap as a no-op, but setting up the arguments to the probe can be expensive. This is especially true for dynamic languages where probe arguments such as the class or method name are often expensive to compute. As a result, some providers -- the one for Ruby, for example -- couldn't be used in production due to the disabled probe effect. To address this problem, Bryan and I came up with the idea of what -- for lack of a better term -- I call is-enabled probes. Every probe specified in the provider definition has an associated is-enabled macro (in addition to the actual probe macro). That macro is used to check if the DTrace probe is currently enabled so the program can then only do the work of computing the requisite arguments if they're needed. For comparison, Rich Lowe's prototype Ruby provider basically looked like this:

rb_call(... {
    ...
    RUBY_ENTRY(rb_class2name(klass), rb_id2name(method));
    ...
    RUBY_RETURN(rb_class2name(klass), rb_id2name(method));
    ...
}

With is-enabled probes, Bryan was able to greatly reduce the overhead of the Ruby provider to essentially zero:

rb_call(... {
    ...
    if (RUBY_ENTRY_ENABLED())
        RUBY_ENTRY(rb_class2name(klass), rb_id2name(method));
    ...
    if (RUBY_RETURN_ENABLED())
        RUBY_RETURN(rb_class2name(klass), rb_id2name(method));
    ...
}

When the source objects are post-processed by dtrace -G, each is-enabled site is turned into a simple move of 0 into the return value register (%eax, %rax, or %o0 depending on your ISA and bitness). When probes are disabled, we get to skip all the expensive argument setup; when a probe is enabled, the is-enabled site changes so that the return value register will have a 1. (It's also worth noting that you can pull some compiler tricks to make sure that the program text for the uncommon case -- probes enabled -- is placed out of line.)
The obvious question is then "When should is-enabled probes be used?" As with so many performance questions the only answer is to measure both. If you can eke by without is-enabled probes, do that: is-enabled probes are incompatible with versions of Solaris earlier than Nevada build 38 and they incur a greater enabled probe effect. But if acceptable performance can only be attained by using is-enabled probes, that's exactly where they were designed to be used.

Posted at 10:30PM May 08, 2006 by ahl in DTrace
http://blogs.sun.com/ahl/entry/user_land_tracing_gets_better
PDL::PrimaImage - interface between PDL scalars and Prima images

Converts a 2D or 3D PDL scalar into a Prima image and vice versa.

use PDL;
use Prima;
use PDL::PrimaImage;

my $x = byte([ 10, 111, 2, 3], [4, 115, 6, 7]);
my $i = PDL::PrimaImage::image( $x);
$i->type( im::RGB);
$x = PDL::PrimaImage::piddle( $i);

image converts a 2D or 3D piddle into a Prima image. The resulting image pixel format depends on the piddle type and dimension. A 2D array is converted into one of the im::Byte, im::Short, im::Long, im::Float, or im::Double pixel types. For 3D arrays, each pixel is an array of values. image accepts piddles with tuple and triple values. For tuples, the resulting pixel format is complex (with the im::ComplexNumber bit set), where each pixel contains 2 values, either float or double, corresponding to the im::Complex and im::DComplex pixel formats. For triple values, the im::RGB pixel format is assumed. In this format, each image pixel is represented as a single combined RGB value. To distinguish the degenerate cases, like ([1,2,3],[4,5,6]), where it is impossible to guess whether the piddle is a 3x2 gray pixel image or a 1x2 RGB image, the %OPTIONS hash is used. When either the rgb or complex boolean option is set, image assumes the piddle is a 3D array. If neither option is set, image favors 2D array semantics. NB: These hints are neither used nor checked when the piddle format is explicit, and should only be used for hinting an ambiguous data representation. piddle converts a Prima image into a piddle. Depending on the image pixel type, the piddle type and dimension are selected.
The following table depicts how different image pixel formats affect the piddle type:

Pixel format      PDL type   PDL dimension
--------------------------------------------
im::bpp1          byte       2
im::bpp4          byte       2
im::bpp8          byte       2
im::Byte          byte       2
im::Short         short      2
im::Long          long       2
im::Float         float      2
im::Double        double     2
im::RGB           byte       3
im::Complex       float      3
im::DComplex      double     3
im::TrigComplex   float      3
im::TrigDComplex  double     3

Images in the pixel formats im::bpp1 and im::bpp4 are converted to im::bpp8 before conversion to a piddle, so if a raw, non-converted data stream is needed, in the corresponding 8- and 2-pixels-per-byte formats, the raw boolean option must be specified in %OPTIONS. In this case, the resulting piddle width is aligned to a 4-byte boundary. The Prima image coordinate origin is located in the lower left corner. That means that an image created from a 2x2 piddle ([0,0],[0,1]) will contain the pixel with value 1 in the upper right corner. See Prima::Image for more. The symbol is contained in the Prima toolkit; include 'use Prima;' in your code. If the error persists, it is probably a Prima error; try to re-install Prima. If the problem continues, try manually changing the value in the 'sub dl_load_flags { 0x00 }' string to 0x01 in Prima.pm - this flag is used to control namespace export (see Dynaloader for more). Dmitry Karasik, <dmitry@karasik.eu.org>. PDL-PrimaImage page, The Prima toolkit, PDL, Prima, Prima::Image.
http://search.cpan.org/~karasik/PDL-PrimaImage-1.02/primaimage.pd
I'm trying to teach myself C++ through internet tutorials and such. Right now I'm trying to write a program that finds the diameter and radius of a circle if the circumference is entered, and does the same if one of the other two measurements is entered instead. I'm using switch/case to do this. It won't even compile right, so help me find out what I did wrong, please. I'm hoping it's something basic that will make me put my face through the desk when someone points it out, rather than an error with the design, which is also very possible. :\

#include <iostream>
using namespace std;

float radius ( float r, float d, float c );
float diamter ( float r, float d, float c);
float circumference ( float r, float d, float c);

int main()
{
    float r, d, c;
    int In;
    cout<<"Enter one of a circle's measurements. \n";
    cout<<"1 - radius \n";
    cout<<"2 - diameter \n";
    cout<<"3 - circumference \n";
    cin>> In;
    switch ( In )
    {
        case 1: radius();
        case 2: diameter();
        case 3: circumference();
        default: cout<<"Bad input. \n";
    }
    cin.get();
}

float radius ( float r, float d, float c )
{
    cin>> r;
    return d * 2 * r, 2 * 3.14 * r;
}

float diameter ( float r, float d, float c)
{
    cin>> d;
    return d / 2, 3.14 * d;
}

float circumference (float r, float d, float c )
{
    cin>> c;
    return c / 3.14, c / 2 / 3.14
}
https://www.daniweb.com/programming/software-development/threads/193654/c-finding-measurements-of-circle
Here is a very simple example of how it could work. You would start this and call it with this, for example:

curl -i '' -X POST -d '{"Foo":"Bar"}'
HTTP/1.1 200 OK
Some response%

It is missing tons of things, but this should at least give you some sort of idea.

import SocketServer

class MyTCPHandler(SocketServer.BaseRequestHandler):
    def handle(self):
        self.data = self.request.recv(1024).strip()
        print self.data
        self.parse_request(self.data)
        func, args = self.path.split("/", 1)
        args = args.split("/")
        resp = getattr(self, func)(*args)
        self.request.sendall("HTTP/1.1 200 OK ")
        self.request.sendall(" ")
        self.request.sendall(resp)

    def parse_request(self, req):
        headers

You've got multiple problems here. The first is that you're printing bytes objects directly:

print(name, "wrote:".format(self.client_address[0]))

That's why you get b'Bob' wrote: instead of Bob wrote:. When you print a bytes object in Python 3, this is what happens. If you want to decode it to a string, you have to do that explicitly. You have code that does that all over the place. It's usually cleaner to use the decode and encode methods than the str and bytes constructors, and if you're already using format there are even nicer ways to deal with this, but sticking with your existing style:

print(str(name, "utf-8"), "wrote:".format(self.client_address[0]))

Next, I'm not sure why you're calling format on a string with no format parameters, or why you're mixing multi-argument pr

# Do shutdown() if threaded, else execute server_close()
if self.sysconf.threaded:
    self.socket.server.shutdown()
else:
    self.socket.server.server_close()
return (True,"")

self.sysconf is the configuration for the daemon (conf in the code in the question), and self.socket is a reference to the stream handler. The problem is that when Tkinter catches a key event, it triggers the more specific binding first (for example 'Key-Up'), and the event is never passed to the more general binding ('Key').
Therefore, when you press the 'up' key, KeyUp is called, but transmit is never called. One way to solve this would be to just call transmit() within all the callback functions (KeyUp, KeyDown, etc). For example, KeyUp would become

def KeyUp(event):
    Drive = 'forward'
    drivelabel.set(Drive)
    labeldown.grid_remove()
    labelup.grid(row=2, column=2)
    transmit()

Then you can get rid of the event binding to 'Key'. Another option would be to make "Drive" and "Steering" into Tkinter.StringVar objects, then bind to write events using "trace", like this:

Drive = tk.StringVar()
Drive.set('idle

I had the same problem, but no Tornado, no MySQL. Do you have one database connection shared with all servers? I created a multiprocessing.Pool. Each worker has its own db connection provided by an init function. I wrap the slow code in a function and map it to the Pool. So I have no shared variables and connections. Sleep doesn't block other threads, but a DB transaction may block threads. You need to set up the Pool at the top of your code.

def spawn_pool(fishes=None):
    global pool
    from multiprocessing import Pool
    def init():
        from storage import db  # private connections
        db.connect()  # connections stored in db-framework and will be global in each process
    pool = Pool(processes=fishes, initializer=init)

if __name__ == "__main__":
    spawn_pool(8)
    from storage import db  # shared connection for

Multiple files with different key values can be uploaded by adding multiple dictionary entries:

files = {'file1': open('report.xls', 'rb'), 'file2': open('otherthing.txt', 'rb')}
r = requests.post('', files=files)

Seems like you have SocketServer.py somewhere in your Python path. Check using the following command:

python -c "import SocketServer; print(SocketServer.__file__)"

Renaming that file will solve your problem. UPDATE Rename the file /Users/ddl449/Projects/visualization/SocketServer.py. If there is /Users/ddl449/Projects/visualization/SocketServer.pyc, remove that file. To do this, you'd need your client to upload in mime/multipart format.
I don't know PHP, but I'm sure there's a library out there that will support receiving/parsing the multipart messages you get. As for whether it's a good idea: if initiating the request is the creation of a single resource, it's not unreasonable to accept mime/multipart. If the parts being sent are themselves full-fledged resources, it would probably be better to make the client send them up separately, and reference them in the initiation request. Also note that mime/multipart is going to be a bit harder for your clients to deal with than simple requests. This post seems to be related to what you're trying to accomplish. Since both classes are in the same process (and hence the same memory space), why not just use shared data? If you are worried about data being overwritten by threads, then you can add a lock on that data. You could also consider making the second class an instance of the first one -- in that case, sharing data would be seamless. Here is a discussion you might find useful: How to share data between two classes. Try this (you can remove the lock if you have only one client and it is okay to be overwritten):

lock = threading.Lock()
data_title = ""

class Network(Tk):
    def __init__(self, server):
        Tk.__init__(self)
        self._server = server
        t = threading.Thread(target=self._server.serve_forever)
        t.setDaemon(True)  # don't hang on exit
        t.start()

Passing NetworkCredential to HttpWebRequest in C# from an ASP.NET page: this link shows NetworkCache, which may be a solution for you. You can easily refactor your code to collapse the 15 requests into one by using pipelines (which redis-rb supports). You get the ids from the sorted sets with the first request and then you use them to get the many keys you need based on those results (using the pipeline). With this approach you should have 2 requests in total instead of 16 and keep your code quite simple. As an alternative you can use a Lua script and fetch everything in one request.
You can use the same file for multiple requests. You can supply parameters along with the AJAX request, either by including them in the URL after ? (they'll be available in $_GET and $_REQUEST) or by using the POST method and sending them as form data (they'll be available in $_POST and $_REQUEST). You can use the JavaScript FormData API to encode this properly; see the documentation here. Using the jQuery library can simplify all of this. One of the parameters can then be a command or operation code, and the script can take different actions based on this. Well, I'm not sure what you are asking, but EasyPHP is just an Apache server, so you can make as many virtual hosts as you wish... that way you can make one script1/ and a second script2/. If you type script1/ into your browser you will be running the first website... and you can, in another tab, open script2/ and run the second website at the same time... you can look here: Working on multiple sites with an easyPHP offline server, or just google "apache virtual host"... There are much better ways of doing this. The problem with what you are trying to do is that it is synchronous, which means your app will have to wait for this action to be completed before it can do anything else. I definitely would recommend looking into making this an asynchronous call by simply using NSURLConnection and NSURLRequest, and setting up delegates for them. They are relatively simple to set up and manage and will make your app run a million times smoother. I will post some sample code to do this a little later once I get home. UPDATE First, your class that is calling these connections will need to be a delegate for the connections in the interface file, so something like this.

ViewController.h

@interface ViewController: UIViewController <NSURLConnectionDele

The problem is that $.when acts on deferred objects; however, sub doesn't return anything, so $.when fires right away.
So what you need to do is to collect all the deferred objects returned by the ajax calls and return them:

var order = [];

function sub(selector) {
    var deferredList = [];
    selector.each(function() {
        var out = {
            "some": "random",
            "stuff": "here"
        };
        var deferred = $.ajax({
            type: "POST",
            url: "/test/url",
            dataType: 'json',
            contentType: "application/json; charset=utf-8",
            data: JSON.stringify(out),
            success: function(response) {
                $(this).attr("data-response", response);
                order.push(response);

I would recommend using the while loop just for accepting clients. Create a new thread where you handle everything related to the Socket remote. The thread should also have a while loop, where it reads from remote's InputStream, and don't close the socket right after creation. Then you can see what exactly your browser sends. Because the way it is right now, you close the socket to the browser right after creation. I am assuming by web method you mean a method in your code that a servlet container like Tomcat's Catalina would map an HTTP request to. Tomcat tries to service each request in its own thread and I would assume these threads would eventually get run on the sole instance of the singleton object that has the web method. The maxThreads attribute in server.xml can set a limit on how many such threads would get spawned at a time. After further investigation it seems that Restangular does not implement a feature to limit requests for the same resource. The number of requests that go out to the server is dependent on the browser: Chrome sends out only one GET request, IE 10 sends out two. It's at least partially because each new XMLHttpRequest is being set to the same global x, which can only keep 1 of them. This means later references to x.readyState and x.responseText aren't always referring to the "correct" instance.
You'll want to declare x when or before setting it so it's scoped and unique to each Ajax request:

var x = new XMLHttpRequest();

For more info, see Difference between using var and not using var in JavaScript. Of course, the $row variable doesn't exist in the function. You need to pass it as a parameter.

function action($row) {
    ...code...
    //Do something with $row
}

foreach($result1 as $row) {
    action($row);
}

foreach($result2 as $row) {
    action($row);
}

Considering you have roughly 7 Mbit/s (1 MB/s counting high), if you get 2.888 pages per second (10,400 pages per hour), I'd say you're maxing out your connection speed (especially if you're running ADSL or WiFi; you're hammering with TCP connection handshakes for sure). You're downloading a page containing roughly 354 kB of data in each of your processes, which isn't half bad considering that's close to the limit of your bandwidth. Taking into account TCP headers and all that happens when you actually establish a connection (SYN, ACK, etc.), you're at a decent speed, tbh. Note: this only takes into account the download rate, which is much higher than your upload speed; the upload side also matters, since that's what actually transmits your connection requests and headers to the web server. I would not implement the cache on the (PHP) application level. REST is HTTP, therefore you should use a caching HTTP proxy between the internet and the web server. Both servers, the web server and the proxy, could live on the same machine, at least until the application grows (if you worry about costs). I see two fundamental problems when it comes to application- or server-level caching: using memcached would lead to a situation where a user session is required to be bound to the physical server where the memcache exists, which makes horizontal scaling a lot more complicated (and expensive); and software should be developed in layers, where caching should not be part of the application layer (and/or business logic).
It is a different layer using specialized components. And as there are well kn

You will have to use a Proxy while opening a connection... using a proxy always provides a new IP address to the server, so you can be sure that the server maintains a different session for each request... your code will be something like the following...

CookieHandler.setDefault(new CookieManager(null, CookiePolicy.ACCEPT_ALL));
URL url = new URL("");
// java.net.Proxy takes a type and a SocketAddress
HttpURLConnection connection = (HttpURLConnection) url.openConnection(
    new Proxy(Proxy.Type.HTTP, new InetSocketAddress("some_proxy", 8080)));

I redesigned the way I do my context now. I have my context, then I implement IDbContextFactory<TContext>, called DefaultContextFactory<MyContext>, and I inject them. In the Repository's public constructor I have _context = contextFactory.Create();. Then throughout the repository I just use _context.WhatEver() and it's fine. I also did Bind<IRepository>().To<DefaultRepository>().InTransientScope() in the ModuleLoader in order to make every call to it create a new repository! I don't need a repository factory because I only have one repository! To my knowledge, Spyne does the right thing there and the request is incorrect. Child elements are always under the parent's namespace. The children of those elements can be in their own namespace.

<a:Foo xmlns:
<a:Baz xmlns:
<a:Bar>blah</a:Bar>
</a:Foo>

That said, you can just use the soft validator which doesn't care about namespaces.
if 'requests' is a directory, which has __init__.py, Python executes this file each time it sees from requests import ... or import requests. See more in Modues. You're sending params, not data: p = requests.post(token_url, params = data) When you pass a dictionary as a params argument, requests tries to send it as part of the query string on the URL. When you pass a dictionary as a data argument, requests will form-encode it and send it as the POST data, which is the equivalent to what curl's -F does. You can verify this by looking at the request URL. If print(p.url) shows something like…, that means your parameters ended up on the URL instead of in the post data. See Putting Parameters in URLs and More complicated POST requests in the quick-start documentation for full details. For more complicated debugging, you may want to consider pointing bot Doesn't look like it can be done any longer. Every URL gets passed through requote_uri in utils.py. And unless I'm missing something, the fact this API wants JSON with spaces in a GET parameter is a bad idea.
http://www.w3hello.com/questions/SocketServer-Python-with-multiple-requests
Bowling
From HaskellWiki
Revision as of 10:19, 30 April 2007 by ChrisKuklewicz (Talk | contribs)

1 The problem

Convert a list of the number of pins knocked over by each ball into a final score for a game of ten pin bowling.

2 Solution by Eric Kidd

On his blog. This is a recursive solution.

3 Short Solution by Chris Kuklewicz

import Control.Monad.State

score = sum . evalState (replicateM 10 (State scoreFrame))
  where scoreFrame (10:rest@(a:b:_)) = (10+a+b,rest)
        scoreFrame (x:y:rest) | x+y < 10  = (x+y,rest)
                              | otherwise = (10+head rest,rest)

score2 = fst . (!!10) . iterate addFrame . (,) 0
  where addFrame (total,10:rest@(a:b:_)) = (total+10+a+b ,rest)
        addFrame (total,x:y:rest) | x+y < 10  = (total+x+y ,rest)
                                  | otherwise = (total+x+y+head rest,rest)

4 Solution by Chris Kuklewicz

Listed here. This has a monad-using parser with inferred type: StateT [Int] (Error String) Frame. The constraint that there are 10 frames is declared by using (replicateM 10). All invalid input lists should be recognized.

module Bowling(Frame(..),Game(..),toGame,toBalls,score) where

import Control.Monad(replicateM,when)
import Control.Monad.State(get,put,evalStateT,StateT(..))
import Control.Monad.Error(throwError)
import Data.Array.IArray(Array,(!),elems,listArray,inRange)

-- | Representation of a finished Game of bowling
data Game = Game { gameScore :: Int
                 , tenFrames :: Array Int Frame
                 } deriving (Show,Eq)

-- | Compact representation of a Frame from a finished Game
data Frame = Normal { frameScore, first, second :: Int}
           | Spare  { frameScore, first :: Int}
           | Strike { frameScore, next :: Int}
  deriving (Show,Eq)

-- | Convert a list of pins to a final score
score :: [Int] -> Int
score balls = case toGame balls of
                Left msg -> error msg
                Right g  -> gameScore g

-- | Convert a Game to a list of (list of pins) for each frame
toBalls :: Game -> [[Int]]
toBalls (Game {tenFrames = frames}) = foldr (:) final (map decode (elems frames))
  where decode (Normal {first=x,second=y}) = [x,y]
        decode (Spare  {first=x})          = [x,10-x]
        decode (Strike {})                 = [10]
        final = case (frames ! 10) of
                  Normal {}                     -> []
                  Spare  {frameScore=s}         -> [[s-10]]
                  Strike {frameScore=s,next=10} -> [[10],[s-20]]
                  Strike {frameScore=s,next=a}  -> [[a,s-a-10]]

-- | Try to convert a list of pins to a Game
toGame :: [Int] -> Either String Game
toGame balls = do
  frames <- parseFrames balls
  return $ Game { gameScore = sum (map frameScore frames)
                , tenFrames = listArray (1,10) frames }

-- This will only return an error or a list of precisely 10 frames
parseFrames balls = flip evalStateT balls $ do
  frames <- replicateM 10 (StateT parseFrame)
  remaining <- get
  case (last frames,remaining) of
    (Normal {} , [])     -> return frames
    (Spare {}  , _:[])   -> return frames
    (Strike {} , _:_:[]) -> return frames
    _                    -> err balls "Too many balls"

parseFrame balls@(10:rest) = do
  case rest of
    (a:b:_) -> do checkBalls [a,b]
                  when ((a /= 10) && (a+b > 10)) $
                    err balls "More than 10 pins in frame after a strike"
                  return (Strike (10+a+b) a,rest)
    _       -> err balls "Too few balls after a strike"
parseFrame balls@(x:y:rest) = do
  checkBalls [x,y]
  case compare (x+y) 10 of
    GT -> err balls "More than 10 pins in a frame"
    LT -> return (Normal (x+y) x y,rest)
    EQ -> case rest of
            []    -> err balls "No ball after a spare"
            (a:_) -> do checkBall a
                        return (Spare (10+a) x,rest)
parseFrame balls = err balls "Not enough balls"

err balls s = throwError (s ++ " : " ++ show (take 23 balls))

checkBalls = mapM_ checkBall

checkBall x = when (not (inRange (0,10) x))
                   (throwError $ "Number of pins is out of range (0,10) : " ++ show x)
http://www.haskell.org/haskellwiki/index.php?title=Bowling&direction=prev&oldid=32483
Product of array elements except for the current index in Java

Hi coders! Today we are going to see an ArrayList program where we have to take an array as input and print the product of all elements except the element at the current index. This question is very interesting, as it is asked in many company placement rounds. So let's start learning how to find the product of array elements except for the current index in Java.

Product of array elements except for the current index in Java

For a better understanding of the question, let's take an array:

Size = 6
Input array = 5, 4, 6, 3, 2, 10
Output array = 1440, 1800, 1200, 2400, 3600, 720

Explanation:- Leave the element at the current index and give the product of the rest of the elements of the given array:-

1440 = 4*6*3*2*10
1800 = 5*6*3*2*10
1200 = 4*5*3*2*10
:

The naive solution that comes to mind is to nest two for loops: the outer loop visits each index, and the inner loop multiplies together all of the other elements and prints the product. But nesting the loops makes the time complexity O(n²), which is not ideal.

Another way of solving this is to find the product of all the numbers and then divide this product by the number at the current index and print the quotient. It is not the best solution (for one thing, it breaks if any element is zero), but it is better than the naive solution, as its time complexity is O(n).
Java code to find the product of array elements except for the current index in Java

import java.util.ArrayList;
import java.util.List;
import java.util.Scanner;

public class product_array {
    public static void main(String[] args) {
        Scanner sc = new Scanner(System.in);
        List<Integer> input = new ArrayList<Integer>();
        int size = sc.nextInt();
        for (int i = 0; i < size; i++) // input the array
        {
            input.add(sc.nextInt());
        }
        int product = 1;
        for (int i = 0; i < size; i++) // take product of all numbers
        {
            product *= input.get(i);
        }
        for (int j = 0; j < size; j++) // divide the specific number and print the rest
        {
            System.out.println(product / input.get(j));
        }
    }
}

OUTPUT:-

Enter the size of input array
6
Enter the array element
4 5 6 3 2 10

Output array:-
1800
1440
1200
2400
3600
720

Hope you understand the code.
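One more wrinkle the article doesn't cover: the division approach fails when the array contains a zero. The standard O(n) alternative (not part of the original article) builds prefix and suffix products instead, so no division is needed:

```java
import java.util.Arrays;

public class ProductExceptSelf {
    static int[] productExceptSelf(int[] a) {
        int n = a.length;
        int[] out = new int[n];
        // First pass: out[i] holds the product of everything to the left of i
        out[0] = 1;
        for (int i = 1; i < n; i++) {
            out[i] = out[i - 1] * a[i - 1];
        }
        // Second pass: multiply in the product of everything to the right of i
        int right = 1;
        for (int i = n - 1; i >= 0; i--) {
            out[i] *= right;
            right *= a[i];
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(productExceptSelf(new int[]{5, 4, 6, 3, 2, 10})));
        System.out.println(Arrays.toString(productExceptSelf(new int[]{1, 2, 0, 4})));
    }
}
```

The second call shows the zero case working: every position except the zero's own gets a product of 0.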
https://www.codespeedy.com/product-of-array-elements-except-for-the-current-index-in-java/
#include <db.h>

int
DBcursor->cmp(DBC *DBcursor, DBC *other_cursor, int *result, u_int32_t flags);

The DBcursor->cmp() method compares two cursors for equality. Two cursors are equal if and only if they are positioned on the same item in the same database. The DBcursor->cmp() method returns a non-zero error value on failure and 0 on success. The other_cursor parameter references another cursor handle that will be used as the comparator. If the call is successful and both cursors are positioned on the same item, result is set to zero. If the call is successful but the cursors are not positioned on the same item, result is set to a non-zero value. If the call is unsuccessful, the value of result should be ignored. The DBcursor->cmp() method may fail and return one of the following non-zero errors:

Database Cursors and Related Methods
https://docs.oracle.com/cd/E17276_01/html/api_reference/C/dbccmp.html
Libra First Impressions Part 2: Deep dive with a Libra transaction

This is the second article in a 3-article series on the technology behind Libra. The first article is an overview of Libra and its smart contract programming language Move. In this piece, we will deep dive into the technology stack and use the source code to show what happens behind the scenes when users interact with Libra. Taking "getting Libra coins" as an example, let's examine how the Libra Client and Validator Faucet process and run a transaction.

This article is researched by Hydai from Second State, a VC-funded, enterprise-focused, smart contract platform company. We are still in stealth mode while making contributions to leading open source projects. We are launching our first products soon.

Interacting with Libra

We can explain the relationship between Libra Client and Validator through the following diagram from the Libra Technical White Paper. Libra has two categories of blockchain participants: Clients (users) and Validators (LibraBFT consensus nodes). An average user submits transactions and database queries to Validators through the Client.

Request Libra Coin from the faucet

- Start the Libra client and wait for the Libra CLI to be connected to the testnet. The Libra CLI shows libra% and is now ready for user commands.
- When a user enters account mint <MyAddr> <NumberOfCoins> in the Libra CLI to mint some coins, the Libra CLI will package the current sender <MyAddr>, the minting operation's transaction script (libra/language/stdlib/transaction_scripts/mint.mvir), and the quantity to be requested into a transaction (see the reference section below for the transaction format).
- The Libra client submits this transaction to the validator after signing it.
- When a Validator receives the transaction, it is placed in the mempool and shared with the other Validators.
- The Validators' governance design is LibraBFT: each validator takes turns acting as a leader.
The leader picks the transactions it wants to execute from the mempool into a proposed block and broadcasts it to the other validators. Once 2f+1 validators have voted for this proposed block, the leader assembles a quorum certificate and broadcasts it to all validators.

- At this moment the transactions in the proposed block are executed and committed to the versioned database.

How does the validator execute a Transaction

Libra designed a Move VM specifically for the Move language. When a validator executes a transaction, it uses the Move VM! Let's see how. There are 6 steps: check signature, prologue, verify, publish module, run the transaction script, and finally, epilogue.

Check Signature: Check if the transaction signature matches the transaction data and the sender's public key. At this stage, it just verifies the transaction details. It has not yet interacted with the versioned database or the Move VM.

Run Prologue: There are three checks, in the following order:

- Check if the sender public key in the transaction is the same as the authentication key held by the sender account in the database. HASH(Sender Public Key) == SenderAddr.LibraAccount.T.AuthenticationKey
- Check if the sender has enough Libra Coin to pay the gas fee. TX.gasPrice * TX.maxGasAmount <= SenderAddr.LibraAccount.T.balance
- Check if the sequence number of the transaction matches the sequence number of the sender account in the current database. TX.SequenceNumber == SenderAddr.LibraAccount.T.SequenceNumber

All checks are performed by executing the procedure prologue() of the built-in module 0x0.LibraAccount. Since the prologue() work is required by the system, Libra does not charge a gas fee for the 0x0.LibraAccount.prologue() procedure, even though prologue() is executed by the Move VM.

Verify Transaction Script and Module: Security is a top priority in designing Move.
To ensure security, the VM uses the Move bytecode verifier to check the transaction script or deployed module, guaranteeing the safety of types, references, and resources.

Publish Module: If there is a module to be deployed, it is deployed to the sender address at this stage.

Run transaction script: The Move VM binds the transaction arguments to the parameters of the transaction script, then runs the script. If it succeeds, events are generated and the result is written back to the global state. If it fails (including out of gas, execution error, etc.), the modifications to the global state are reverted.

Run Epilogue: Success or failure, the VM will execute this step for all transactions. The Move VM calls the built-in module 0x0.LibraAccount's procedure epilogue():

- Charge the gas fee: SenderAddr.LibraAccount.T.Balance -= TX.gasPrice * TX.gasUsed
- Advance the sequence number: SenderAddr.LibraAccount.T.SequenceNumber += 1

Similar to the prologue(), even though LibraAccount.epilogue() is run via the Move VM, the sender won't be charged any gas fee for the epilogue() itself.

A concrete example

Now we have seen how a transaction is processed by the validators. Let's use a complete example to demonstrate how to mint or create some Libra coins on the testnet. The process starts from the Libra CLI (Command Line Interface). When we start the Libra CLI, it goes through the following process to initialize the client and allow the user to enter commands to interact with the Libra testnet. In this section, let's review how the CLI processes the account mint <address> <number of coins> command. Before the user sees the input prompt libra %, the Libra CLI loads the genesis, faucet account, local account and other information from the configuration file, and starts the ClientProxy. In subsequent operations, the user's instructions are wrapped by the ClientProxy, which composes the corresponding transaction and sends it to Libra's validators.
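The prologue checks and epilogue bookkeeping described above can be condensed into a few lines of Python pseudocode. This is purely illustrative: the field and function names mirror the article's notation, not Libra's actual Rust sources, and the hash function is a stand-in:

```python
# Illustrative sketch of the prologue checks and epilogue bookkeeping
# described above. Field names follow the article's notation.
import hashlib

def auth_hash(data):
    """Stand-in for the hash used to derive authentication keys."""
    return hashlib.sha3_256(data).hexdigest()

class Account:
    def __init__(self, public_key, balance, sequence_number):
        self.authentication_key = auth_hash(public_key)
        self.balance = balance
        self.sequence_number = sequence_number

def prologue(account, tx):
    # 1. The sender's public key must hash to the stored authentication key.
    assert auth_hash(tx["public_key"]) == account.authentication_key
    # 2. The sender must be able to cover the maximum possible gas charge.
    assert tx["gas_price"] * tx["max_gas_amount"] <= account.balance
    # 3. The sequence number must match, which prevents replayed transactions.
    assert tx["sequence_number"] == account.sequence_number

def epilogue(account, tx, gas_used):
    # Charge for the gas actually consumed and advance the sequence number.
    account.balance -= tx["gas_price"] * gas_used
    account.sequence_number += 1

acct = Account(b"alice-key", balance=1_000, sequence_number=7)
tx = {"public_key": b"alice-key", "gas_price": 1,
      "max_gas_amount": 100, "sequence_number": 7}
prologue(acct, tx)
epilogue(acct, tx, gas_used=42)
print(acct.balance, acct.sequence_number)   # 958 8
```

Note how a replayed copy of the same transaction would now fail check 3, since the account's sequence number has moved past the transaction's.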
LibraCommand

When the user wants to request 100 Libra coins for their first account, they enter account mint 0 100 and the Libra CLI parses the entire input string and sends it to LibraCommand for execution. The account keyword in the string means that the command will call LibraCommandAccount::execute() to execute mint 0 100. When LibraCommandAccount identifies the keyword mint in the command, it calls the subcommand LibraCommandAccountMint::execute() with 0 100.

ClientProxy

But AccountCommandMint::execute() does not directly send a transaction to the Libra validator. Instead, it goes through the Libra CLI's most important element: the ClientProxy. Let's take a look at the process!

AccountCommandMint::execute() sends 0 (the first account) and 100 (the number of Libra coins) to ClientProxy::mint_coins(). The ClientProxy will first check whether there is a faucet account on the local machine. If the local machine does not have a faucet account, the ClientProxy calls the mint_coins_with_faucet_service() method and wraps mint 0 100 into a URL (https://<faucet server>?amount=100&account=0) to send the request to a remote faucet server.

When there is a local faucet account, the ClientProxy takes a completely different path to execute the mint request and calls ClientProxy::mint_coins_with_local_faucet_account() instead. At this point, a Libra transaction is created:

- The vm_genesis::encode_mint_program() method loads the mint.mvir transaction script written in the Move language under language/stdlib/transaction_scripts.
- The ClientProxy then calls ClientProxy::create_submit_transaction_req() to include information such as the transaction script, sender address, gas price, and max gas amount in the Libra transaction.
- Finally, the ClientProxy uses GRPCClient::submit_transaction() to send this transaction to the validator.
If the user requires the transaction to complete before account mint 0 100 returns, the ClientProxy will call wait_for_transaction() and display "waiting ..." until the transaction is completed.

What's next

In this article, we demonstrated how Libra processes a transaction through a review of its source code. In the next article in this series, we will create new Libra modules to support our own coins on the Libra blockchain. Stay tuned!

References

Transaction Format

Libra defines in detail that a transaction should have the following fields:

- Sender Address: The sender's address, which will be used to query the ledger for the Libra account at this address.
- Sender Public Key: The sender's public key, which will be used to verify that this transaction was signed by the sender, and to check that this public key matches the authentication key retained by the LibraAccount at this address.
- Program: A Move module or transaction script.
- Gas Price: The gas price for this transaction.
- Max Gas Amount: The gas limit for this transaction.
- Sequence Number: This needs to match the LibraAccount.T.SequenceNumber at the current address. It is the verification field used to defend against attacks such as replay attacks.

Libra Storage Layout

Each address has a Module section and a Resource section. Each address can have multiple modules; the only restriction is that there can only be one module with a given name in an address. In the following figure, for example, 0x0 already has the Account module. When the user tries to publish another Account module to 0x0, the transaction fails with an error. In Libra, each address has an independent namespace. Therefore, if a module is deployed at different addresses, each deployment is a distinct module. For example, in the figure below, even though 0x0 and 0x4 have the same Account module, 0x0.Account.T {...} and 0x4.Account.T {...} are entirely different resources.
https://medium.com/hackernoon/libra-first-impressions-ab98b831fefe?source=rss-10e8b71566e------3
In this short article, we're going to take a look at some of the ways we can run functions before and after a request in Flask, using the before_request and after_request decorators and a few others.

We'll start out with a very basic Flask application:

from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    print("index is running!")
    return "Hello world"

if __name__ == "__main__":
    app.run()

before_request

The before_request decorator allows us to create a function that will run before each request. We can use it by decorating a function with @app.before_request:

@app.before_request
def before_request_func():
    print("before_request is running!")

Adding this function to our application and making a request to the route /, we get the following output in the terminal:

before_request is running!
index is running!

before_request functions are ideal for tasks such as:

- Opening database connections
- Working with the flask g object

Functions decorated with before_request are not required to return anything. However, if a before_request function returns a non-None value, it will be handled as if it were the return value for the view, and any further request handling is stopped. For example:

@app.before_request
def before_request_func():
    print("before_request is running!")
    return "Intercepted by before_request"

If you were to make any requests or go to any route in your application, you'd see Intercepted by before_request as the return value.

Let's import session and g and work with them in the before_request function:

from flask import session, g

To work with the session object, we'll need to assign a secret_key to our app:

app.secret_key = "MySecretKey1234"

For the sake of this example, we'll just assign a key & value to the session object and set an attribute username on the g object.
You would typically use a function like this to:

- Get a unique user ID from the session
- Fetch the user from a database
- Assign the user to the g object

Here's the example:

@app.before_request
def before_request_func():
    session["foo"] = "bar"
    g.username = "root"
    print("before_request is running!")

Running the app and accessing our route gives us the same output as before, so let's access these values from our route:

@app.route("/")
def index():
    username = g.username
    foo = session.get("foo")
    print("index is running!", username, foo)
    return "Hello world"

Running the app and accessing our route, we see the following output in the terminal:

before_request is running!
index is running! root bar

We're able to access session["foo"] and g.username which we set in before_request, as they're available in the context of the request.

before_first_request

Functions decorated with @app.before_first_request will run once before the first request to this instance of the application:

@app.before_first_request
def before_first_request_func():
    print("This function will run once")

Running the app and making a couple of requests, we see the following output:

This function will run once
before_request is running!
index is running! root bar
127.0.0.1 - - [10/Apr/2019 13:42:10] "GET / HTTP/1.1" 200 -
before_request is running!
index is running! root bar
127.0.0.1 - - [10/Apr/2019 13:42:12] "GET / HTTP/1.1" 200 -

As you'll notice, the before_first_request function only ran once, before the very first request to the app, and is ignored on subsequent requests. Depending on your application, you may want to use a before_first_request function to do some database maintenance or any other task that only needs to happen once.

after_request

Functions decorated with after_request work in the same way as before_request, except they are run after each request. An important thing to note is that any after_request function must take and return an instance of the Flask response class.
Here's a simple example:

@app.after_request
def after_request_func(response):
    print("after_request is running!")
    return response

Running our app and making a request to our route, we see:

before_request is running!
index is running! root bar
after_request is running!

after_request functions will be run after every request and provide access to the request context, meaning we still have access to session and g. For example:

@app.after_request
def after_request_func(response):
    username = g.username
    foo = session.get("foo")
    print("after_request is running!", username, foo)
    return response

At this point you might be thinking: this is a great place to do something like close a database connection, and you'd be right. However, something to note is that any functions decorated with after_request will NOT run if your application throws an exception. We can illustrate this by raising an exception in our route:

@app.route("/")
def index():
    raise ValueError("after_request will not run")
    username = g.username
    foo = session.get("foo")
    print("index is running!", username, foo)
    return "Hello world"

Running our app and accessing our route, we see:

This function will run once
before_request is running!
127.0.0.1 - - [10/Apr/2019 14:02:17] "GET / HTTP/1.1" 500 -
Traceback (most recent call last):
# etc ...

As expected, both our before_first_request and before_request functions ran; however, raising the ValueError in our route brought our application to a halt and after_request didn't run. Fortunately, we can get around this with teardown_request.

teardown_request

Functions decorated with teardown_request behave similarly to after_request functions; however, they have the added benefit of being triggered regardless of any exceptions raised. This makes teardown_request functions a great place to do any cleanup operations after requests, as we know these functions will always run.

Note - In debug mode, Flask will not tear down a request on exception immediately.
From the Flask docs: if the teardown was triggered by an unhandled exception, the function will be passed the error object.

@app.teardown_request
def teardown_request_func(error=None):
    print("teardown_request is running!")
    if error:
        # Log the error
        print(str(error))

Due to the way Flask handles teardown_request in debug mode, we'll run export FLASK_ENV=production to see our function in action.

Running our app without raising the ValueError in our route, we see:

This function will run once
before_request is running!
index is running! root bar
after_request is running! root bar
teardown_request is running!

We can see that teardown_request ran as expected, as it will run regardless of whether an exception was raised or not.

Running our app and raising the ValueError in our route, we see:

This function will run once
before_request is running!
[2019-04-10 15:08:59,003] ERROR in app: Exception on / [GET]
Traceback (most recent call last):
...
teardown_request is running!
after_request will not run

Digging through the debug message in the console, we see that teardown_request has run and the message raised in the exception has been passed to it and printed, whereas after_request was not triggered.

All together

Putting everything together, our application looks like this:

from flask import Flask
from flask import session, g

app = Flask(__name__)
app.secret_key = "iufh4857o3yfhh3"

@app.before_first_request
def before_first_request_func():
    """
    This function will run once before the first request to this
    instance of the application. You may want to use this function
    to create any databases/tables required for your app.
    """
    print("This function will run once ")

@app.before_request
def before_request_func():
    """
    This function will run before every request. Let's add something
    to the session & g. It's a good place for things like establishing
    database connections, retrieving user information from the session,
    assigning values to the flask g object etc..
    We have access to the request context.
    """
    session["foo"] = "bar"
    g.username = "root"
    print("before_request is running!")

@app.route("/")
def index():
    """
    A simple route that gets a session value added by the before_request
    function and g.username, and returns a string.
    Uncommenting `raise ValueError` will throw an error, but the
    teardown_request function will still run.
    """
    # raise ValueError("after_request will not run")
    username = g.username
    foo = session.get("foo")
    print("index is running!", username, foo)
    return "Hello world"

@app.after_request
def after_request_func(response):
    """
    This function will run after a request, as long as no exceptions
    occur. It must take and return the same parameter - an instance
    of response_class.
    This is a good place to do some application cleanup.
    """
    username = g.username
    foo = session.get("foo")
    print("after_request is running!", username, foo)
    return response

@app.teardown_request
def teardown_request_func(error=None):
    """
    This function will run after a request, regardless of whether an
    exception occurs or not. It's a good place to do some cleanup,
    such as closing any database connections.
    If an exception is raised, it will be passed to the function.
    You should do everything in your power to ensure this function
    does not fail, so liberal use of try/except blocks is recommended.
    """
    print("teardown_request is running!")
    if error:
        # Log the error
        print(str(error))

if __name__ == "__main__":
    app.run()

Wrapping up

Using some of Flask's built-in decorators allows us to add another layer of functionality and validation to our applications, and should be taken advantage of! We've only provided a few examples here, and you'll find other useful functions for working with requests, both before and after, at the link to the Flask API documentation below:
https://pythonise.com/series/learning-flask/python-before-after-request
If a row changes during a session connection, a select on this row will not retrieve the changes but the old snapshot of the row. However, an update to the row does recognize changes, because the update will fail if the where clause is structured to only match the old snapshot. For example, run program number 1 multiple times during the running of program 2, and the output will tell the story (note that the tables are InnoDB types):

PROGRAM 1:

import MySQLdb

db = MySQLdb.connect("localhost", "root", "", "DEV")
cursor = db.cursor()
cursor.execute("set autocommit = 0")
cursor = db.cursor()
cursor.execute("update mytable set version = version + 1, sys_mod_date = sysdate() where name = 'me'")
db.commit()
cursor.execute("select * from mytable where name = 'me'")
rs = cursor.fetchone()
print str(rs)

PROGRAM 2:

import MySQLdb

db = MySQLdb.connect("localhost", "root", "", "DEV")
cursor = db.cursor()
cursor.execute("set autocommit = 0")
while 1:
    raw_input("hit enter")
    cursor = db.cursor()
    cursor.execute("select * from mytable where name = 'me'")
    rs = cursor.fetchone()
    print str(rs)

Andy Dustman 2003-07-07
Logged In: YES user_id=71372
This sounds like normal behavior.

Nobody/Anonymous 2003-07-07
Logged In: NO
Doesn't this have something to do with MySQL's default transaction level being set to repeatable read? And therefore the Python stuff is behaving as it should?

Kevin Smith 2003-07-10
Logged In: YES user_id=638620
Thanks for the info. I have retested the scenarios with the knowledge that MySQL's default behaviour is set to repeatable reads. If I place a commit statement after the select statement in program 2, everything works "correctly". This is because the commit will advance the timepoint for the read. It should be noted that it may be safer to issue a rollback statement, as this will also advance the timepoint - it all depends on what you are trying to or not trying to accomplish. This bug should be rejected as it is part of "normal" system behaviour.
Thanks for taking the time as it has cleared up a rather irksome issue! Cheers
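The resolution above - that the first select in a REPEATABLE READ transaction establishes a read view, and that a commit or rollback advances it - can be made concrete with a toy model. This is an illustration only, not how InnoDB is actually implemented:

```python
# Toy model of REPEATABLE READ snapshot behaviour, to make the bug
# report's resolution concrete. Illustration only - not InnoDB internals.

class Database:
    def __init__(self):
        self.committed = {"me": {"version": 1}}

class Connection:
    def __init__(self, db):
        self.db = db
        self.snapshot = None          # no read view until the first select

    def select(self, key):
        # REPEATABLE READ: the first select establishes a read view that
        # later selects reuse, even if other sessions commit changes.
        if self.snapshot is None:
            self.snapshot = {k: dict(v) for k, v in self.db.committed.items()}
        return self.snapshot[key]

    def commit(self):
        self.snapshot = None          # commit advances the read view

    rollback = commit                 # rollback advances it too

db = Database()
reader = Connection(db)
print(reader.select("me"))            # {'version': 1}

# Another session updates and commits the row.
db.committed["me"]["version"] = 2

print(reader.select("me"))            # still {'version': 1} - old snapshot
reader.commit()                       # Kevin's fix: commit after the select
print(reader.select("me"))            # {'version': 2} - fresh read view
```

This mirrors program 2's behaviour: without the commit, the loop keeps printing the stale row forever.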
http://sourceforge.net/p/mysql-python/bugs/59/
Python wrapper for the LibRaw library

rawpy is an easy-to-use Python wrapper for the LibRaw library. It also contains some extra functionality for finding and repairing hot/dead pixels.

Load a RAW file and save the postprocessed image using default parameters:

import rawpy
import imageio

path = 'image.nef'
with rawpy.imread(path) as raw:
    rgb = raw.postprocess()
imageio.imsave('default.tiff', rgb)

Save as a 16-bit linear image:

with rawpy.imread(path) as raw:
    rgb = raw.postprocess(gamma=(1,1), no_auto_bright=True, output_bps=16)
imageio.imsave('linear.tiff', rgb)

Find bad pixels using multiple RAW files and repair them:

import rawpy.enhance

paths = ['image1.nef', 'image2.nef', 'image3.nef']
bad_pixels = rawpy.enhance.find_bad_pixels(paths)

for path in paths:
    with rawpy.imread(path) as raw:
        rawpy.enhance.repair_bad_pixels(raw, bad_pixels, method='median')
        rgb = raw.postprocess()
    imageio.imsave(path + '.tiff', rgb)

Binaries are provided for Python 2.7, 3.4, 3.5, and 3.6. These can be installed with a simple pip install rawpy (or pip install --use-wheel rawpy if using pip < 1.5).

You need to have the LibRaw library installed to use this wrapper. On Ubuntu, you can get (an outdated) version with:

sudo apt-get install libraw-dev

Or install the latest release version from the source repository:

git clone libraw
git clone libraw-cmake
cd libraw
git checkout 0.18.2
cp -R ../libraw-cmake/* .
cmake .
sudo make install

After that, it's the usual pip install rawpy.

If you get the error "ImportError: libraw.so: cannot open shared object file: No such file or directory" when trying to use rawpy, then do the following:

echo "/usr/local/lib" | sudo tee /etc/ld.so.conf.d/99local.conf
sudo ldconfig

The LibRaw library is installed in /usr/local/lib (if installed manually), and apparently this folder is not searched for libraries by default in some Linux distributions.

rawpy depends on NumPy.
The minimum supported NumPy version depends on your Python version:
https://pypi.org/project/rawpy/
Recommendation systems aim at improving the quality of search results by suggesting items that are more relevant to the user's search history. They help to predict what a user might prefer, and in this tutorial we will build an application that suggests movies to the user. Let's get started!

Also Read: Theoretical Introduction to Recommendation Systems in Python

In this tutorial, we will be using the TMDB 5000 Movie Dataset, which can be found here. We will load the two datasets mentioned on the website using the following code, joining them on the 'id' column:

import pandas as pd
import numpy as np

df1 = pd.read_csv('tmdb_5000_credits.csv')
df2 = pd.read_csv('tmdb_5000_movies.csv')

df1.columns = ['id', 'tittle', 'cast', 'crew']
df2 = df2.merge(df1, on='id')

Next, we will decide on a metric to judge which movie is better than the others. One way would be to use the average rating of each movie directly, but that would not be fair because of the inconsistency in the number of voters per movie. Hence, we will use IMDB's weighted rating (WR), which is mathematically described as:

WR = (v / (v + m)) * R + (m / (v + m)) * C

In the above formula, we have:

v - number of votes for the movie
m - minimum votes required to be listed
R - average rating of the movie
C - mean vote across all movies

Let's compute the values for the qualified movies using the code below, first computing the mean vote C and then the minimum vote count m by only considering movies with more voters than 90% of the other movies:

C = df2['vote_average'].mean()
print("Mean Average Voting : ", C)

m = df2['vote_count'].quantile(0.9)
print("\nTaking the movies which have more voters than 90% of the other movies")
print("Minimum votes required : ", m)

Now, let us filter out the most popular and recommended movies using the code snippet below.
q_movies = df2.copy().loc[df2['vote_count'] >= m]

But we still haven't computed the metric for each qualified movie. We will define a function, weighted_rating, and a new feature, score, which we compute for all the qualified movies using the code below:

def weighted_rating(x, m=m, C=C):
    v = x['vote_count']
    R = x['vote_average']
    return (v/(v+m) * R) + (m/(m+v) * C)

q_movies['score'] = q_movies.apply(weighted_rating, axis=1)

Finally, let's sort the whole dataframe on the basis of the score column to see the most recommended movies:

q_movies = q_movies.sort_values('score', ascending=False)

Let's try to visualize the sorted dataset using the code below and find the most popular movies in the whole dataset:

pop = df2.sort_values('popularity', ascending=False)

import matplotlib.pyplot as plt

plt.figure(figsize=(12,4), facecolor="w")
plt.barh(pop['title'].head(10), pop['popularity'].head(10), align='center', color='pink')
plt.gca().invert_yaxis()
plt.xlabel("Popularity Metric")
plt.title("Name of the most Popular Movies")
plt.show()

Look how nice the plot looks: we can see that out of the top 10 movies, Minions is the most popular and recommended movie.

Congratulations! You built a movie recommendation system using the Python programming language!

Also Read:
- Python: Moviepy Module
- Python Tkinter: Random Movie Suggestions
- Fetch Data From a Webpage Using Selenium [Complete Guide]
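The weighted-rating formula can be sanity-checked without downloading the dataset. The vote counts and ratings below are invented for illustration; they are not TMDB numbers:

```python
# Quick sanity check of IMDB's weighted rating, independent of the dataset.
# All numbers here are made up for illustration.

def weighted_rating(v, R, m, C):
    """v: vote count, R: average rating, m: minimum votes, C: global mean vote."""
    return (v / (v + m)) * R + (m / (v + m)) * C

m = 1000   # minimum votes required to be listed
C = 6.0    # mean vote across all movies

# A blockbuster with many votes keeps almost all of its own rating...
blockbuster = weighted_rating(v=9000, R=8.0, m=m, C=C)

# ...while an obscure movie with few votes is pulled toward the global mean.
obscure = weighted_rating(v=50, R=9.5, m=m, C=C)

print(round(blockbuster, 3))  # 7.8
print(round(obscure, 3))      # 6.167

# Despite its higher raw rating, the obscure movie ranks lower.
assert blockbuster > obscure
```

This is exactly why the weighted score is fairer than sorting on vote_average alone: a 9.5 average from 50 voters should not outrank an 8.0 average from 9000 voters.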
https://www.askpython.com/python/examples/movie-recommendation-system
WHAT'S IN THIS CHAPTER?

WROX.COM CODE DOWNLOADS FOR THIS CHAPTER

The wrox.com code downloads for this chapter are found on the Download Code tab. The code for this chapter is divided into the following major examples:

This chapter explains how to get real-time information about your running application in order to identify any issues that it might have during production, or to monitor resource usage to ensure that higher user loads can be accommodated. This is where the namespace System.Diagnostics comes into play. This namespace offers classes for tracing, event logging, performance counters, and code contracts.

One way to deal with errors in your application, of course, is by throwing exceptions. However, an application might not fail with an exception, but still not behave as expected. The application might be running well on most systems but have a problem on a few. On the live system, you can change the log behavior by changing a configuration value and get detailed live information about what's going on in the application. This can be done with tracing.

If there are problems with applications, the system administrator needs to be informed. The Event Viewer is a commonly used tool that not only the system administrator ...
https://www.safaribooksonline.com/library/view/professional-c-2012/9781118332122/xhtml/Chapter20.html
I have a module that invokes a service:

let getData = () => fetch("")
    .then(response => response.json())
    .then(json => (getData = json));

export { getData };

I try to log the result to the console (and put it on an HTML page) like this:

import { getData } from "./api";

const app = document.querySelector("#target");

let data = getData()
    .then(res => res.map(r => r.title).join("\n"))
    .then(res => (data = res));

console.log(data);
app.innerHTML = data;

However, I get an unresolved promise like this:

[object Promise]

I've tried a few variations which also don't work:

// none of these work
// .then(async res => data = await res);
// .then(res => (data = Promise.resolve(res)));

Any suggestions as to what I am doing wrong?

Answer: I think you have to return response.json() and then execute another then(). It is also better to include a catch() to see problems. Read "Making fetch requests".
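The core problem in the question is that console.log(data) runs before the promise has resolved, and reassigning getData/data inside .then() doesn't change that. A sketch of the fix follows; the fetch call is simulated with a resolved Promise (fakeFetch is my own stand-in) so the snippet is self-contained and runnable anywhere:

```javascript
// Sketch of the fix: keep getData as a function that *returns* a promise,
// and consume the result inside .then() (or with async/await).
// fakeFetch stands in for the real fetch() call so this runs without a network.
const fakeFetch = () =>
  Promise.resolve({ json: () => Promise.resolve([{ title: "A" }, { title: "B" }]) });

// api module: return the parsed-JSON promise instead of reassigning getData.
const getData = () => fakeFetch().then(response => response.json());

// consumer: the result is only available inside .then(), never synchronously.
getData()
  .then(items => items.map(item => item.title).join("\n"))
  .then(text => {
    console.log(text);        // a real string now, not [object Promise]
    // app.innerHTML = text;  // update the DOM here, inside the callback
  })
  .catch(err => console.error(err));

// Equivalent with async/await:
async function show() {
  const items = await getData();
  return items.map(item => item.title).join("\n");
}
```

The key point: a promise never becomes its value; assigning it to a variable and logging that variable will always print the promise object itself.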
https://techqa.club/v/q/how-to-retrieve-results-after-api-call-which-is-returning-a-promise-c3RhY2tvdmVyZmxvd3w1NjE5ODA3Mg==
Get USB Drive Serial Number on Windows in C++

Getting the serial number of a USB device in Windows is a lot harder than it should be. (But it's a lot easier than getting the USB serial number on OS X!) It is relatively simple to get USB information if you have the device handle of the USB device itself. And you can get information about a mounted volume pretty easily. But to match up a mounted volume with a USB device is tricky and annoying.

There are many code examples on the net about how to do it in C#, Visual Basic, and similar broken garbage languages. If you are forced to program in C# or Visual Basic - get help. There are alternatives to suicide. There are some examples of how to do it through WMI, which is a Windows operating system service - which is slow, buggy and unreliable. If you have to do it in WMI, you are beyond help.

First here is the code:

#include <WinIOCtl.h>
#include <api/usbioctl.h>
#include <Setupapi.h>

DEFINE_GUID( GUID_DEVINTERFACE_USB_DISK, 0x53f56307L, 0xb6bf, 0x11d0,
             0x94, 0xf2, 0x00, 0xa0, 0xc9, 0x1e, 0xfb, 0x8b );

void getDeviceInfo( int vol )
{
    UsbDeviceInfo info;

    // get the device handle
    char devicePath[7] = "\\\\.\\@:";
    devicePath[4] = (char)( vol + 'A' );
    HANDLE deviceHandle = CreateFile( devicePath, 0,
        FILE_SHARE_READ | FILE_SHARE_WRITE, NULL, OPEN_EXISTING, 0, NULL );
    if ( deviceHandle == INVALID_HANDLE_VALUE )
        return;

    // to get the device number
    DWORD volumeDeviceNumber = getDeviceNumber( deviceHandle );
    CloseHandle( deviceHandle );

    // Get device interface info set handle
    // for all devices attached to the system
    HDEVINFO hDevInfo = SetupDiGetClassDevs( &GUID_DEVINTERFACE_USB_DISK,
        NULL, NULL, DIGCF_PRESENT | DIGCF_DEVICEINTERFACE );
    if ( hDevInfo == INVALID_HANDLE_VALUE )
        return;

    // Get a context structure for the device interface
    // of a device information set.
    BYTE Buf[1024];
    PSP_DEVICE_INTERFACE_DETAIL_DATA pspdidd = (PSP_DEVICE_INTERFACE_DETAIL_DATA)Buf;
    SP_DEVICE_INTERFACE_DATA spdid;
    SP_DEVINFO_DATA spdd;
    spdid.cbSize = sizeof( spdid );

    DWORD dwIndex = 0;
    while ( true )
    {
        if ( ! SetupDiEnumDeviceInterfaces( hDevInfo, NULL,
                  &GUID_DEVINTERFACE_USB_DISK, dwIndex, &spdid ))
            break;

        // ask for the required buffer size, then fetch the detail data
        DWORD dwSize = 0;
        SetupDiGetDeviceInterfaceDetail( hDevInfo, &spdid, NULL, 0, &dwSize, NULL );
        if (( dwSize != 0 ) && ( dwSize <= sizeof( Buf )))
        {
            pspdidd->cbSize = sizeof( *pspdidd );
            ZeroMemory( &spdd, sizeof( spdd ));
            spdd.cbSize = sizeof( spdd );
            if ( SetupDiGetDeviceInterfaceDetail( hDevInfo, &spdid,
                     pspdidd, dwSize, &dwSize, &spdd ))
            {
                // open the disk device and compare its device number
                // with the one we got for the mounted volume
                HANDLE hDrive = CreateFile( pspdidd->DevicePath, 0,
                    FILE_SHARE_READ | FILE_SHARE_WRITE, NULL, OPEN_EXISTING, 0, NULL );
                if ( hDrive != INVALID_HANDLE_VALUE )
                {
                    DWORD usbDeviceNumber = getDeviceNumber( hDrive );
                    if ( usbDeviceNumber == volumeDeviceNumber )
                    {
                        printf( "%s", pspdidd->DevicePath );
                    }
                    CloseHandle( hDrive );
                }
            }
        }
        dwIndex++;
    }
    SetupDiDestroyDeviceInfoList( hDevInfo );
    return;
}

You pass in the volume number. This is just the drive letter, represented as an integer. Drive "A:" is zero. So the first thing we do is create a drive path. For example, if the mounted volume you want to get a serial number for is "F:", you'd pass in "5" and construct a device path \\.\F:. Next you get a device handle for that volume using CreateFile(). Originally this function was meant to create regular file system files, but today it can be used to open handles to devices of all kinds. Each device type is represented by a different device path.

Next, you get the device number. When a volume is mounted, it will be associated with a device, and this function returns that device's number. Why the OS doesn't just give you a device path here is ridiculous. The device numbers will be low - typically a number under 10. Don't be surprised. I was. You get the device number by calling DeviceIoControl() with the handle to your device:

DWORD getDeviceNumber( HANDLE deviceHandle )
{
    STORAGE_DEVICE_NUMBER sdn;
    sdn.DeviceNumber = -1;
    DWORD dwBytesReturned = 0;
    if ( !DeviceIoControl( deviceHandle, IOCTL_STORAGE_GET_DEVICE_NUMBER,
              NULL, 0, &sdn, sizeof( sdn ), &dwBytesReturned, NULL ) )
    {
        // handle error - like a bad handle
        return (DWORD)-1;
    }
    return sdn.DeviceNumber;
}

There is an old Windows API, in setupapi.h, that was designed to help write installers.
It has now been obsoleted by newer installation APIs, but you can still use it to enumerate devices. Basically you pass in the GUID of the type of device interface you want. In this case we just want to enumerate USB flash disks. That GUID is defined at the top of the file. You set up for enumeration with SetupDiGetClassDevs(), and iterate over the devices with SetupDiEnumDeviceInterfaces(). When you are done you close the iterator with SetupDiDestroyDeviceInfoList().

Then for each device you get the device name and number using SetupDiGetDeviceInterfaceDetail(), and match up the device number of each device with the one you got for the volume. When you find a match you have the device path for your actual USB flash drive. At that point you could start querying the device itself using functions like DeviceIoControl(), but in this case the information we want is coded right into the device path.

Here is a typical device path for a USB flash disk:

\\?\usbstor#disk&ven_cbm&prod_flash_disk&rev_5.00#31120000dc0ce201&0#{53f56307-b6bf-11d0-94f2-00a0c91efb8b}

The device path for a flash disk must start with "usbstor". This first part of the path is the name of the driver, which on Windows is called "usbstor". The vendor id, product id, product revision and serial number are embedded in the path (the ven_, prod_, rev_ and serial segments). The following regular expressions will extract this information from the device path:

ven_([^&#]+)      // vendor id
prod_([^&#]+)     // product id
rev_([^&#]+)      // revision id
&[^#]*#([^&#]+)   // serial number

Next here is a method to recognize if a volume is removable media (e.g.
like a usb or firewire disk):

bool isRemovableMedia( s32 vol )
{
   char rootPath[5] = "@:\\";
   rootPath[0] = (char)( vol + 'A' );
   char szDosDeviceName[MAX_PATH];
   char dosDevicePath[3] = "@:";

   // get the drive type
   UINT DriveType = GetDriveType( rootPath );
   if ( DriveType != DRIVE_REMOVABLE )
      return false;

   dosDevicePath[0] = (char)( vol + 'A' );
   QueryDosDevice( dosDevicePath, szDosDeviceName, MAX_PATH );
   if ( strstr( szDosDeviceName, "\\Floppy" ) != NULL )
   {
      // it's a floppy
      return false;
   }
   return true;
}

20 Responses

Good article, thanks. Can you provide C++ code using regular expressions to extract the serial number?

I use my own regular expression class, and to work it requires a whole bunch of my supporting libraries, so it wouldn't make sense to post it here. I know that the STL now supports regular expressions. Try #include <regex>, and use the std::regex class. There is a tutorial on it here:

What if the USB device is not a drive? What if it's a mouse? Or is everything on USB handled as a "drive"? Without going into all the gory details, I have put some additional circuitry into an old USB mouse to monitor some external events by whether the computer detects the old mouse or not. Now, all I need is the software component, under Windows, that would monitor that old mouse. Of course, the thing is, I need to make sure the program monitors the CORRECT USB device. Another problem is that I am a Unix/Linux person (and could do this with a simple script under Linux), but this aspect of Windows is an alien environment for me. I have installed the free Visual C++ Express, and what you mentioned in your article sounds good, but I don't know if I'll be able to apply it to a mouse, instead of a drive. What do you think?

Excellent post! How to go about getting the serial number of a fixed disk drive? Thanks.

Great post! One question - where is getDeviceNumber defined? I can't get this to compile because of that function.

That is one of my own functions.
I'll update the article with the implementation for that. But it's pretty simple - you get it from a call to DeviceIoControl().

select DeviceID from Win32_USBHub where Name='USB Mass Storage Device'

I found on my USB drives, this string includes a unique id for the drive.

How do I include the three header files?

#include <WinIOCtl.h>
#include <api/usbioctl.h>
#include <Setupapi.h>

I didn't find any source for these three files.. please help me …

Do you mean the three headers from the first code listing? Like "WinIOCtl.h"? These are standard Windows header files. You should have them if you install Microsoft's C++ compiler (Visual Studio). It comes with the Windows platform SDK. These should be included with that.

#include <api/usbioctl.h> comes from the Windows Driver Kit, not from Visual Studio. Reshma should download the WDK from the link that you gave. The other two are in the SDK.

Although your GUID is correct, you don't have to do it.

DEFINE_GUID( GUID_DEVINTERFACE_USB_DISK, 0x53f56307L, 0xb6bf, 0x11d0, 0x94, 0xf2, 0x00, 0xa0, 0xc9, 0x1e, 0xfb, 0x8b );

Instead, you can get it from the WDK. To use the WDK, this looks redundant but you have to do it this way.

#include "ap/\Ntddstor.h"
#include "initguid.h"
#include "api/Ntddstor.h"

UsbDeviceInfo info; isn't needed. In order to compile, just delete it ^_^

I will try to correct an editing error in my previous reply. If the owner edits my previous reply and deletes this one, that will be good.

#include "api/Ntddstor.h"
#include "initguid.h"
#include "api/Ntddstor.h"

(The redundancy is that Ntddstor.h has to be included twice. The pathname shouldn't be messed up like I did.)

Sorry, I have to edit this again.
After this:

#include "api/Ntddstor.h"
#include "initguid.h"
#include "api/Ntddstor.h"

Use GUID_DEVINTERFACE_DISK instead of GUID_DEVINTERFACE_USB_DISK

In the device info in your example, Windows has set the device's serial number to 31120000dc0ce201. I bet the device's actual serial number might be 31120000DC0CE201. Windows's case insensitivity is OK for Windows, but it might not be OK for some application that needs to know the real serial number. Actually Windows did more than just be case insensitive. In filenames, Windows is case insensitive in matching names, but when writing names in the first place it preserves the case that the user used when creating the file. If you look in Windows Explorer or a dir command, You CAN Still See Names like tHis. But Windows gives us the device's serial number as if it were all lower case. Does anyone know how to find the real serial number?

Can this code work in a Windows 8.1 Metro Style app?

Could not get this to work, do you have the source code? I get a lnk2019 error - maybe I failed to add some libs?

I was trying the same on Python. When I saw your post, I tried to compile it (I am new to all programming, I installed mingw) with this command:

g++ findserial.cpp -o findserial.out

I get the following error:

findserial.cpp:2:26: fatal error: api/usbioctl.h: No such file or directory
compilation terminated.

Can anyone please help?

In case you need the serial number to be case sensitive, you can utilize the following code to extract it from the registry. The parameter usbDeviceId is expected to be in the format "USB\\VID_XXXX&PID_XXXX" and can be obtained using CM_Get_Device_ID.
Although my example uses Qt it might help:

QString ExtractSerialNumberFromRegistry(const QString& usbDeviceId)
{
    QString serial;
    QStringList parts = usbDeviceId.split("\\");
    if (parts.size() == 3) {
        QString parentNode("SYSTEM\\CurrentControlSet\\Enum\\%1\\%2");
        HKEY hKey;
        long result = RegOpenKeyEx(
            HKEY_LOCAL_MACHINE,
            reinterpret_cast<const wchar_t*>(
                parentNode.arg(parts.at(0)).arg(parts.at(1)).utf16()),
            0, KEY_READ, &hKey);
        if (result == ERROR_SUCCESS) {
            wchar_t* subkey = new wchar_t[255];
            DWORD subkey_length = 255;
            DWORD counter = 0;
            while (result != ERROR_NO_MORE_ITEMS) {
                subkey_length = 255; // reset; RegEnumKeyEx overwrites it
                result = RegEnumKeyEx(hKey, counter, subkey, &subkey_length,
                                      0, NULL, NULL, NULL);
                if (result == ERROR_SUCCESS) {
                    // find best match using the uppercase serial nested
                    // in the usb device id
                    const QString currSerial(
                        reinterpret_cast<const QChar*>(subkey));
                    if (parts.last().toUpper() == currSerial.toUpper())
                        serial = currSerial;
                }
                ++counter;
            }
            delete[] subkey;
            subkey = 0;
            RegCloseKey(hKey);
        }
    }
    return serial;
}

By the way, removing the "api/usbioctl.h" include and the line with "UsbDeviceInfo" seems to work fine also. That looks like a spurious dependency.
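Several commenters asked for a standard-library version of the field extraction. Here is a sketch using C++11 std::regex with the exact expressions listed in the article; the UsbIds struct and parseDevicePath helper are my own names, not part of the original code:

```cpp
#include <regex>
#include <string>

// Pulls the vendor, product, revision and serial segments out of a
// USB storage device path, using the regular expressions from the
// article above.
struct UsbIds
{
    std::string vendor, product, revision, serial;
};

UsbIds parseDevicePath( const std::string& path )
{
    UsbIds ids;
    std::smatch m;
    if ( std::regex_search( path, m, std::regex( "ven_([^&#]+)" )))
        ids.vendor = m[1].str();
    if ( std::regex_search( path, m, std::regex( "prod_([^&#]+)" )))
        ids.product = m[1].str();
    if ( std::regex_search( path, m, std::regex( "rev_([^&#]+)" )))
        ids.revision = m[1].str();
    // The greedy [^#]* skips past the ven/prod/rev segments, so the
    // capture lands on the serial number between '#' and '&'.
    if ( std::regex_search( path, m, std::regex( "&[^#]*#([^&#]+)" )))
        ids.serial = m[1].str();
    return ids;
}
```

Run against the sample path shown earlier, this yields vendor "cbm", product "flash_disk", revision "5.00" and serial "31120000dc0ce201".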
writev()

Write bytes to a file

Synopsis:

#include <sys/uio.h>

ssize_t writev( int fildes,
                const iov_t* iov,
                int iovcnt );

Arguments:

- fildes - The file descriptor of the file you want to write to.
- iov - An array of iov_t objects that contain the data that you want to write.
- iovcnt - The number of elements in the array.

Library:

libc

Use the -l c option to qcc to link against this library. This library is usually included automatically.

Description:

The writev() function performs the same action as write(), but gathers the output data from the iovcnt buffers specified by the members of the iov array: iov[0], iov[1], …, iov[iovcnt-1].

For writev(), the iov_t structure contains the following members:

- iov_base - Base address of a memory area from which data should be written.
- iov_len - The length of the memory area.

The writev() function always writes a complete area before proceeding to the next. The maximum number of entries in the iov array is UIO_MAXIOV.

If writev() is interrupted by a signal before it has written any data, it returns a value of -1, and errno is set to EINTR. However, if writev() is interrupted by a signal after it has successfully written some data, it will return the number of bytes written. For more details, see the write() function.

Errors:

- EAGAIN - The O_NONBLOCK flag is set for the file descriptor, and the process would be delayed in the write operation.
- EBADF - The file descriptor, fildes, isn't a valid file descriptor open for writing.
- EFBIG - The file is a regular file, the number of bytes to write is greater than 0, and the starting position is greater than or equal to the offset maximum associated with the file.
- EINTR - The write operation was interrupted by a signal, and no data was transferred.
- ENOSPC - There is no free space remaining on the device containing the file.
- ENOSYS - The write() function isn't implemented for the filesystem specified by fildes.
- EPIPE - An attempt was made to write to a pipe (or FIFO) that isn't open for reading by any process.
A SIGPIPE signal is also sent to the process.

Classification:

Last modified: 2013-12-23
1. svn co /somewhere/in/the/PYTHONPATH/localdates
2. Add 'localdates' to your settings' INSTALLED_APPS.
3. In the templates where you want to use the localdates filter, add {% load local_datefilter %}.
4. You can now use the ldate filter, like this:
   {{ date_object|ldate:"d {Fp} Y" }}
   {{ date_object|ldate:"{FULL_DATE}" }}
   {{ date_object|ldate:"d F Y" }}
   You can browse the source code to find the available format strings - once they're finalized I'll write up documentation for them.
5. Check out the test-localdates project for sample usage. (svn co test-localdates)
6. Your locale probably won't be available in the application. If you want to add it, do a make-messages.py -l lang from the localdates directory, so that the django.po file will be generated. You can then add your locale's datestrings, do a compile-messages.py and reload. You may need to restart the dev server. Don't forget to attach the file in the issue tracker so I can update the source.
7. Check out the sample django/contrib/localflavor-sample package for ways of extending the current functionality.

NOTE: It's best to contact me directly, as this might change.

Hi! I use the following middleware:

import locale

from django.utils.translation import get_language

class DjangoLocaleSwitchMiddleware(object):
    def process_request(self, request):
        lang = get_language()
        loc = 'en_US.UTF-8'
        if lang == 'ru':
            loc = 'ru_RU.UTF-8'
        elif lang == 'kk':
            loc = 'kk_KZ.UTF-8'
        locale.setlocale(locale.LC_ALL, loc)

Of course it's not as flexible as yours, but it's simpler
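The idea behind that middleware can be shown without Django at all. The sketch below picks a locale per language code and formats a date with it; the locale names are assumptions and availability varies per system, so it falls back to the always-present C locale:

```python
# Framework-free sketch of locale-driven date formatting, the same
# idea as the middleware above (locale names are assumptions).
import datetime
import locale

LOCALE_BY_LANG = {
    'en': 'C',            # the C locale always exists
    'ru': 'ru_RU.UTF-8',  # may not be installed on every system
}

def format_local_date(d, lang):
    # Only LC_TIME matters for strftime month/day names.
    try:
        locale.setlocale(locale.LC_TIME, LOCALE_BY_LANG.get(lang, 'C'))
    except locale.Error:
        locale.setlocale(locale.LC_TIME, 'C')  # graceful fallback
    return d.strftime('%d %B %Y')
```

With the C locale, format_local_date(datetime.date(2008, 3, 5), 'en') gives '05 March 2008'; with a Russian locale installed, the month name comes out localized.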
Mapping Examples

Clearly if you have a fourth article in the trilogy you need to go for the money grab and write the fifth, so here it is!

Note: Many years ago Dan Shusman told me that mapping globals is an art form. There is no right or wrong way of doing it. The way you interpret the data leads you to the type of mapping you do. As always there is more than one way to get to a final answer. As you look through my samples you will see there are some examples that map the same type of data in different ways.

At the end of this article there is a zip file with the collection of mapping examples I have written for customers over the years. Below, I will point out a couple that highlight some things I referenced in my 4 previous mapping articles. As this is a pure money grab there will not be the same level of detail as the last 4. If something doesn't make sense please let me know and I will go into more detail with you.

Row ID Spec:

Example Class: Mapping.RowIdSpec.xml

I promised this several times in earlier articles. You need to define this only when your subscript expressions are not simple fields. In my example I am taking the value that is stored in the subscript and multiplying it by 100, so when I look at the global I need to have 1.01 but I want the logical value to be 101. So Subscript 2 has an expression of {ClassNumber}/100 and the RowIdSpec is {L2}*100. I always do this backwards the first time. The Subscript Expression takes the logical value and uses that in the $ORDER() of the global, so divide by 100. The RowId Spec takes the value from the global and builds the logical value, so multiply by 100. You could also do this by writing Next Code and an Invalid Condition that would handle the multiplying and dividing.

Subscript Expression Type Other / Next Code / Invalid Condition / Data Access:

Example Class: Mapping.TwoNamespacesOneGlobal.xml

Wow, what a gold mine this class is! This one class uses half the stuff I want to talk about.
This class loops over the same global in 2 different namespaces. We can't use subscript level mapping because both namespaces use the same subscript values. Instead we are going to use extended global reference syntax, ^|namespace|GlobalName, first looping over the global in the USER namespace and then looping over the same global in the SAMPLES namespace. The IDKey for this table will be made up of 2 parts: Namespace and Sub1.

Subscript Expression Other:

In Subscript Level 1 we are not doing a $ORDER() or $PIECE(); instead we are setting {L1} to 1 of 2 hard-coded values: USER or SAMPLES. No global is used at all here, so the Type is 'Other'.

Next Code:

If you are using a Subscript Expression of Type 'Other' you will need to provide Next Code (OK, so you could write a complex expression to do this, but either way you are providing the code). The first time the Next Code gets called, {L1} will be set to the 'Start Value'; the default is the empty string. Your Next Code should set {L1} equal to the empty string when you are done looping. For this example {L1} will be set to "USER", "SAMPLES" and "" the three times it is called. The Next Code in Subscript Level 2 will get reset to "" for the 2 good values returned by Subscript Level 1. The Next Code here is just a simple $ORDER() over the global with the extended reference.

Invalid Condition:

Both Subscript Levels 1 and 2 have Next Code, which means they both need an Invalid Condition. Given a value for this subscript level, the condition should return 1 if it is a bad value. For {L1}, if the value is not "USER" or "SAMPLES" it will return 1. For example:

SELECT * FROM Mapping.TwoNamespacesOneGlobal WHERE NS = '%SYS'

will return no rows because of the Invalid Condition for {L1}.

Data Access:

Say you have an IdKey based on 2 properties in a global like ^glo({Sub1},{Sub2}). If you provide a value for {Sub1} before we start looping on {Sub2}, we will check to see if there is data at the previous level by doing a $DATA(^glo({Sub1})).
In this example we do not have a global in Subscript Level 1, so we need to be told what to test. This is the Data Access Expression: ^|{L1}|Facility. The next example also needs a Data Access expression, and it might be easier to follow. If this one is confusing, look at the next example.

Data Access / Full Row Reference:

Example Class: Mapping.TwoNamespacesOneGlobal2.xml

This class is mapping the same data as the previous one. The difference is in Subscript Level 1. Instead of hard coding the values for the namespace, this class has a second global containing the namespaces we want to loop over: ^NS("Claims",{NS}). Not only does this simplify the mapping, it makes things more flexible, as you can add a new namespace just by setting a global instead of modifying the class mapping.

Data Access:

The 'Global' in the mapping is defined as ^NS since that is the global we are looping over in Subscript Levels 1 and 2. For Subscript Level 3 we want to switch to ^|{NS}|Facility({Sub1}). Before we can use a different global in the Next Code we need to define it in the 'Data Access'. {L1} was just a constraint in the ^NS global and is not used in the ^Facility global, so we just leave it out. {L2} is now used as the namespace reference in the extended global syntax: ^|{L2}|Facility. In this class the 'Next Code' is not defined (so it was not needed in the last example either). The class compiler takes the 'Data Access' and uses that to generate the $ORDER() needed at this level.

Full Row Reference:

Similar to the 'Invalid Condition', this is used to reference the row if all parts of the IdKey are used: ^|{L2}|Facility({L3}). I think we could get away without defining the 'Full Row Reference' for this class, but it does not hurt. If you have 'Next Code' that changes the logical value of a subscript before looping over the global then you would need to define the 'Full Row Reference'. For example, as I said above, the RowIdSpec class could have been done using 'Next Code'.
If you went that route the 'Full Row Reference' would have been ^Mapping({L1},{L2}/100).

Subscript Access Type Global:

Example Class: Mapping.TwoGlobals.xml

This class is displaying data from 2 different globals: ^Member and ^Provider. In the last example we started in one global and then switched to a different global to get at a row. In this case we have two globals that both contain rows. We will loop over all the rows in the first global and then loop over all the rows in the other global.

Subscript Expression Type Global:

When you define the first subscript level as 'Global', the map needs to have the Global property set to "*". I bet we could have used this style of mapping to do Mapping.TwoNamespacesOneGlobal. Remember, you are an artist when creating these mappings! In {L1} the Access Type is 'Global' and the 'Next Code' will return "Member", "Provider" and "". By defining the Access Type as 'Global', the compiler knows to use {L1} as a global name, so {L2} and {L3} are just simple subscripts. {L4} is also a subscript, but it has some extra code to deal with the fact that the two globals store the properties in different locations.

This class shows one more interesting thing in the Data section of the mapping. If we were only mapping the ^Member global I would have defined only three subscript levels, and the Data section would have had Node values of 4, 5, 6, 7, 8 and 9. To deal with the Zip Code being in node 9 or in node 16, we add the base node to the IdKey (4 or 10) and use +6 in the Node to get the offset to the Zip Code.

Access Variables:

Example Class: Mapping.SpecialAccessVariable.xml

Special Access Variables can be defined at any subscript level. There can be more than 1 per level. In this example we have one defined called {3D1}. The 3 means it is defined at subscript level 3 and the 1 means it is the first variable defined at this level. The compiler will generate a unique variable name for this and handle defining it and removing it.
The variable gets defined after you have executed the 'Next Code'. In this example I want to use the variable in the 'Next Code' of level 4, so I have it defined in level 3. In this example I am using {3D1} to "remember" where we were in the looping, so we can change an invalid date value to "*" but still come back to the same place in the looping.

Bitmaps:

Example Class: Mapping.BitMapExample.xml

While Bitmap indices are kind of new, that doesn't mean you would not want to add them to your application that is using Cache SQL Storage. You can add a Bitmap index to any class that has a simple positive %Integer IdKey. You define the 'Type' as "bitmap" and you do not include the IdKey as a subscript. The one thing you need to remember when you define a bitmap is that you also need a Bitmap Extent. Again nothing special: the Extent is an index of IdKey values, so no subscripts are needed and the 'Type' is "bitmapextent". The class has methods you can call to maintain the bitmap indices as you modify the data via Sets and Kills. If you can use SQL or Objects to make changes then the indices will be maintained automatically.

So some of those examples are a little involved! Have a look at the classes and see if you can make heads or tails of this. If not, let me know: Brendan@interSystems.com, always happy to help a struggling artist.

If you want to jump back and look at the other mapping articles, here are the links:

The Art of Mapping Globals to Classes (1 of 3)
The Art of Mapping Globals to Classes (2 of 3)
The Art of Mapping Globals to Classes (3 of 3)
The Art of Mapping Globals to Classes (4 of 3)

Hi Brendan, I'm trying to map a global that exists in different namespaces. So the name of the namespace should be one of the subscripts, I suppose. Is there an example somewhere for this problem? Regards, Jo

Hi Jo, the second example talks about this very case. It is limited to 2 namespaces, but you can generalize the code to look at as many namespaces as you want.
The class is: Mapping.TwoNamespacesOneGlobal.xml. There is a link at the end of the article to let you download a zip file with all my examples. If that does not work please let me know and I can send it to you directly. Brendan

Hi, I am trying to map a global that has a subscripted variable as the first subscript. The system is based on Fileman, so using the Fileman2Class mapper does not handle this, and any query against this table throws an undefined error. i.e.

^BARAC(1234,1,0)=
^BARAC(1234,1,3)=
^BARAC(1234,2,0)=
^BARAC(1234,2,3)=

In the FM2Class generated class, Subscript 1 = DUZ(2), Subscript 2 = {IEN}. What do I need to do in order to get DUZ(2) mapped appropriately? Is it something I can do inside the generated class, or would I need to create the class manually? Thanks, Charles

As a first suggestion, I would create a setter and getter method for this property that handles DUZ(2) in both directions. If you add any property by the wizard it shows you the exact naming of the methods. This property then goes in the first position of the IDKey. Be aware of ProcedureBlock to always have access to your DUZ(2), or name it %DUZ(2) as a hack.

Since Setter/Getter is an object concept, how would a SQL statement utilize it? Would the property in the class need to have SqlComputeCode that calls the object's setter to drive the assignment of the variable from a passed parameter? At what point in the query execution would the assignment happen?

For pure object access, you have a getter and a setter method; no magic. If you also want to use it for SQL:
- you need SqlComputed code. This will also replace your (object) getter.
- to set your local variable also by INSERT or UPDATE, you need to add UPDATE and INSERT trigger code.
Example:

Property DUZ As %String [ Calculated, SqlComputed, SqlColumnNumber = 2, SqlComputeCode = {set {*} = DUZ(2)} ];

// for setting the object property
Method DUZSet(Arg As %String) As %Status [ ServerOnly = 1 ]
{
    set DUZ(2)=Arg
    Quit $$$OK
}

Trigger UpdTrigger [ Event = UPDATE ]
{
    set DUZ(2)={DUZ}
}

Trigger InsTrigger [ Event = INSERT ]
{
    set DUZ(2)={DUZ}
}

To anticipate critics: some people may say it's dirty coding. YES!! But it works.

I don't think this is the right direction. I believe the problem is DUZ(2) is not always defined, so queries get an <UNDEFINED> error. The user should not need to know anything about this variable, and if it is used to limit what data a person can see they REALLY should not have access to this variable. In all other cases of mapping stuff like this, the application would take care of setting the variable up and the user had no knowledge of the variable at all. Using SetServerInitCode() will make sure the variable is defined before the query is run. No changes to the class or your queries are needed. Brendan

Brendan, I share your concerns. The initial request didn't mention Fileman at all. This is just a hint at how a construct like this could be opened to SQL access. I've seen so much old MUMPS code that would never have made its way to objects without it. It's clear that this requires wise use and careful handling.

Charles, DUZ(2) is a Fileman variable. Before you can make use of this table you need to run some Fileman code that will populate this variable. You can set up the system to execute that code for people trying to run queries over xDBC or from the Portal by using the following command:

$SYSTEM.SQL.SetServerInitCode(code)

where code would be a COS command to set up any needed variables. Brendan

Ok. Thank you.
Aurelia with TypeScript in Visual Studio Code on ASP.NET 5

This article covers setting up an Aurelia project in Visual Studio Code running ASP.NET 5 to serve the pages, using TypeScript instead of the default Babel ES6 transpiler. There are a few examples on the web of how to do this, but none felt like a smooth way of doing it in Visual Studio Code. So here is a shot at making it as easy as possible to get started with Aurelia in VS Code using TypeScript!

Versions Disclaimer

Check at the bottom of the article for a lengthy description of versions for all frameworks/tools used during the writing and testing of this post.

Prerequisites

There are a few things that are assumed, in this article, that you have installed on your system already. These are: Visual Studio Code, DNVM/DNX (ASP.NET 5), Yeoman with the ASP.NET generator, and Node. If you want to follow along with the code examples, you should install them now if you haven't already got them installed.

Creating a new ASP.NET 5 project

Open your command prompt of choice and position in a directory where you want to create the project. Create a new project by entering yo and then selecting the Aspnet generator. When given the choice, select Empty Application and name your new app. This will create a fairly empty ASP.NET 5 project for us that we can use. First off, position in the directory that just got created for your app via the command prompt.

Installing jspm

So, what is this "jspm"? Jspm is the module loader that Aurelia uses. Read more about it here. If you haven't got jspm installed on your dev machine, it's time to do that. Installing jspm is done with npm from your command prompt; to install it globally on your machine, enter the following:

npm install jspm -g

Configure jspm

After installing it, it's time to set up jspm. Positioned in the root folder of the project, type:

jspm init

A number of questions will be asked now, answer according to the examples below.
- Package.json file does not exist, create it? [yes]:
- Enter server baseURL (public folder path) [./]: ./wwwroot
- Do you wish to use a transpiler? [yes]: no

The questions with nothing written after them above are using the default, thus just hit return to go to the next one. Regarding the choice NOT to use a transpiler and set it up here: I'll delve deeper into that at the end of the article.

Setup ASP.NET 5

Now it's time to start using Code 🙂 Open VS Code by typing "code ." in the prompt. Open the project.json file and add a reference under the dependencies section to:

"Microsoft.AspNet.StaticFiles": "1.0.0-rc1-final"

Now edit the Startup.cs file and modify the Configure method like this:

public void Configure(IApplicationBuilder app)
{
    app.UseIISPlatformHandler();
    app.UseStaticFiles();
}

This will enable our ASP.NET server to serve static files. Now, add a new html page to the wwwroot folder, name it index.html:

<!DOCTYPE html>
<html>
<head>
    <title>Aurelia + TypeScript + ASP.NET 5</title>
</head>
<body aurelia-app>
    <script src="jspm_packages/system.js"></script>
    <script src="config.js"></script>
    <h1>Hello World!</h1>
</body>
</html>

Now go to the command line, position in the root of the project, first do a package restore (if you haven't already done that from within VS Code), then start up ASP.NET 5 with:

dnu restore
dnx web

Open a browser and enter the site address. You should now be served with a page having only a heading saying: Hello World!

If everything works so far, it's time to get on with the fun stuff 😀

Installing Aurelia

Now we need to install Aurelia and Corejs. In the command window, install the needed Aurelia bits with jspm by entering:

jspm install aurelia-framework
jspm install aurelia-bootstrapper
jspm install core-js

After a short while you'll hopefully see an Install complete message (if not, a tip can be to check your GitHub download rate).

Setting up TypeScript compilation with Gulp

First we need to install a couple of npm packages. Open the package.json file in the root of the project.
We need to add references to Gulp, TypeScript, Gulp-TypeScript and Merge. Edit the package.json like this:

{
  "version": "1.0.0",
  "name": "Aurelia_HelloWorld",
  "private": true,
  "devDependencies": {
    "typescript": "^1.6.2",
    "gulp": "^3.9.0",
    "gulp-typescript": "^2.9.2",
    "merge": "^1.2.0"
  },
  "jspm": {
    "directories": {
      "baseURL": "wwwroot"
    },
    "dependencies": {
      "aurelia-framework": "npm:aurelia-framework@^1.0.0-beta.1.0.2"
    }
  }
}

Now we need a config file telling the TypeScript compiler how to behave, and a file describing the Gulp tasks. First, add a file in the root and name it tsconfig.json; this file details the behavior of the TypeScript compiler:

{
  "compilerOptions": {
    "emitDecoratorMetadata": true,
    "experimentalDecorators": true,
    "noImplicitAny": false,
    "noEmitOnError": true,
    "removeComments": true,
    "noExternalResolve": false,
    "declarationFiles": true,
    "target": "es5",
    "module": "amd",
    "sourceMap": true
  }
}

Now it's time to define our Gulp tasks. Create a new file in the project root, name it gulpfile.js:

var gulp = require('gulp'),
    ts = require('gulp-typescript'),
    merge = require('merge');

var webroot = "wwwroot";

var paths = {
    tsSource: './AureliaLogic/**/*.ts',
    tsOutput: "./" + webroot,
    tsDef: "./typings/"
};

gulp.task('ts-compile', function () {
    var tsResult = gulp.src(paths.tsSource)
        .pipe(ts(ts.createProject('tsconfig.json')));
    return merge([
        tsResult.dts.pipe(gulp.dest(paths.tsDef)),
        tsResult.js.pipe(gulp.dest(paths.tsOutput))
    ]);
});

gulp.task('watch', ['ts-compile'], function () {
    gulp.watch(paths.tsSource, ['ts-compile']);
});

This will create a ts-compile and a watch task for us. Trying to run the compile task from VS Code (F1 – BackSpace – task ts-compile) will result in the following:

Running gulp --tasks-simple didn't list any tasks. Did you run npm install?

So from the command window, in the root of the project, type:

npm install

After the installation of the packages is done, test the compile task again, either by staying in the command prompt and typing gulp ts-compile or by heading back to VS Code and running the task from there. If everything was installed and set up correctly, the TypeScript compile will finish and report no errors.

Converting our app to an Aurelia app

Now we need to edit our index.html file again, and load Aurelia!
🙂 Edit index.html and add a new script block under the other scripts that looks like this:

<script>
    System.import("aurelia-bootstrapper");
</script>

By default, Aurelia will look for a file named app.js to start up from, so let's make sure we have one! Add a folder in the root, name it AureliaLogic. Now add a TypeScript file in the AureliaLogic folder, name it app.ts and edit it like this:

import { AppRouter } from 'aurelia-router';

export class App {
    message: string;
    constructor(router: any) {
        this.message = "Hello from me!";
    }
}

The app.js file needs a template to work with as well. Add a file named app.html in the wwwroot folder, with a template that binds the message property, for example:

<template>
  <h1>${message}</h1>
</template>

Now we need to compile our TypeScript to emit the app.js file that Aurelia will look for. Do that using the Gulp ts-compile task we set up earlier. However, the result will be a compilation error:

AureliaLogic\app.ts(1,27): error TS2307: Cannot find module 'aurelia-router'.

One more thing to fix 🙂

Setting up type definitions for TypeScript

The reason that the TypeScript compiler fails is that it can't find the aurelia-router type. So let's help the compiler along the way! Aurelia has type definitions delivered with the code, we just need to be able to get to them in an easy way. There are many ways to solve this, but my proposal is the following:

- Install TSD – TSD is a package manager for TypeScript type files
- Copy Aurelia's type definition files by hand, or create a Gulp task to copy definition files from the Aurelia module folders
- Enable TSD to find all Aurelia definitions copied to the typings folder and bundle them

The above setup will enable us to reference only one single definition file from our TypeScript code files, and still have all references available for all added Aurelia modules.

Install TSD

TSD is easily installed with npm, so just head over to the command prompt again, position in the root of the project and type:

npm install tsd

After installation we need to configure and set up tsd by typing:

tsd init

This will create a folder in the root of the project called typings; here we will put all our typing files (*.d.ts).

Create a Gulp task to copy d.ts files

We need another npm module here, so in package.json we need to add a reference to flatten under devDependencies:

"gulp-flatten": "^0.2.0"

Then add a copy task to the gulpfile:

var gulp = require('gulp'),
    ts = require('gulp-typescript'),
    merge = require('merge'),
    flatten = require('gulp-flatten');

var webroot = "wwwroot";

var paths = {
    tsSource: './AureliaLogic/**/*.ts',
    tsOutput: "./" + webroot,
    tsDef: "./typings/",
    importedTypings: "./" + webroot + "/jspm_packages/**/*.d.ts"
};
Install TSD

TSD is easily installed with npm, so just head over to the command prompt again, position in the root of the project and type:

npm install tsd

After installation we need to configure and set up tsd by typing:

tsd init

This will create a folder in the root of the project called typings; here we will put all our typing files (*.d.ts).

Create a Gulp task to copy d.ts files

We need another npm module here, so in package.json we need to add a reference to gulp-flatten under devDependencies:

"gulp-flatten": "^0.2.0"

Then add a copy task to the gulpfile:

var gulp = require('gulp'),
    ts = require('gulp-typescript'),
    merge = require('merge'),
    flatten = require('gulp-flatten');

var webroot = "wwwroot";

var paths = {
    tsSource: './AureliaLogic/**/*.ts',
    tsOutput: "./" + webroot,
    tsDef: "./typings/",
    importedTypings: "./" + webroot + "/jspm_packages/**/*.d.ts"
};

Then define a new task at the bottom of the gulpfile:

gulp.task('copy-defs', function () {
    gulp.src(paths.importedTypings)
        .pipe(flatten())
        .pipe(gulp.dest(paths.tsDef + "/jspmImports/"));
});

Now install the new npm package by running:

npm install

And then run the copy script. If you don't want to run the Gulp tasks from VS Code, it's possible to run them from the console as well. Just position in the root of the project and type:

gulp copy-defs

This executes the new copy task. The last thing to fix regarding typings is to manually install the type definitions for core-js. Do this by typing:

tsd install core-js

Now we need to get all our newly copied type definitions into our tsd file. Console to the rescue as usual; type:

tsd rebundle

If all has gone well, a load of d.ts files will be added from our new jspmImports folder, and the type def for core-js will be there as well.

Add type references to our TypeScript file

Now we can add a reference to the tsd file in our app.ts file. Add the following at the top of the file:

/// <reference path="../typings/tsd.d.ts" />

Now run the TypeScript compile task again!
Given that everything has worked through the previous steps, it's now time to test the result in the browser. Start the ASP.NET 5 web server again if it's off by running:

dnx web

Then open a browser and go to localhost:5000/index.html. At first we will see a classical Hello World; this will later be replaced by another message, as soon as Aurelia loads and takes over execution. Success!

About choosing no transpiler

So why not choose a transpiler in the initial jspm setup? Mostly because, when starting out, it's nice to have more control by running the TypeScript compilation with Gulp. And since Gulp was already used for a few other tasks, it's neat to use. One thing I never demoed in the post was the use of the watch task. This is a great task when everything is set up correctly, as it will monitor your TypeScript files and compile them as soon as a change is saved, which makes for a great quick feedback loop!

When choosing to set up TypeScript as the compiler from jspm, it's not necessary to compile the TypeScript files at all. This is handled by a very cool plugin that compiles the TypeScript in the browser, but it also means you have to reload and watch the console for any compilation errors when working with it.

Version Disclaimer

At the time of writing this blog post, the following versions of tools and frameworks were used:

Visual Studio Code v0.10.2
TypeScript v1.6.2
Aurelia 1.0.0-beta.1
tsd v4
Gulp v3.9.0
Gulp-typescript v2.9.2
Merge v1.2.0
jspm v0.16.15
node v0.12.2
dnvm vrc-1

The code was tested in Microsoft Edge.
More TypeScript posts for the curious

- Tutorial on TypeScript Functions: TypeScript Functions
- Tutorial on TypeScript Classes: TypeScript Classes
- Tutorial for setting up a TypeScript – ASP.NET 5 app in VS2015: Getting started: TypeScript 1.5 in Visual Studio 2015
- Tutorial for setting up a TypeScript – ASP.NET 5 app in Visual Studio Code: How-to: TypeScript in Visual Studio Code
- Setup of a TypeScript – MVC 6 app that uses RequireJS in VS2015: Setting up a TypeScript – RequireJS – Gulp – MVC 6 Web Project in Visual Studio 2015 RC

Happy coding! 🙂

14 thoughts on “Aurelia, Visual Studio Code and TypeScript”

Hey Andreas! Great post, really enjoyed it. I’ve found a small mistake. In the section “Setting up TypeScript compilation with Gulp” you mentioned the project.json instead of the package.json file (“First we need to install a couple of npm packages. Open the project.json file in the root [..]”) to install the required npm packages. Also in the code listing title (“project.json with added devDependencies section”).

Ouch! Thanks for the heads up, not good to edit the wrong .json 🙂 Thanks for reading it!

You’re welcome! 🙂

Pingback: TypeScript Classes Part II | mobilemancer
Pingback: TypeScript Classes Part III | mobilemancer
Pingback: Getting started: TypeScript 1.5 in Visual Studio 2015 | mobilemancer

I’m impressed, I must say. Rarely do I come across a blog that’s equally educative and interesting, and I would like to tell you, you’ve hit the nail on the head.

Thanks 🙂

I am curious why one would use Aurelia with ASP.NET MVC. Is that purely for SEO? I noticed the skeleton app uses Razor syntax for MVC views. Why would we want to use both?

Hi, you don’t have to use ASP.NET/MVC with Aurelia. The Aurelia views are HTML intertwined with Aurelia’s binding syntax, no Razor involved 🙂 So it might look similar to the Razor syntax at times, but it’s actually not. Nor is there a need to host Aurelia from ASP.NET; in fact I’d bet most people don’t. I’m using it because I like MS tech, I work with MS tech professionally, and I like to play around with the new stuff in my spare time. Happy coding 🙂

If I follow this post to the letter, I keep getting ‘Duplicate identifier ‘$1’’ errors. Any chance you know why this happens?

Actually, scratch that. I had copied in files from another project, and that caused the issues.

It’s not always easy when the error reporting is not crystal clear 🙂 Glad you got it working though. Hope you’re going to enjoy Aurelia as much as I am. Happy coding!

Pingback: TypeScript Classes – mobilemancer
https://mobilemancer.com/2015/11/30/aurelia-visual-studio-code-and-typescript/
Pavel Emelyanov <xemul@parallels.com> writes:

> On 06/03/2014 09:26 PM, Serge Hallyn wrote:
>> Quoting Pavel Emelyanov (xemul@parallels.com):
>>> On 05/29/2014 07:32 PM, Serge Hallyn wrote:
>>>> Quoting Marian Marinov (mm@1h.com):
>>>>> -----BEGIN PGP SIGNED MESSAGE-----
>>>>> Hash: SHA1
>>>>>
>>>>> On 05/29/2014 01:06 PM, Eric W. Biederman wrote:
>>>>>> Marian Marinov <mm@1h.com> writes:
>>>>>>
>>>>>>> Hello,
>>>>>>>
>>>>>>> I have the following proposition.
>>>>>>>
>>>>>>> Number of currently running processes is accounted at the root user
>>>>>>> namespace. The problem I'm facing is that multiple containers in
>>>>>>> different user namespaces share the process counters.
>>>>>>
>>>>>> That is deliberate.
>>>>>
>>>>> And I understand that very well ;)
>>>>>
>>>>>>> So if containerX runs 100 with UID 99, containerY should have NPROC
>>>>>>> limit of above 100 in order to execute any processes with its own UID 99.
>>>>>>>
>>>>>>> I know that some of you will tell me that I should not provision all of
>>>>>>> my containers with the same UID/GID maps, but this brings another problem.
>>>>>>>
>>>>>>> We are provisioning the containers from a template. The template has a
>>>>>>> lot of files, 500k and more. And chowning these causes a lot of I/O and
>>>>>>> also slows down provisioning considerably.
>>>>>>>
>>>>>>> The other problem is that when we migrate one container from one host
>>>>>>> machine to another, the IDs may be already in use on the new machine and
>>>>>>> we need to chown all the files again.
>>>>>>
>>>>>> You should have the same uid allocations for all machines in your fleet
>>>>>> as much as possible. That has been true ever since NFS was invented and
>>>>>> is not new here. You can avoid the cost of chowning if you untar your
>>>>>> files inside of your user namespace. You can have different maps per
>>>>>> machine if you are crazy enough to do that. You can even have shared
>>>>>> uids that you use to share files between containers as long as none of
>>>>>> those files is setuid. And map those shared files to some kind of
>>>>>> nobody user in your user namespace.
>
> I was thinking about "lightweight mapping" which is simple shifting. Since
> we're trying to make this co-work with user-ns mappings, simple uid/gid shift
> should be enough. Please, correct me if I'm wrong.
>
> If I'm not, then it looks to be enough to have two per-sb or per-mnt values
> for uid and gid shift. Per-mnt for now looks more promising, since container's
> FS may be just a bind-mount from shared disk.
>
> User-space crossing? From my point of view it would be enough if we just turn
> uid/gid read from disk (well, from wherever the FS gets them) into uids that
> would match the user-ns's ones. This should cover the VFS layer and related
> syscalls only, which is, IIRC, the stat family and chown.
>
> Ouch, and the whole quota engine :\

And posix acls.

But all of this is 90% done already. I think today we just have conversions
to the initial user namespace. We just need a few tweaks to allow it and a
per-superblock user namespace setting.

Eric
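The "simple shifting" discussed in the thread can be illustrated outside the kernel. The sketch below is purely illustrative Python, not kernel code; the helper names and the overflow uid are assumptions made for the example:

```python
# Illustrative sketch of the per-mount uid/gid "shift" idea: a container sees
# uids starting at 0, while on disk they are stored offset by a per-mount shift.

UID_INVALID = 65534  # unresolvable ids collapse to an "overflow"/nobody uid

def disk_to_ns(disk_uid, shift, count):
    """Translate an on-disk uid into a container uid, given the mount's shift."""
    ns_uid = disk_uid - shift
    return ns_uid if 0 <= ns_uid < count else UID_INVALID

def ns_to_disk(ns_uid, shift, count):
    """Translate a container uid back to the on-disk value (e.g. for chown)."""
    return ns_uid + shift if 0 <= ns_uid < count else UID_INVALID

# Two containers provisioned from the same template, each with its own shift:
print(disk_to_ns(100099, shift=100000, count=65536))  # uid 99 inside container A
print(disk_to_ns(200099, shift=200000, count=65536))  # uid 99 inside container B
print(ns_to_disk(99, shift=100000, count=65536))      # stored as 100099 on disk
```

With such a per-mount shift, no chowning of the template files is needed; only the shift value differs between containers.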
https://lkml.org/lkml/2014/6/3/628
Trouble with model for calculating and adding table values dependent on multiple rows

Jon Swanson (Ranch Hand, joined Oct 10, 2011) posted Dec 02, 2011 13:21:06

I've been over in the beginners forum working on getting a model for dealing with some complex data interactions. I've started putting a set of classes together, but am losing sight of how I'd ever get the data and results in and out of the table. So I thought I would ask for some advice on the Swing side of things.

The requirement is that the user enters data in the first four or five columns and the program populates the other two or three columns (one column is optional for the user and needs to be populated if they don't enter a value). For any given row, the three columns that need to be computed require first collecting the rows that have the same values in other fields and then using them to calculate a value that goes in every row used in the calculation.

So with these columns: a b c d e f g

To calculate (e), I first find the values of d for all the rows where (a,b,c) are the same as this row. I then calculate a value of e and put it in each of those rows. I also calculate a d' but that is not exposed to the user. I then collect all the values of c, d', and e which have the same values of (a,b) as the current row. I use the values of c, d', and e to calculate values for f and g (can only have one d' and e per c). I then add those values to all the rows I used for the calculation. Finally, I take the unique sets of (a,b,e,f) and use that to calculate values that go in another table.

I was trying to avoid the expense of looping over all the rows and finding the unique sets of (a,b) or (a,b,c), so I put something together that uses nested hashmaps. For any row, I can enter (a,b,c,d) and in the process of adding the data, all the needed values get computed. But I'm having trouble seeing how to map back to table rows and columns. The basics of the code are shown below.
Any advice on the right way to do this would be appreciated.

import java.util.*;

class DataModel {
    double result;
    Map<DataPoint, ValuePoint> data = new HashMap<DataPoint, ValuePoint>();

    public void add(DataPoint dataPoint, DataPoint nestedDataPoint, double d, double e) {
        if (!data.containsKey(dataPoint)) {
            data.put(dataPoint, new ValuePoint());
        }
        data.get(dataPoint).add(nestedDataPoint, d, e);
        updateResult();
    }

    public double[] getRaw(DataPoint dataPoint, DataPoint nestedDataPoint) {
        double[] pts = new double[2];
        return pts;
    }

    public void updateResult() {
        System.out.println("Compute result");
    }
}

class ValuePoint {
    double f;
    double g;
    Map<DataPoint, NestedValuePoint> nestedData = new HashMap<DataPoint, NestedValuePoint>();

    public void add(DataPoint nestedDataPoint, double d, double e) {
        if (!nestedData.containsKey(nestedDataPoint)) {
            nestedData.put(nestedDataPoint, new NestedValuePoint());
        }
        nestedData.get(nestedDataPoint).add(d, e);
        updateFandG();
    }

    public double getRaw(DataPoint nestedDataPoint, double d) {
        return nestedData.get(nestedDataPoint).getRaw(d);
    }

    public void updateFandG() {
        System.out.println("Compute f and g");
    }
}

class NestedValuePoint {
    double h;
    double i;
    List<DataPair> nestedData = new ArrayList<DataPair>();

    public void add(double x, double y) {
        nestedData.add(new DataPair(x, y));
        updateHandI();
    }

    public double getRaw(double x) {
        for (DataPair pair : nestedData) {
            if (x == pair.get()[0]) return pair.get()[1];
        }
        return Double.NaN;
    }

    public double[] getResults() {
        double[] pts = new double[2];
        pts[0] = h;
        pts[1] = i;
        return pts;
    }

    public void updateHandI() {
        System.out.println("Compute h and i");
    }
}

class DataPair {
    double x;
    double y;

    DataPair(double xn, double yn) {
        x = xn;
        y = yn;
    }

    public void set(double xn, double yn) {
        x = xn;
        y = yn;
    }

    public double[] get() {
        double[] pts = new double[2];
        pts[0] = x;
        pts[1] = y;
        return pts;
    }
}

Ranganathan Kaliyur Mannar (Bartender, joined Oct 16, 2003)
posted Dec 06, 2011 00:56:32

Just as you have a DataModel, the JTable has its data in a TableModel. This is an interface which has methods like, say, 'getValueAt(row, column)'. At runtime, when the table is rendered (drawn), this method will be called, passing the current row number and the current column number. So, you need to write your own implementation of table model. There is a DefaultTableModel in the API which uses a Vector to hold the data. There is also an AbstractTableModel, which you can extend to use your own internal representation and implementation. You can go through the tutorial to understand table models.

You should look to represent a row as a bean (which will hold a, b, c values) with, say, a 'calculate' method which, when invoked, will calculate the 'd, e, f' values. And then in the table model, you can decide which to show and which to hide.

Ranga.
SCJP 1.4, OCMJEA/SCEA 5.0.

Jon Swanson posted Dec 06, 2011 10:27:31

Thanks. Maybe I should have been more specific. One thing I did for practice was create a simple table model using an Object[][] array. I created my own table model based on the DefaultTableModel and also created my own TableModelListener. It was just a test, so I had two rows the user could edit, and when one of these cells was changed, I put the sum (if applicable) in a third non-editable column. So I think I understand the basics enough to handle straight-forward cases.

The table I have been asked to produce requires that certain columns (optionally) are filled by pulling data from all rows with the same values in the first two columns as the column being updated. Then once the new value is computed, it gets placed in appropriate cells for all the columns with that (a,b) pair. Before I can compute the value, I need to compute another value that requires data from all rows where the first three columns are the same as the current row.
Again, I need to write the updated data back to all the rows used in the calculation. Perhaps more problematic, if the cell that is changing is in column A, for example, I need to update all the rows that had the old (a,b) value and then update all the rows that have the new (a,b) value. If the value in column c changes, I need to update old (a,b,c), new (a,b,c), old (a,b) and new (a,b).

Because my data has a hierarchy of (a,b) and then c, I created a hashmap using (a,b) as the key, and the value is an object that includes a hashmap on c. When an object is instantiated in this data hierarchy, all the needed intermediate values get updated. So in a 'batch' processing mode this works great. For example, it's easy to extract the value associated with any given (a,b) pair or (a,b,c) triplet. BUT, these classes don't know anything about the table or what rows the data came from. So implementing a getRow or getColumn is problematic.

I am trying to learn how to approach this problem in an intelligent fashion. There are lots of options I have been considering.

1. Take my data model class, which is now independent of a table representation, and add lists indicating what row the information came from. I think I might need to duplicate this at several levels, i.e., add to the (a,b) value class a list of rows, add to the (a,b,c) value class a list of rows, add to the (a,b,c,d,e) value class the specific row that this data item came from (duplicate rows are allowed). I think this could get messy when the user changes an existing value of a or deletes a row, but I think I could end up implementing a getRow and getColumn.

2. Bail on having a separate data model. Instead of trying to be clever, just loop through the table whenever a cell is changed and construct the list of rows that need to be updated on the fly.
The only thing that I think this would have trouble with, besides rapid retrieval of the computed values, is knowing what the old value of the row was, so I can update those rows. I'm thinking this might mean keeping two sets of books, or some simpler hashmap of what rows were used to calculate the values in the previous iteration.

3. Some other major paradigm change, like rebuilding the whole data model each time the table data changes. Then I don't have to worry about adding or changing items in the data model, and a fairly simple table model can manage the user input, but building the data model is a bit time intensive.

Then when all this is resolved, each time a cell is updated, the request is that a method be run which uses the unique set of (a,b,e,f) and (r,s) from a different table to create or update a graph. The request is that the graph updates any time a value is added or modified in the table. My confusion there is perhaps deepest. Let's say I take my data model and add arrays to track rows and get methods. I can tell the table to use it as my table data model. But when the cell data changes, I have to get data from other tables and update graphs. The order has to be right or the graphs are wrong. That seems to argue for a control program, where I would keep a data model for the (a,b,c) data and a data model for the (r,s) data, and the control program would be handed the table objects and the graph objects. Then it could register as a listener to both tables (data could also change in the other table or in the graph panes themselves, which would require updates all around). But then, if the data is in a control class, I'm not sure what the appropriate way is for that control class to communicate with the table that rows and cells need to change, or for the table to let the control program know rows are added or deleted.
The tutorials are helpful, but cover very simple cases, and really I am trying to get my head around the right way to approach a situation where a data change triggers a lot of actions that must be run in a specific order, and some of these actions result in the data being changed.
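The on-the-fly grouping sketched in option 2 above can be written out language-neutrally. The following is purely illustrative Python (not Swing code); the e/f aggregates are placeholder sums standing in for the real domain calculations:

```python
# Sketch of option 2: on each edit, rescan all rows, rebuild the groups keyed
# by (a, b, c) and (a, b), then write each aggregate back to every row in its
# group. Rows are dicts here; the aggregates are simple sums for illustration.
from collections import defaultdict

def recompute(rows):
    # group rows by (a, b, c) and compute e for each group from the d values
    by_abc = defaultdict(list)
    for row in rows:
        by_abc[(row['a'], row['b'], row['c'])].append(row)
    for group in by_abc.values():
        e = sum(r['d'] for r in group)          # placeholder aggregate
        for r in group:
            r['e'] = e                          # write back to every row used
    # then group by (a, b) and compute f from the per-c values of e
    by_ab = defaultdict(list)
    for row in rows:
        by_ab[(row['a'], row['b'])].append(row)
    for group in by_ab.values():
        f = sum({r['c']: r['e'] for r in group}.values())  # one e per c
        for r in group:
            r['f'] = f
    return rows

rows = [
    {'a': 1, 'b': 1, 'c': 1, 'd': 2.0},
    {'a': 1, 'b': 1, 'c': 1, 'd': 3.0},
    {'a': 1, 'b': 1, 'c': 2, 'd': 4.0},
]
recompute(rows)
print([r['e'] for r in rows])  # [5.0, 5.0, 4.0]
print([r['f'] for r in rows])  # [9.0, 9.0, 9.0]
```

Because the whole scan runs on every edit, there is no stale bookkeeping about old (a,b) values: the old and new groups are both rebuilt from the current table contents.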
http://www.coderanch.com/t/560570/GUI/java/Trouble-model-calculating-adding-table
iMeshGenerator Struct Reference
[Crystal Space 3D Engine]

iMeshGenerator defines the interface for a mesh generator.

#include <iengine/meshgen.h>

Inheritance diagram for iMeshGenerator: (diagram not reproduced)

Detailed Description

iMeshGenerator defines the interface for a mesh generator.

Main creators of instances implementing this interface:
Main ways to get pointers to this interface:
Main users of this interface:
- engine

Definition at line 161 of file meshgen.h.

Member Function Documentation

- Add a mesh on which we will map our geometry.
- Create a geometry specification for this mesh generator.
- Get the block count.
- Get the cell count.
- Get a specific geometry.
- Get the number of geometry specifications.
- Get a specific mesh.
- Get the number of meshes.
- Get the sample box.
- Remove a geometry.
- Remove a mesh.
- Set the alpha scale. If this is set then objects in the distance will use alpha mode.
- Set the maximum number of blocks to keep in memory at the same time. A block contains generated positions. Generating a block may be expensive (depending on density and size of the cells), so it may be good to have a high number here. Having a high number means more memory usage though. Default is 100.
- Set the number of cells to use in one direction. Total cells will be 'number*number'. A cell is a logical unit that can keep a number of generated positions. Using bigger (fewer) cells means that more positions are generated at once (possibly causing hiccups when this happens). Smaller cells may mean more runtime overhead. Default is 50.
- Set the density scale. If this is set then objects in the distance can have a lower density.

The documentation for this struct was generated from the following file: iengine/meshgen.h

Generated for Crystal Space 1.0.2 by doxygen 1.4.7
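The distance-based density falloff that the last setter describes can be sketched as a simple interpolation. The function and parameter names below are illustrative assumptions for the example, not the Crystal Space API:

```python
def density_factor(distance, mindist, maxdist, maxfactor):
    """Illustrative falloff: full density up to mindist, then a linear blend
    toward maxfactor at maxdist and beyond. Names are assumed, not the real API."""
    if distance <= mindist:
        return 1.0
    if distance >= maxdist:
        return maxfactor
    t = (distance - mindist) / (maxdist - mindist)
    return 1.0 + t * (maxfactor - 1.0)

print(density_factor(5.0, 10.0, 100.0, 0.2))    # close objects keep full density
print(density_factor(100.0, 10.0, 100.0, 0.2))  # far objects are thinned out
print(density_factor(55.0, 10.0, 100.0, 0.2))   # halfway between the two
```

The same shape of curve applies to the alpha scale setter: near geometry is drawn fully, distant geometry fades or thins according to the configured factor.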
http://www.crystalspace3d.org/docs/online/api-1.0/structiMeshGenerator.html
This guide shows you how to perform common scenarios using the Windows Azure Table storage service. The samples are written using the Python API. The scenarios covered include creating and deleting a table, and inserting and querying entities in a table. For more information on tables, see the Next Steps section.

Contents:
- What is the Table Service?
- Concepts
- Create a Windows Azure Storage Account
- How To: Create a Table
- How To: Add an Entity to a Table
- How To: Update an Entity
- How To: Change a Group of Entities
- How To: Query for an Entity
- How To: Query a Set of Entities
- How To: Query a Subset of Entity Properties
- How To: Delete an Entity
- How To: Delete a Table
- Next Steps

What is the Table Service?

You can use the Table service to store and query huge sets of structured, non-relational data, and your tables will scale as demand increases.

Concepts

The Table service contains the following components:

URL format: Code addresses tables in an account using this address format: http://<storage account>.table.core.windows.net/<table>

You can address Azure tables directly using this address with the OData protocol. For more information, see OData.org.

Storage Account: All access to Windows Azure Storage is done through a storage account. The total capacity of a storage account for all blob, table, and queue data is 200TB for storage accounts created on or after June 8th, 2012; for storage accounts created before that date, total capacity is 100TB. See Windows Azure Storage Scalability and Performance Targets for details about storage account capacity.

Create a Windows Azure Storage Account

To use storage operations, you need a Windows Azure storage account. You can create a storage account by following these steps. (You can also create a storage account using the REST API.)

1. Log into the Windows Azure Management Portal.
2. At the bottom of the navigation pane, click NEW and create a new storage account.
3. If you will use storage from your Windows Azure application, select the same region where you will deploy your application. Optionally, you can enable geo-replication.
4. Click CREATE STORAGE ACCOUNT.
Note: If you need to install Python or the Client Libraries, please see the Python Installation Guide.

How To: Create a Table

The TableService object lets you work with table services. Add the following near the top of any Python file in which you wish to programmatically access Windows Azure Storage:

from azure.storage import *

The following code creates a TableService object and uses it to create a table (the account name and key shown are placeholders):

table_service = TableService(account_name='myaccount', account_key='mykey')
table_service.create_table('tasktable')

How To: Add an Entity to a Table

Every entity must have a PartitionKey and a RowKey. These are the unique identifiers of your entities, and are values that can be queried much faster than your other properties. The system uses the PartitionKey to automatically distribute the table's entities over many storage nodes. Entities with the same PartitionKey are stored on the same node. The RowKey is the unique ID of the entity within the partition it belongs to.

How To: Update an Entity

An update replaces an existing entity; if the entity doesn't exist, then the update operation will fail. If you want to store an entity regardless of whether it already exists, use an insert-or-replace operation, which stores the entity under its PartitionKey and RowKey.

How To: Query for an Entity

You can retrieve a single entity by its PartitionKey and RowKey:

task = table_service.get_entity('tasktable', 'tasksSeattle', '1')
print(task.description)
print(task.priority)

How To: Query a Set of Entities

This example finds all tasks in Seattle based on the PartitionKey:

tasks = table_service.query_entities('tasktable', "PartitionKey eq 'tasksSeattle'")
for task in tasks:
    print(task.description)

How To: Query a Subset of Entity Properties

A query to a table can retrieve just the properties you would like to bring over to the client. The query in the following code only returns the descriptions of entities in the table. Please note that the following snippet only works against the cloud storage service; this is not supported by the Storage Emulator.

tasks = table_service.query_entities('tasktable', "PartitionKey eq 'tasksSeattle'", 'description')
for task in tasks:
    print(task.description)

How To: Delete an Entity

You can delete an entity using its partition and row key:

table_service.delete_entity('tasktable', 'tasksSeattle', '1')

How To: Delete a Table

The following code deletes a table from a storage account:

table_service.delete_table('tasktable')

Next Steps

Now that you've learned the basics of table storage, follow these links to learn how to do more complex storage tasks.
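The PartitionKey/RowKey addressing described above behaves like a two-level dictionary: the partition groups entities together, and the row key is unique within it. A minimal in-memory sketch (illustrative only, not the Azure SDK):

```python
# Minimal in-memory model of table addressing: entities are keyed by
# (PartitionKey, RowKey); a whole partition can be scanned cheaply, and
# re-inserting under the same pair replaces the entity.
class ToyTable:
    def __init__(self):
        self.partitions = {}  # PartitionKey -> {RowKey -> entity dict}

    def insert_or_replace(self, pk, rk, entity):
        self.partitions.setdefault(pk, {})[rk] = entity

    def get(self, pk, rk):
        return self.partitions[pk][rk]

    def query_partition(self, pk):
        # analogous to "PartitionKey eq '...'": all rows in one partition
        return list(self.partitions.get(pk, {}).values())

t = ToyTable()
t.insert_or_replace('tasksSeattle', '1', {'description': 'Take out the trash'})
t.insert_or_replace('tasksSeattle', '2', {'description': 'Wash the car'})
t.insert_or_replace('tasksSeattle', '1', {'description': 'Take out the recycling'})
print(len(t.query_partition('tasksSeattle')))     # 2 entities: row '1' was replaced
print(t.get('tasksSeattle', '1')['description'])  # Take out the recycling
```

This is why entities sharing a PartitionKey can be fetched together efficiently, while cross-partition queries require touching multiple storage nodes.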
http://www.windowsazure.com/en-US/develop/python/how-to-guides/table-service/
Different models of diffusion MRI can be compared based on their accuracy in fitting the diffusion signal. Here, we demonstrate this by comparing two models: the diffusion tensor model (DTI) and Constrained Spherical Deconvolution (CSD). These models differ from each other substantially. DTI approximates the diffusion pattern as a 3D Gaussian distribution, and has only 6 free parameters. CSD, on the other hand, fits many more parameters. The models are also not nested, so they cannot be compared using the log-likelihood ratio.

A general way to perform model comparison is cross-validation [Hastie2008]. In this method, a model is fit to some of the data (a learning set) and the model is then used to predict a held-out set (a testing set). The model predictions can then be compared to estimate prediction error on the held-out set. This method has been used for comparison of models such as DTI and CSD [Rokem2014], and has the advantage that the comparison is impervious to differences in the number of parameters in the model, and it can be used to compare models that are not nested.

In DIPY, we include an implementation of k-fold cross-validation. In this method, the data is divided into \(k\) different segments. In each iteration, \(\frac{1}{k}th\) of the data is held out and the model is fit to the other \(\frac{k-1}{k}\) parts of the data. A prediction of the held-out data is done and recorded. At the end of \(k\) iterations a prediction of all of the data will have been conducted, and this can be compared directly to all of the data.

First, we import the modules needed for this example.
In particular, the reconst.cross_validation module implements k-fold cross-validation.

import numpy as np
np.random.seed(2014)
import matplotlib.pyplot as plt
import dipy.data as dpd
import dipy.reconst.cross_validation as xval
import dipy.reconst.dti as dti
import dipy.reconst.csdeconv as csd
import scipy.stats as stats
from dipy.core.gradients import gradient_table
from dipy.io.image import load_nifti
from dipy.io.gradients import read_bvals_bvecs

We fetch some data and select a couple of voxels to perform comparisons on. One lies in the corpus callosum (cc), while the other is in the centrum semiovale (cso), a part of the brain known to contain multiple crossing white matter fiber populations.

hardi_fname, hardi_bval_fname, hardi_bvec_fname = dpd.get_fnames('stanford_hardi')
data, affine = load_nifti(hardi_fname)
bvals, bvecs = read_bvals_bvecs(hardi_bval_fname, hardi_bvec_fname)
gtab = gradient_table(bvals, bvecs)

cc_vox = data[40, 70, 38]
cso_vox = data[30, 76, 38]

We initialize each kind of model:

dti_model = dti.TensorModel(gtab)
response, ratio = csd.auto_response_ssst(gtab, data, roi_radii=10, fa_thr=0.7)
csd_model = csd.ConstrainedSphericalDeconvModel(gtab, response)

Next, we perform cross-validation for each kind of model, comparing model predictions to the diffusion MRI data in each one of these voxels. Note that we use 2-fold cross-validation, which means that in each iteration, the model will be fit to half of the data, and used to predict the other half.

dti_cc = xval.kfold_xval(dti_model, cc_vox, 2)
csd_cc = xval.kfold_xval(csd_model, cc_vox, 2, response)
dti_cso = xval.kfold_xval(dti_model, cso_vox, 2)
csd_cso = xval.kfold_xval(csd_model, cso_vox, 2, response)

We plot a scatter plot of the data with the model predictions in each of these voxels, focusing only on the diffusion-weighted measurements (each point corresponds to a different gradient direction). The two models are compared in each sub-plot (blue=DTI, red=CSD).
fig, ax = plt.subplots(1, 2)
fig.set_size_inches([12, 6])
ax[0].plot(cc_vox[gtab.b0s_mask == 0], dti_cc[gtab.b0s_mask == 0],
           'o', color='b', label='DTI in CC')
ax[0].plot(cc_vox[gtab.b0s_mask == 0], csd_cc[gtab.b0s_mask == 0],
           'o', color='r', label='CSD in CC')
ax[1].plot(cso_vox[gtab.b0s_mask == 0], dti_cso[gtab.b0s_mask == 0],
           'o', color='b', label='DTI in CSO')
ax[1].plot(cso_vox[gtab.b0s_mask == 0], csd_cso[gtab.b0s_mask == 0],
           'o', color='r', label='CSD in CSO')
ax[0].legend(loc='upper left')
ax[1].legend(loc='upper left')
for this_ax in ax:
    this_ax.set_xlabel('Data (relative to S0)')
    this_ax.set_ylabel('Model prediction (relative to S0)')
fig.savefig("model_predictions.png")

We can also quantify the goodness of fit of the models by calculating an R-squared score:

cc_dti_r2 = stats.pearsonr(cc_vox[gtab.b0s_mask == 0],
                           dti_cc[gtab.b0s_mask == 0])[0]**2
cc_csd_r2 = stats.pearsonr(cc_vox[gtab.b0s_mask == 0],
                           csd_cc[gtab.b0s_mask == 0])[0]**2
cso_dti_r2 = stats.pearsonr(cso_vox[gtab.b0s_mask == 0],
                            dti_cso[gtab.b0s_mask == 0])[0]**2
cso_csd_r2 = stats.pearsonr(cso_vox[gtab.b0s_mask == 0],
                            csd_cso[gtab.b0s_mask == 0])[0]**2

print("Corpus callosum\n"
      "DTI R2 : %s\n"
      "CSD R2 : %s\n"
      "\n"
      "Centrum Semiovale\n"
      "DTI R2 : %s\n"
      "CSD R2 : %s\n" % (cc_dti_r2, cc_csd_r2, cso_dti_r2, cso_csd_r2))

This should look something like this:

Corpus callosum
DTI R2 : 0.782881752597
CSD R2 : 0.805764364116

Centrum Semiovale
DTI R2 : 0.431921832012
CSD R2 : 0.604806420501

As you can see, DTI is a pretty good model for describing the signal in the CC, while CSD is much better in describing the signal in regions of multiple crossing fibers.
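Schematically, the k-fold procedure used by kfold_xval above can be sketched with a generic NumPy stand-in model. This is an illustration of the cross-validation loop only, not DIPY's implementation (the least-squares "model" is a placeholder for DTI or CSD):

```python
import numpy as np

def kfold_predict(model_fit, model_predict, x, y, k):
    """Hold out 1/k of the samples per fold, fit on the rest, and predict
    the held-out part; after k folds every sample has a prediction."""
    n = len(y)
    order = np.random.permutation(n)
    pred = np.empty_like(y)
    for fold in np.array_split(order, k):
        train = np.setdiff1d(order, fold)          # everything not held out
        params = model_fit(x[train], y[train])
        pred[fold] = model_predict(params, x[fold])
    return pred

# Stand-in "model": ordinary least squares on a design matrix.
fit = lambda X, y: np.linalg.lstsq(X, y, rcond=None)[0]
predict = lambda beta, X: X @ beta

rng = np.random.default_rng(2014)
X = np.column_stack([np.ones(100), rng.normal(size=100)])
y = X @ np.array([1.0, 2.0]) + 0.1 * rng.normal(size=100)
pred = kfold_predict(fit, predict, X, y, k=2)
print(np.corrcoef(y, pred)[0, 1] ** 2)  # R-squared of held-out predictions
```

Because every prediction is made on data the model never saw during fitting, the resulting R-squared is an honest estimate of prediction accuracy, regardless of how many free parameters each model has.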
Example source code You can download the full source code of this example. This same script is also included in the dipy source distribution under the doc/examples/ directory.
https://dipy.org/documentation/1.4.1./examples_built/kfold_xval/
1. Connect to the net and type "yum install gcc" (without quotes) in a terminal/command prompt window. This will install the open source compiler called gcc, which can compile C, C++ and several other languages. Restart the PC.

2. Open up a text editor from the menu; there are plenty, such as KWrite.

3. Save this code in it with the file name helloworld.c:

#include <stdio.h>

int main(void)
{
    printf("\nHello World\n");
    return 0;
}

4. Make sure you give the extension .c.

5. Now go to any terminal/command prompt of your choice. Type "gcc -o helloworld helloworld.c" (without quotes). Then execute the program by typing, in the same terminal, "./helloworld" (without quotes).

Let me know if this works. Do not lose patience. I am also learning on my own at my own pace. Reply ASAP. Good luck and god bless.
http://www.linuxhomenetworking.com/forums/showthread.php/17856-compiling-c-programs-in-suse-linux?p=137350&viewfull=1
CC-MAIN-2013-48
refinedweb
150
88.94
In one of my projects, I had to develop a web service and deliver the setup to the client. I faced problems while changing the target web service, as the client will install it anywhere on his machine, with any name. With this article, I am trying to make it easy to understand for a person who is a beginner to Web Services (just like me :-D) how to change the target web service.

One of the simplest ways of adding a web service to a client application is by adding a web reference to the web service by specifying the URL of the .asmx file. This generates the required proxy object; that's what VS.NET takes care of. What we really want, though, is a configurable URL, so that even if the original web service is moved, your client applications need not be recompiled. In this article, we will see how to do just that. I will create two web services here, one of which will be called directly, while the other will look into the web.config file for its reference.

For our example, we will develop a simple web service that has only one method. The following steps will show you how to proceed.

using System;
using System.Web.Services;

namespace HWWebService
{
    public class HWClass : System.Web.Services.WebService
    {
        [WebMethod]
        public string HelloWorld()
        {
            return "Hello Application, from Web Service";
        }
    }
}

The web service class (HWClass) contains a single method called HelloWorld() that returns a string.

using System;
using System.Web.Services;

namespace HWWebService
{
    public class AnotherService : System.Web.Services.WebService
    {
        [WebMethod]
        public string AnotherHelloWorld()
        {
            return "Hello World, from Another Service";
        }
    }
}

The second web service class is named AnotherService. Also, it returns a different string from the AnotherHelloWorld() method so that you can identify the method call.

Let us build a simple web client for our web service. Add two text boxes, named txtReturnWS and txtAnotherWS, and two buttons, named btnWS and btnAnotherWS.
Add the following code in the Click event of the buttons:

private void btnWS_Click(object sender, System.EventArgs e)
{
    HelloWorld.HWServiceClass objProxy = new HelloWorld.HWServiceClass();
    objProxy.Url = GetHWServiceURL();
    txtReturnWS.Text = objProxy.HelloWorld();
}

private void btnAnotherWS_Click(object sender, System.EventArgs e)
{
    AnotherWorld.AnotherService objProxy = new AnotherWorld.AnotherService();
    txtAnotherWS.Text = objProxy.AnotherHelloWorld();
}

The key is the Url property of the objProxy class, which can be set to the required .asmx file. We store the URL in the <appSettings> section of the web.config file and retrieve it at run time. Now, even if you move your web service, all you need to do is change its URL in the web.config. The following code shows this:

private void btnWS_Click(object sender, System.EventArgs e)
{
    localhost.HWClass objProxy = new localhost.HWClass();
    objProxy.Url = GetHWServiceURL();
    Response.Write(objProxy.HelloWorld());
}

public string GetHWServiceURL()
{
    return System.Configuration.ConfigurationSettings.AppSettings["HWServiceURL"];
}

<appSettings>
    <add key="HWServiceURL" value="" />
</appSettings>

Note that in order for the above code to work correctly, both web services should have exactly the same web method signatures.

Hope it will be useful to the team! Happy dotnetting!

Het Waghela
http://www.codeproject.com/KB/cpp/WebServiceCall.aspx
crawl-002
refinedweb
483
51.34
sys/wait.h - declarations for waiting

#include <sys/wait.h>

The <sys/wait.h> header defines the following symbolic constants for use with waitpid():

- WNOHANG - Do not hang if no status is available; return immediately.
- WUNTRACED - Report status of stopped child process.

and the following macros for analysis of process status values:

- WEXITSTATUS() - Return exit status.
- WIFCONTINUED() - True if child has been continued.
- WIFEXITED() - True if child exited normally.
- WIFSIGNALED() - True if child exited due to uncaught signal.
- WIFSTOPPED() - True if child is currently stopped.
- WSTOPSIG() - Return signal number that caused process to stop.
- WTERMSIG() - Return signal number that caused process to terminate.

The following symbolic constants are defined as possible values for the options argument to waitid():

- WEXITED - Wait for processes that have exited.
- WSTOPPED - Status is returned for any child that has stopped upon receipt of a signal.
- WCONTINUED - Status is returned for any child that was stopped and has been continued.
- WNOHANG - Return immediately if there are no children to wait for.
- WNOWAIT - Keep the process whose status is returned in a waitable state.

The type idtype_t is defined as an enumeration type whose possible values include at least the following: P_ALL, P_PID, P_PGID.

The id_t type is defined as described in <sys/types.h>. The siginfo_t type is defined as described in <signal.h>. The rusage structure is defined as described in <sys/resource.h>. The pid_t type is defined as described in <sys/types.h>.

Inclusion of the <sys/wait.h> header may also make visible all symbols from <signal.h> and <sys/resource.h>.

The following are declared as functions and may also be defined as macros. Function prototypes must be provided for use with an ISO C compiler.

pid_t wait(int *);
pid_t wait3(int *, int, struct rusage *);
int waitid(idtype_t, id_t, siginfo_t *, int);
pid_t waitpid(pid_t, int *, int);

See also: wait(), waitid(), <sys/resource.h>, <sys/types.h>.
http://pubs.opengroup.org/onlinepubs/7990989775/xsh/syswait.h.html
CC-MAIN-2014-42
refinedweb
299
61.43
Functional Programming Is Not What You (Probably) Think

Learn some common misconceptions about functional programming side effects and how functional programming truly differs from the imperative style.

After seeing many of the comments from another article attempting to explain what FP (Functional Programming) is about, I thought I would take another attempt at my own explanation. First, just a little background about me: I am currently a Quality Engineer at Red Hat, and I have been writing Clojure off and on (mostly off) since 2010. I have just recently begun a journey learning PureScript, which is (sort of) a dialect of Haskell that targets a JavaScript engine. My goal in this article is to give yet another explanation of FP and give at least a notion of the differences between "mostly" FP and pure FP. I welcome any comments, questions, or corrections (especially with regards to pure functional programming, which I am still fairly new at).

Definition of a Pure Function

So in the previous article mentioned above, the author wanted to convey the notion of what a pure function is: namely, a function which, given the same input, always yields the same output and without any side effects. A side effect can include mutating any variable or affecting the "outside" world (for example writing to a file or stdio). A related but orthogonal concept to a pure function is a total function, which perhaps I'll cover in another article.

It must also be mentioned that technically, a mathematical function is a function of one argument, and it must map to one and only one result. Most pure FP languages take functions which appear to take multiple arguments and convert them to functions of one argument (which returns another function, which takes one arg). This process is called currying.
Note that currying is not a technical requirement of a pure function, since we can always curry a function anyhow. Here's an example of currying in JavaScript:

let fn = (a, b, c) => {
    return a / (b - c);
}

let curriedFn = a => {
    return b => {
        return c => {
            return a / (b - c);
        };
    };
};

let firstPartial = curriedFn(10);
let secondPartial = firstPartial(4);
console.log(secondPartial(2));
// or equivalently
console.log(curriedFn(10)(4)(2));

// here's a more compact version which bears some resemblance to haskell/purescript :)
let curryfn = a => b => c => a / (b - c)

/* purescript version
curryfn :: Int -> Int -> Int -> Int
curryfn a b c = a / (b - c)
*/

The Real Meaning of "Side Effect"

Going over the comments from that article, I realized that one of the big misunderstandings people have about pure functions is the notion of the term "side effect." In colloquial English, a "side effect" is an unintended consequence of an action. But in computer science, a side effect is any action that affects or alters the outside world, including the parameters passed to the function. This leads to confusion when an FP advocate says a function is effectful, but someone new to FP says, "But the function's intended behavior is to change the state! So it's not a side effect, just an effect." Problem is, that's not what side effect means from a computer science point of view.

An Effectful Example

So let's look at a concrete example of an effectful class in Java. Here's some sample Java code:

public class SideEffecting {
    private int level = 0;
    private String state = "normal";

    public int increase(int val) {
        return level += val;
    }

    private String determineLevel() {
        if (level < 10)
            state = "normal";
        else
            state = "too high";
        return state;
    }

    public int makeSafe(int decrementBy) {
        // loop by the decrementBy val until we're back to normal levels
        while (determineLevel().equals("too high")) {
            System.out.println(String.format("Warning! Level of %d is too high, attempting to settle...", this.level));
            level = level - decrementBy;
            state = determineLevel();
        }
        // return the level at which we're safe again
        System.out.println("System is back to normal levels now");
        return level;
    }

    public static void main(String[] args) {
        SideEffecting seff = new SideEffecting();
        seff.increase(24);
        seff.makeSafe(5);
        System.out.println(String.format("The safe level is now %d", seff.level));
    }
}

Here, we have a class that has some internal state representing a level, which is either normal or too high. If we set the level too high, we can call makeSafe to settle the system back down to normal. Hopefully, this code is relatively straightforward to understand. Obviously, we are keeping track of state internally in the object, and this state is implicitly used and changed by the methods, and thus this code is side effectful.

Why is it side effectful? Isn't my "intention" of the increase function to increase the level field? That's the colloquial interpretation of the term "side effect," but that's not what is meant in FP programming. The side effect here is that, relative to the increase function, it has mutated the outside world (the internal field level). One may argue that the level field is private, so it is properly encapsulated. But what if I throw in another thread calling makeSafe? Or later on, I add another method that changes level?

But there are even more effects lurking here. The state is also being mutated by the determineLevel function, and even the System.out.println calls are side effects. Any kind of IO is also a side effect, because you can never be sure if your filesystem, socket, or even console is available. Recall that the definition of a pure function is that given the same input, it must always yield the same answer. With any kind of IO, you can not be guaranteed this. Another thread may have blown away your file (or file system), the console may have been redirected, or a socket may time out, for example.
An Example Without Effects

So how can we make something similar but with pure functions?

public class NoSideEffects {
    public static class Tuple<F, S> {
        public F first;
        public S second;

        public Tuple(F f, S s) {
            this.first = f;
            this.second = s;
        }
    }

    private String determineLevel(int current) {
        if (current < 10)
            return "normal";
        else
            return "too high";
    }

    public Tuple<Integer, String> makeSafe(int currentLevel, int decrementBy, String log) {
        // recursively decrement by val until we're back to normal levels
        if (this.determineLevel(currentLevel).equals("too high")) {
            log = log + String.format("Warning! Level of %d is too high, attempting to settle...\n", currentLevel);
            return makeSafe(currentLevel - decrementBy, decrementBy, log);
        }
        // return the level at which we're safe again
        log = log + String.format("The safe level is now %d", currentLevel);
        return new Tuple<>(currentLevel, log);
    }

    public static void main(String[] args) {
        NoSideEffects noseff = new NoSideEffects();
        Tuple<Integer, String> start = new NoSideEffects.Tuple<>(24, "");
        Tuple<Integer, String> result = noseff.makeSafe(start.first, 5, start.second);
        System.out.println(result.second);
    }
}

First off, I hope the above example has shown that OOP and FP are not mutually exclusive. For some reason, many people think FP is "against" OOP. What FP is "against" is uncontrolled mutation and side effects (I put "against" in quotes because totally eliminating side effects is impossible). In fact, the astute reader may notice that all of the methods could have been declared static with no loss of computational equivalence.

So what has changed here? Probably the most obvious change is that inner Tuple class. I'll explain that in a moment. The next change that has occurred is that the "state" is being passed to the function explicitly now instead of implicitly as a field inside an object. For example, level is no longer a field inside the class; it's now a parameter passed to functions.
In fact, we don't even need the increase function anymore. Notice that the makeSafe function uses recursion instead of a while loop. This is quite frequent in functional programming, since recursion does not rely upon some kind of hidden inner state which is mutated and affects the control flow of a for or while loop. Some may argue that this is inefficient and creates more overhead by pushing a new frame onto the call stack. And depending on the language, they would be correct. Unfortunately, Java does not support Tail Call Optimization (TCO) yet, wherein a function which recurses in the tail position (as makeSafe does) is effectively optimized into a loop, but other languages on the JVM, such as Scala or Clojure, can do so. So depending on your language, do not shy away from recursive solutions to problems, and even if your language doesn't support TCO, very often you will know that your problem will be bounded and will not suffer a stack overflow.

So, about that Tuple. Notice that in the makeSafe call, instead of calling System.out.println, it is accumulating a logging value on every call. It is only in main where we get to see what happened during the makeSafe function call. makeSafe returns a Tuple of two elements, the first being an integer value which is our current level, and the second being a string which is the accumulation of our logging results. For those familiar with haskell or purescript, they may recognize this as something similar to the Writer monad.

The Advantages of Purity

So there is only one place in the NoSideEffects class where we have an effect of any kind, and yet the result is the same. Also, because makeSafe and determineLevel are pure functions, they are immune to time. It does not matter when or on what computer you run these functions; given the same input, they will always yield the same output.
If you call these functions in different threads, each thread will operate on different values (which may or may not be what you want... in this case spawning another thread calling makeSafe will not make it get to a safe level twice as fast, for example). Also, since there is no IO going on in makeSafe, you never have to worry that your console got redirected or went away (or if I rewrote this to log to a file, that the filesystem went away or someone changed file permissions under your feet).

"A-ha!! But your main called System.out.println, so your so-called pure program isn't pure!! Got you!!"

Well, yes :). But I can test makeSafe in isolation, asserting the accumulated log has a certain value. Pure functional programming does not deny side-effectful functions. Perhaps you've heard of Monads before. Well, there are Monads for stateful computations with the ST Monad. However, do not think that Monads are "impure" (or only about IO, or only about state). Perhaps one day I'll write Yet Another Monad Tutorial. Also, every program's main has to eventually do IO, otherwise there would be no way to know the result of your program. PureScript calls it Eff (for Effect), but basically it says that the program (or function) has some kind of effect.

The trick in pure FP is that you have a pure part of computation which is lifted into a larger impure language. Pure FP languages keep track of what parts of your program are pure and which are impure. This is in contrast to mostly FP languages (like Clojure) for example, where you can never be sure if a function is truly pure or not and instead impure functions are usually marked with something in their function names like an exclamation point.

(def foo (atom 0))

(defn inc-by-10-foo! [foo]
  (swap! foo #(+ % 10)))

(let [first10 (inc-by-10-foo! foo)
      second10 (inc-by-10-foo! foo)]
  (println first10)
  (println second10))

There are some other considerations here. The makeSafe function is accumulating the log value in a string.
In Java, strings are immutable, but if you have a really long running function, it might not be a good idea to store the string in an ever increasing parameter (not to mention all the garbage collection that will be happening). You may want to use a StringBuffer, even though StringBuffers mutate their value. Ultimately, even a pure FP language could potentially throw an out of memory error even on a "pure" function (and pure functions aren't supposed to have side effects like throwing exceptions). Again, pure style FP programming is not about disallowing side effects; it's about keeping track of them and knowing what and how to compose pure and impure functions together.

Tracking (Im)Purity

Another common misperception I saw from the comments in the other article was that, ultimately, a function calls a function, which calls a function (etc., etc.), so it's impossible to keep track of whether a function is pure or not. This is not true if you are programming in a pure FP language like Haskell, PureScript, or Idris. The type system ensures whether you have a pure function or not. This goes back to the real benefit of pure FP systems, which is in knowing what parts of your program are pure and which are not. This is the hallmark difference IMHO between "mostly" pure FP languages and pure FP languages. To put it another way, the purity of a function is part of the type information. The parts of your program that are impure need more testing because you don't know if something might fail outside of your control. Even pure FP languages have the need to read files (for config data, for example), write to files (logging data, for example), or sometimes to keep track of shared state between multiple processes of execution. Here's an example of some PureScript code where the purity of the function is clearly marked by its type:

-- This example is in purescript and handles data events from an underlying
-- Readable Stream object from nodejs.
import Prelude

import Control.Monad.Eff (Eff)
import Control.Monad.Eff.Console (CONSOLE)
import Control.Monad.Eff.Ref (Ref, REF, modifyRef)
import Node.Buffer (Buffer, BUFFER, toString)
import Node.Encoding (Encoding(UTF8))
import Node.Stream (onData)

-- | Takes a mutable Ref of type String which will accumulate data from a Readable
-- | Buffer. On each data event, accumulate the Buffer contents to this ref
onDataSave :: forall e
            . Ref String
           -> Buffer
           -> Eff (ref :: REF, buffer :: BUFFER, console :: CONSOLE | e) Unit
onDataSave ref buff = do
  bdata <- toString UTF8 buff
  modified <- modifyRef ref \current -> current <> bdata
  --log $ "Data as of now:\n" <> modified
  pure unit

A reader of this function knows it is impure because the return type is an Eff (which has the effects that it uses a mutable reference, a mutable buffer, and an IO console) and which returns Unit. If you ask, "Well, what if I simply didn't declare the type to return Eff?" then that is impossible. The compiler knows, for example, that since I am using a Ref and a Buffer, those are effectful types. Since the compiler enforces and keeps track of the (im)purity of functions for you, you will know what parts of your program are pure or not. In fact, PureScript, like Haskell, uses type inference, and had I failed to declare the type of onDataSave, the PureScript compiler would have inferred the type for me and would have indicated what effects it had! Hopefully, this simple example shows you that even pure FP languages can handle the impure and stateful world.

Pros and Cons of FP

One may ask what benefits programming in this style brings. As I have already mentioned, writing in a pure FP style is:

- safer, because pure functions are automatically thread safe.
- easier to reason about with pure functions, since you do not need to be aware of any hidden implicit state that can be changed under your feet
- easier to test, because pure functions do not need mocks to simulate or eliminate side effects
- helpful to know what functions are impure, so it will be more obvious where QA should spend the majority of its testing resources
- safer to compose (in typed pure FP), because you will know that the output of one function is compatible with the input of another, and there are no locks to order appropriately

This is why I recommend that everyone should learn a pure functional language. Even if you already know a "mostly" FP language like Scala, Clojure, or elixir, I encourage you to try out Haskell or PureScript. It won't be easy, but the effort will be paid back in learning how to solve problems in a different manner.

Are there downsides to FP? Generally speaking, FP can be slower than imperative programming. Mutating a data structure is faster than even persistent data structures. For example, most FP languages use special kinds of data structures that are immutable. If you want to "set" a key to a new value in a persistent map, you call a function that "sets" the key to the new value, and what is returned is a new map with the key set to that value, rather than mutating the map in-place. This sounds expensive, and it would be if you had to copy every single item in the map. The trick is that you only have to change what is necessary, leaving the rest untouched. But even this has logarithmic complexity versus an O(1) operation to mutate a structure in place. Also, as mentioned in the warning about the accumulated log value, you have to be careful thinking about what kind of data structure to use when accumulating values.

Also, FP often (but not always) uses currying, which is a way to represent functions of multiple arguments as functions that return functions. This can result in extra function call overhead.
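The non-destructive "set" just described can be sketched in plain JavaScript. Note that this copy-on-write sketch is O(n) per update; real persistent structures (as in Clojure or Immutable.js) use structural sharing to get the logarithmic bounds mentioned above. The setKey helper name is made up for illustration:

```javascript
// A non-mutating "set": returns a new frozen object and leaves the input untouched.
// The observable behavior matches a persistent map's assoc/set operation, even
// though the implementation here naively copies instead of sharing structure.
const setKey = (map, key, value) => Object.freeze({ ...map, [key]: value });

const m0 = Object.freeze({ a: 1 });
const m1 = setKey(m0, 'b', 2);

console.log(m0); // { a: 1 }        -- the original value is unchanged
console.log(m1); // { a: 1, b: 2 }  -- the "updated" map is a brand new value
```

Because m0 is never mutated, any thread or caller holding a reference to it can keep using it safely, which is exactly the thread-safety benefit listed above.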
"But I thought you said one of FP's benefits is concurrency? Won't that make FP faster?" My not-so-simple answer is... it depends. If your data can be split up without any kind of dependency or sharing, then yes, the speed up will be dramatic. For example, working with matrices will be a breeze, where you can split up the matrices into separate vectors and have them operated on independently. But for other kinds of workloads, waiting for data to "settle down" will cause some contention (as, for example, in how STM systems work), as the process has to retry any computations where data has been changed. There will probably be some speed up, but will it be enough to counter the inherent speed advantage that imperative languages have? I'd say you won't really know until you try. Just be mindful that FP doesn't necessarily bring automatic performance advantages. But what it does bring is concurrency that is easier to reason about and, more importantly, unlike locks or mutexes, concurrent functions in FP are easier to compose with each other (because there are no locks, the ordering of acquiring/releasing locks does not matter, which is a big reason you can't compose functions utilizing locks easily).

All things have their pros and cons, but I believe that pure FP is the way to go to solve tomorrow's problems. It makes concurrency and parallelism easier (though still only in comparison to imperative style), the separation of pure and impure functions makes it easier to know where to spend your QA "budget," and pure functions are easier to reason about because there are no hidden implicit factors to think about.

Opinions expressed by DZone contributors are their own.
https://dzone.com/articles/functional-programming-is-not-what-you-probably-th
CC-MAIN-2019-30
refinedweb
3,249
60.14
Red Hat Bugzilla – Bug 61226

Code produced by g++3 segfaults leaving no corefile

Last modified: 2007-04-18 12:40:57 EDT

Description of Problem:
After updating my gcc3 suite to 3.0.4-1, one of my programs has started to segfault immediately after starting. It doesn't even seem to enter my code, but crashes before main() and leaves no core.

Version-Release number of selected component (if applicable): 3.0.4-1 (Newest update from redhat)

How Reproducible: Always

Steps to Reproduce:
1. Compile program using g++3
2. Run program

Actual Results: The program segfaults instantly

Expected Results: The program should work

Additional Information:

Created attachment 48644 [details] Output from trying to run the program in a GDB session

I find that Hello World will segfault if linked static, but dynamic is OK:

#include <iostream>
#include <stdexcept>

using std::cout;

main() {
    cout << "hello world\n";
}

I consider this very high priority.

Fixed in gcc 3.2.
https://bugzilla.redhat.com/show_bug.cgi?id=61226
CC-MAIN-2016-40
refinedweb
160
53.31
Function pointer comparison becomes significant when we want to figure out whether two function pointers point to the same function. It is also important in one more use case, where we compare a function pointer with the NULL pointer to check whether the pointer has been assigned a function name or not. Pointer arithmetic and relational operations are not applied to function pointers, because they would serve no semantic purpose: there is no use case for adding one function pointer to another, or for checking whether one function pointer is less than another.

Two function pointers can be compared with the == and != operators, just like any other kind of pointers. We can also compare a function pointer to the NULL pointer using the == and != operators. Function pointer comparisons with the <, <=, >, and >= operators yield undefined behaviour if the two pointers are unequal.

The following C program demonstrates how to compare two function pointers to test whether they are equal or unequal to each other.

#include <stdio.h>

int retMax (int n1, int n2)
{
    return (n1 > n2) ? n1 : n2;
}

int main ()
{
    //declare two function pointers
    int (*ptrMaxFunctin)(int, int);
    int (*ptrMaxFunctin2)(int, int);

    //assign function address to pointer
    ptrMaxFunctin = retMax;

    // assign NULL to function pointer
    ptrMaxFunctin2 = NULL;

    if (ptrMaxFunctin2 == NULL) {
        printf("ptrMaxFunctin2 has not been assigned yet.\n");
    }

    if (ptrMaxFunctin2 != ptrMaxFunctin) {
        printf("ptrMaxFunctin2 and ptrMaxFunctin do not point to the same function.\n");
    } else {
        printf("ptrMaxFunctin2 and ptrMaxFunctin point to the same function.\n");
    }

    ptrMaxFunctin2 = retMax;

    if (ptrMaxFunctin2 == ptrMaxFunctin) {
        printf("ptrMaxFunctin2 and ptrMaxFunctin point to the same function.\n");
    } else {
        printf("ptrMaxFunctin2 and ptrMaxFunctin do not point to the same function.\n");
    }

    int qty1 = 20, qty2 = 50;
    printf ("Max of %d and %d is : %d \n", qty1, qty2, (*ptrMaxFunctin)(qty1, qty2));
    return 0;
}

OUTPUT
======
ptrMaxFunctin2 has not been assigned yet.
ptrMaxFunctin2 and ptrMaxFunctin do not point to the same function.
ptrMaxFunctin2 and ptrMaxFunctin point to the same function.
Max of 20 and 50 is : 50

Can a function pointer store the address of any function? Yes, it can. This is the purpose of casting function pointers, just like ordinary pointers. We can cast a function pointer to another function pointer type, but we cannot call a function through the cast pointer if that pointer type is not compatible with the function to be called. If a function pointer is cast to a function pointer of a different type and then called, the result is undefined behaviour. In other words, the behaviour is undefined if a function pointer is used to call a function whose type is not compatible with the pointed-to type. In conclusion, if we cast a function pointer to a different function pointer type, we have to cast it back to the original type before calling the function. Here, read more on Function Pointers in C.
Hope you have enjoyed reading about function pointer comparison and type-casting.
http://cs-fundamentals.com/tech-interview/c/how-to-compare-and-cast-function-pointers.php
CC-MAIN-2017-17
refinedweb
488
54.32
Multi-Word Token (MWT) Expansion

Table of contents

Description

The Multi-Word Token (MWT) expansion module can expand a raw token into multiple syntactic words, which makes it easier to carry out Universal Dependencies analysis in some languages. This is handled by the MWTProcessor in Stanza, and can be invoked with the name mwt. The token upon which an expansion will be performed is predicted by the TokenizeProcessor, before the invocation of the MWTProcessor. For more details on why MWT is necessary for Universal Dependencies analysis, please visit the UD tokenization page.

Options

Example Usage

The MWTProcessor only requires the TokenizeProcessor to be run before it. After these two processors have processed the text, the Sentences will have lists of Tokens and corresponding syntactic Words based on the multi-word-token expander model. The list of tokens for a sentence sent can be accessed with sent.tokens, and its list of words with sent.words. Similarly, the list of words for a token token can be accessed with token.words.

Accessing Syntactic Words for Multi-Word Token

Here is an example of a piece of text in French that requires multi-word token expansion, and how to access the underlying words of these multi-word tokens:

import stanza

nlp = stanza.Pipeline(lang='fr', processors='tokenize,mwt')
doc = nlp('Nous avons atteint la fin du sentier.')
for token in doc.sentences[0].tokens:
    print(f'token: {token.text}\twords: {", ".join([word.text for word in token.words])}')

As a result of running this code, we see that the word du is expanded into its underlying syntactic words, de and le.

token: Nous      words: Nous
token: avons     words: avons
token: atteint   words: atteint
token: la        words: la
token: fin       words: fin
token: du        words: de, le
token: sentier   words: sentier
token: .         words: .
Accessing Parent Token for Word

When performing word-level annotations and processing, it might sometimes be useful to access the token a given word is derived from, so that we can access information associated with that token, such as its character offsets. Here is an example of how to do that with Word's parent property, using the same sentence we just saw:

import stanza

nlp = stanza.Pipeline(lang='fr', processors='tokenize,mwt')
doc = nlp('Nous avons atteint la fin du sentier.')
for word in doc.sentences[0].words:
    print(f'word: {word.text}\tparent token: {word.parent.text}')

As one can see in the result below, the words de and le have the same parent token, du.

word: Nous       parent token: Nous
word: avons      parent token: avons
word: atteint    parent token: atteint
word: la         parent token: la
word: fin        parent token: fin
word: de         parent token: du
word: le         parent token: du
word: sentier    parent token: sentier
word: .          parent token: .

Training-Only Options

Most training-only options are documented in the argument parser of the MWT expander.
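Stanza's expander is a trained model, but conceptually a multi-word token expander is just a mapping from surface tokens to lists of syntactic words, with each word remembering its parent token. The following sketch is purely illustrative and is NOT stanza's API; the lookup table and function name are made up:

```python
# Conceptual sketch of multi-word token expansion (not stanza's API):
# a lookup table from surface tokens to their syntactic words, where
# each expanded word keeps a reference to its parent token.
CONTRACTIONS = {
    "du": ["de", "le"],     # French contraction of de + le
    "aux": ["à", "les"],    # French contraction of à + les
}

def expand(tokens):
    """Return (word, parent_token) pairs, expanding known contractions."""
    pairs = []
    for token in tokens:
        for word in CONTRACTIONS.get(token, [token]):
            pairs.append((word, token))
    return pairs

print(expand(["la", "fin", "du", "sentier"]))
# [('la', 'la'), ('fin', 'fin'), ('de', 'du'), ('le', 'du'), ('sentier', 'sentier')]
```

This mirrors the output shown above: every word maps back to exactly one parent token, while a token like du can map to several words.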
https://stanfordnlp.github.io/stanza/mwt.html
CC-MAIN-2021-25
refinedweb
479
53.81
> Hi dudes, I am designing a game with 3 levels. For example, if we have 10 enemies in our first level, once it's completed we have to switch to the next level, and so on. My doubt is in the coding part: once I complete my task by destroying my enemies in Level 1, how can I jump to the next level? I can load my next level by using Application.LoadLevel("name of scene"), but the thing is how to identify that my enemies are destroyed, so the next level is loaded automatically. Please help me guys.

I'm making a breakout game and I don't know how to get it to go straight to the next level after I have finished the one I'm on. (By any chance could I get a step by step solution with scripts plz.)

Answer by robertmathew · Apr 01, 2011 at 07:41 AM

// Loads the level with index 0
Application.LoadLevel (0);
// Load the level named "HighScore".
Application.LoadLevel ("HighScore");

This function loads a level by its index. You can see the indices of all levels using the File->Build Settings... menu in Unity. Before you can load a level you have to add it to the list of levels used in the game. Use File->Build Settings... in Unity and add the levels you need to the level list there.

Thank you Buddy! My doubt is how to detect that my enemies are destroyed, so that once it is detected that my enemies in Level 1 are completely destroyed, it has to load the next level.

Answer by sriram90 · Apr 01, 2011 at 07:33 AM

Hey, do you have all those levels in different scenes? If you have them as different scenes, first add all those scenes when you build the project. When you hit Build it shows a button named ADD CURRENT. First just hit it and add all the levels of your project. After you add them, they will appear like this:

Scene1 Name - 0
Scene2 Name - 1
Scene3 Name - 2

If you want to move to the 2nd scene after killing those enemies, just use Application.LoadLevel(1), and to move to the 3rd scene use Application.LoadLevel(2). See this link. If you're not using different scenes you can't use Application.LoadLevel. Simple, enjoy.

Thank you Buddy! My doubt is how to detect that my enemies are destroyed, so that once it is detected it has to load the next level.

Just use counts... consider you're having 10 enemies when the game starts... decrease that count when an enemy is destroyed... finally, if the count reaches 0, move to the next level... that's it... easy to code...

Answer by e.bonneville · Apr 01, 2011 at 11:05 AM

Well, in order to detect when all the enemies are destroyed (in order to load the next level), you'll need to keep some kind of counter to keep track of how many enemies are left. That way, when the counter reaches zero, you know you have no enemies left, and you can advance to the next level. I recently gave an answer to a very similar question here. Since the script is almost exactly what you need, just replace "cannon" with "enemy" and you should be good to go. Oh, and you'll also need a way to keep track of what level you're on, so that you can advance to the next one.

Thank you Elliot. Let me try with this, and if I have any doubt I will get back to you.

Answer by patrick · Aug 31, 2013 at 01:23 PM

function Start()
{
    Application.LoadLevel("levelName"); // switches to this level name on start
    // OR you can use this
    Application.LoadLevel (0); // which loads the scene in the index order
}

Answer by Calum1015 · May 02, 2015 at 06:11 AM

This is a very old question, but I will answer anyway to help anyone struggling to understand as of yet. Here is a list of steps.
When you hit build it shows a button named ADD CURRENT. First you just hit it and add all those levels of your project. After you add them it'll appear like this:

Scene1 Name - 0
Scene2 Name - 1
Scene3 Name - 2

You'll get it like this. If you want to move to the 2nd scene after killing those enemies, just use Application.LoadLevel(1), and to move to the 3rd scene use Application.LoadLevel(2). see this link. If you're not using different scenes you can't use Application.LoadLevel. Simple and enjoy.

Thank you Buddy! My doubt is how to detect that my enemies are destroyed so that once it's detected it has to load the next level.

just use counts....consider you're having 10 enemies when the game starts...decrease that count when an enemy is destroyed...finally if the count reaches 0, move to the next level...thats it...easy to make the code...

Answer by e.bonneville · Apr 01, 2011 at 11:05 AM

Well, in order to detect when all the enemies are destroyed (in order to load the next level), you'll need to keep some kind of counter to keep track of how many enemies are left. That way, when the counter reaches zero, you know you have no enemies left, and you can advance to the next level. I recently gave an answer to a very similar question here. Since the script is almost exactly what you need, just replace "cannon" with "enemy" and you should be good to go. Oh, and you'll also need a way to keep track of what level you're on, so that you can advance to the next one.

Thank you Elliot. Let me try with this and if I have any doubt I will get back to you.

Answer by patrick · Aug 31, 2013 at 01:23 PM

function Start() {
    Application.LoadLevel("levelName"); // switches to this level name on start
    // OR you can use this
    Application.LoadLevel (0); // which loads the scene in the index order
}

Answer by Calum1015 · May 02, 2015 at 06:11 AM

This is a very old question, but I will answer anyway to help anyone still struggling to understand. Here is a list of steps.
1. Add a few variables to your script (in this case a Boolean, a Float, and a second Float).
2. Use the Boolean value to detect if all the enemies in a level are killed.
3. Use an if statement to find out if all the enemies are killed, using one of the float values. (This will probably make more sense in the code provided.)
4. Load a level if all enemies are killed.
5. Repeat steps 2-4 in levels 2 and 3.

With that second float I told you to add, you're going to keep a count of enemies killed. The script is in C#.

using UnityEngine;
using System.Collections;

public class switchLevel : MonoBehaviour
{
    public bool allEnemiesKilled;
    public float enemiesInLevel = 10.0f;
    public float enemiesKilled;

    void Awake ()
    {
        DontDestroyOnLoad(this); // Ensures we don't lose our variables; PlayerPrefs works well for this too
    }

    void Update ()
    {
        if (enemiesKilled >= enemiesInLevel) // if enemies killed is greater than or equal to the amount of enemiesInLevel
        {
            allEnemiesKilled = true;
        }
        if (allEnemiesKilled == true) // if allEnemiesKilled is true, then load the level specified
        {
            Application.LoadLevel("yourSecondLevel");
        }
    }
}
https://answers.unity.com/questions/55787/how-to-switch-to-next-level-after-completeing-the.html
Introduction to Windows Presentation Foundation

WPF as the New GUI

Before we dive into WPF proper, it is interesting to consider where we're coming from.

User32, à la Charles Petzold

Anyone programming to User32 has, at some point, read one of Petzold's "Programming Windows" books. They all start with an example something like this:

#include <windows.h>

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wparam, LPARAM lparam);

INT WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR cmdline, int cmdshow)
{
    MSG msg;
    HWND hwnd;
    WNDCLASSEX wndclass = { 0 };

    wndclass.cbSize = sizeof(WNDCLASSEX);
    wndclass.style = CS_HREDRAW | CS_VREDRAW;
    wndclass.lpfnWndProc = WndProc;
    wndclass.hIcon = LoadIcon(NULL, IDI_APPLICATION);
    wndclass.hCursor = LoadCursor(NULL, IDC_ARROW);
    wndclass.hbrBackground = (HBRUSH)GetStockObject(WHITE_BRUSH);
    wndclass.lpszClassName = TEXT("Window1");
    wndclass.hInstance = hInstance;
    wndclass.hIconSm = LoadIcon(NULL, IDI_APPLICATION);

    RegisterClassEx(&wndclass);

    hwnd = CreateWindow(TEXT("Window1"), TEXT("Hello World"), WS_OVERLAPPEDWINDOW,
                        CW_USEDEFAULT, 0, CW_USEDEFAULT, 0, NULL, NULL, hInstance, NULL);
    if( !hwnd )
        return 0;

    ShowWindow(hwnd, SW_SHOWNORMAL);
    UpdateWindow(hwnd);

    while( GetMessage(&msg, NULL, 0, 0) )
    {
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
    return msg.wParam;
}

This is "Hello World" when talking to User32. There are some very interesting things going on here. A specialized type (Window1) is first defined by the calling of RegisterClassEx, then instantiated (CreateWindow) and displayed (ShowWindow). Finally, a message loop is run to let the window receive user input and events from the system (GetMessage, TranslateMessage, and DispatchMessage). This program is largely unchanged from the original introduction of User back in the Windows 1.0 days. Windows Forms took this complex programming model and produced a clean managed object model on top of the system, making it far simpler to program.
Hello World can be written in Windows Forms with ten lines of code:

using System.Windows.Forms;
using System;

class Program
{
    [STAThread]
    static void Main()
    {
        Form f = new Form();
        f.Text = "Hello World";
        Application.Run(f);
    }
}

A primary goal of WPF is to preserve as much developer knowledge as possible. Even though WPF is a new presentation system completely different from Windows Forms, we can write the equivalent program in WPF with very similar code (changes are in boldface):

using System.Windows;
using System;

class Program
{
    [STAThread]
    static void Main()
    {
        Window f = new Window();
        f.Title = "Hello World";
        new Application().Run(f);
    }
}

In both cases the call to Run on the Application object is the replacement for the message loop, and the standard CLR (Common Language Runtime) type system is used for defining instances and types. Windows Forms is really a managed layer on top of User32, and it is therefore limited to only the fundamental features that User32 provides. User32 is a great 2D widget platform. It is based on an on-demand, clip-based painting system; that is, when a widget needs to be displayed, the system calls back to the user code (on demand) to paint within a bounding box that it protects (with clipping). The great thing about clip-based painting systems is that they're fast; no memory is wasted on buffering the content of a widget, nor are any cycles wasted on painting anything but the widget that has been changed. The downsides of on-demand, clip-based painting systems relate mainly to responsiveness and composition. In the first case, because the system has to call back to user code to paint anything, often one component may prevent other components from painting. This problem is evident in Windows when an application hangs and goes white, or stops painting correctly.
In the second case, it is extremely difficult to have a single pixel affected by two components, yet that capability is desirable in many scenarios (for example, partial opacity, anti-aliasing, and shadows). With overlapping Windows Forms controls, the downsides of this system become clear (Figure 1.1). When the controls overlap, the system needs to clip each one. Notice the gray area around the word linkLabel1 in Figure 1.1.

Figure 1.1 Windows Forms controls overlapping. Notice that each control obscures the others.

WPF is based on a retained-mode composition system. For each component a list of drawing instructions is maintained, allowing the system to automatically render the contents of any widget without interacting with user code. In addition, the system is implemented with a painter's algorithm, which ensures that overlapping widgets are painted from back to front, allowing them to paint on top of each other. This model lets the system manage the graphics resource, in much the same way that the CLR manages memory, to achieve some great effects. The system can perform high-speed animations, send drawing instructions to another machine, or even project the display onto 3D surfaces, all without the widget being aware of the complexity. To see these effects, compare Figures 1.1 and 1.2. In Figure 1.2 the opacity on all the WPF controls is set so that they're partially transparent, even to the background image.

Figure 1.2 WPF controls overlapping, with opacity set to semitransparency. Notice that all the controls compositing together are visible, including the background image.

WPF's composition system is, at its heart, a vector-based system, meaning that all painting is done through a series of lines. Figure 1.3 shows how vector graphics compare to traditional raster graphics.

Figure 1.3 Comparing vector and raster graphics. Notice that zooming in on a vector graphic does not reduce its crispness.
The system also supports complete transform models, with scale, rotation, and skew. As Figure 1.4 shows, any transformation can be applied to any control, producing bizarre effects even while keeping the controls live and usable.

Figure 1.4 WPF controls with a variety of transformations applied. Despite the transformations, these controls remain fully functional.

Note that when User32 and GDI32 were developed, there was really no notion of container nesting. The design principle was that a flat list of children existed under a single parent window. The concept worked well for the simple dialogs of the 1990s, but today's complex user interfaces require nesting. The simplest example of this problem is the GroupBox control. In the User32 design, GroupBox is behind controls but doesn't contain them. Windows Forms does support nesting, but that feature has revealed many problems with the underlying User32 model of control. In WPF's composition engine, all controls are contained, grouped, and composited. A button in WPF is actually made up of several smaller controls. This move to embrace composition, coupled with a vector-based approach, enables any level of containment (Figure 1.5).

Figure 1.5 WPF controls are built out of composition and containment. The button shown here contains both text and an image.

To really see the power of this composition, examine Figure 1.6. At the maximum zoom shown, the entire circle represents less than a pixel on the original button. The button actually contains a vector image that contains a complete text document that contains a button that contains another image.

Figure 1.6 The power of composition, as revealed by zooming in on the composite button shown in Figure 1.5

In addition to addressing the limitations of User32 and GDI32, one of WPF's goals was to bring many of the best features from the Web programming model to Windows developers.

HTML, a.k.a.
the Web

One of the biggest assets of Web development is a simple entry to creating content. The most basic HTML "program" is really nothing more than a few HTML tags in a text file:

<html>
  <head>
    <title>Hello World</title>
  </head>
  <body>
    <p>Welcome to my document!</p>
  </body>
</html>

In fact, all of these tags can be omitted, and we can simply create a file with the text "Welcome to my document!", name it <something>.html, and view it in a browser (Figure 1.7). This amazingly low barrier to entry has made developers out of millions of people who never thought they could program anything.

Figure 1.7 Displaying a simple HTML document in Internet Explorer

In WPF we can accomplish the same thing using a new markup format called XAML (Extensible Application Markup Language), pronounced "zammel." Because XAML is a dialect of XML, it requires a slightly stricter syntax. Probably the most obvious requirement is that the xmlns directive must be used to associate the namespace with each tag:

<FlowDocument xmlns=''>
  <Paragraph>Welcome to my document!</Paragraph>
</FlowDocument>

You can view the file by double-clicking <something>.xaml (Figure 1.8).

Figure 1.8 Displaying a WPF document in Internet Explorer

Of course, we can leverage all the power of WPF in this simple markup. We can trivially implement the button display from Figure 1.5 using markup, and display it in the browser (Figure 1.9).

Figure 1.9 Displaying a WPF document in Internet Explorer using controls and layout from WPF

One of the big limitations of the HTML model is that it really only works for creating applications that are hosted in the browser. With XAML markup, either we can use it in a loose markup format and host it in the browser, as we have just seen, or we can compile it into an application and create a standard Windows application using markup (Figure 1.10):

Figure 1.10 Running an application authored in XAML. The program can be run in a top-level window or hosted in a browser.
<Window xmlns='' Title='Hello World!'>
  <Button>Hello World!</Button>
</Window>

Programming capability in HTML comes in three flavors: declarative, scripting, and server-side. Declarative programming is something that many people don't think of as programming. We can define behavior in HTML with simple markup tags like <form /> that let us perform actions (generally posting data back to the server). Script programming lets us use JavaScript to program against the HTML Document Object Model (DOM). Script programming is becoming much more fashionable because enough browsers now support a common scripting model to make scripts run everywhere. Server-side programming lets us write logic on the server that interacts with the user (on the Microsoft platform, that means ASP.NET programming). ASP.NET provides a very nice way to generate HTML content. Using repeaters, data binding, and event handlers, we can write simple server-side code to create simple applications. One of the more trivial examples is simple markup injection:

<%@ Page %>
<html>
  <body>
    <p><%=DateTime.Now().ToString()%></p>
  </body>
</html>

The real power of ASP.NET comes in the rich library of server controls and services. Using a single control like DataGrid, we can generate reams of HTML content; and with services like membership we can create Web sites with authentication easily. The big limitation of this model is the requirement to be online. Modern applications are expected to run offline or in occasionally connected scenarios. WPF takes many of the features from ASP.NET (repeaters and data binding, for example) and gives them to Windows developers with the additional ability to run offline. One of the primary objectives of WPF was to bring together the best features of both Windows development and the Web model. Before we look at the features of WPF, it is important to understand the new programming model in the .NET Framework 3.0: XAML.
https://www.informit.com/articles/article.aspx?p=713190
Hello friends, welcome back to my blog. Today in this blog post, I am going to tell you, How to fetch and show api json data in reactjs application?

For reactjs new comers, please check the below link:

Friends now I proceed onwards and here is the working code snippet for How to fetch and show api json data in reactjs application? and please use this carefully to avoid the mistakes:

1. Firstly friends we need fresh reactjs setup and for that we need to run below commands into our terminal and also we should have latest node version installed on our system:

npx create-react-app reactapidata
cd reactapidata
npm start // run the project

2. Now we need to run below command to get the axios module into our react js app:

npm install axios --save

3. Finally friends we need to add below code into our src/App.js file to get final output on web browser:

import React from 'react';
import axios from 'axios';

class App extends React.Component {
  constructor(props) {
    super(props);
    this.state = {
      data: []
    };
  }

  componentDidMount() {
    // Get all users' details for the table
    axios.get('APIURL').then(res => {
      // Storing users' details in the state array object
      this.setState({ data: res.data });
    });
  }

  render() {
    return (
      <div className="maincontainer">
        <table>
          <thead>
            <tr>
              <th>ID</th>
              <th>Username</th>
            </tr>
          </thead>
          <tbody>
            {this.state.data.map((result) => {
              return (
                <tr key={result.id}>
                  <td>{result.id}</td>
                  <td>{result.name}</td>
                </tr>
              );
            })}
          </tbody>
        </table>
      </div>
    );
  }
}

export default App;

Now we are done friends. If you have any kind of query or suggestion or any requirement then feel free to comment below.
https://therichpost.com/how-to-fetch-and-show-api-json-data-in-reactjs-application/
wait, waitpid, waitid - wait for process to change state

Synopsis

#include <sys/types.h>
#include <sys/wait.h>

pid_t wait(int *status);
pid_t waitpid(pid_t pid, int *status, int options);
int waitid(idtype_t idtype, id_t id, siginfo_t *infop, int options);

Feature Test Macro Requirements for glibc (see feature_test_macros(7)):

waitid():
    _SVID_SOURCE || _XOPEN_SOURCE >= 500
        || _XOPEN_SOURCE && _XOPEN_SOURCE_EXTENDED
        || /* Since glibc 2.12: */ _POSIX_C_SOURCE >= 200809L

Description

All of these system calls are used to wait for state changes in a child of the calling process, and obtain information about the child whose state has changed. A state change is considered to be: the child terminated; the child was stopped by a signal; or the child was resumed by a signal.

The waitid() system call (available since Linux 2.6.9) provides more precise control over which child state changes to wait for. The idtype and id arguments select the child(ren) to wait for, as follows:

P_PID   wait for the child whose process ID matches id.
P_PGID  wait for any child whose process group ID matches id.
P_ALL   wait for any child; id is ignored.

The child state changes to wait for are specified by ORing one or more of flags (such as WEXITED, WSTOPPED, and WCONTINUED) in options.

Return Value

wait(): on success, returns the process ID of the terminated child; on error, -1 is returned.

Conforming To

SVr4, 4.3BSD, POSIX.1-2001.

Notes

A child that terminates, but has not been waited for, becomes a "zombie". The kernel maintains a minimal set of information about the zombie process in order to allow the parent to later perform a wait to obtain information about the child. If the parent terminates, its zombie children (if any) are adopted by init(8).

Linux Notes

In the Linux kernel, a kernel-scheduled thread is not a distinct construct from a process.

Example

The man page's example program demonstrates the use of fork(2) and waitpid(): it creates a child process, and if a command-line argument is supplied, the child terminates immediately, using the integer supplied on the command line as the exit status.
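Python's os module exposes these same calls, so the wait-for-termination pattern can be sketched in a few lines (a minimal illustration for POSIX systems, not the man page's own C example):

```python
import os

def spawn_and_wait(exit_code):
    """Fork a child that exits immediately with exit_code, then reap it.

    Mirrors the pattern described above: the parent blocks in waitpid()
    until the child changes state (here, termination), then decodes the
    raw status word with the W* macros.
    """
    pid = os.fork()
    if pid == 0:
        # Child: terminate immediately with the supplied exit status,
        # like the man page's example program does with its argument.
        os._exit(exit_code)
    # Parent: block until this specific child terminates.
    reaped_pid, status = os.waitpid(pid, 0)
    assert reaped_pid == pid        # waitpid returns the reaped child's PID
    assert os.WIFEXITED(status)     # the child terminated normally
    return os.WEXITSTATUS(status)   # recover the child's exit status

if __name__ == "__main__":
    print(spawn_and_wait(7))  # prints 7
```

Reaping the child with waitpid() is also what prevents it from lingering as a zombie, as the Notes section explains.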
http://manpages.sgvulcan.com/waitpid.2.php
Part 1 of this series looked at the basics of scripting Kernel-based Virtual Machine (KVM) using libvirt and Python. This installment uses the concepts developed there to build several utility applications and add a graphical user interface (GUI) into the mix. There are two primary options for a GUI toolkit that has Python bindings and is cross-platform. The first is Qt, which is now owned by Nokia; the second is wxPython. Both have strong followings and many open source projects on their lists of users. For this article, I focus on wxPython more out of personal preference than anything else. I start off with a short introduction to wxPython and the basics of proper setup. From there, I move on to a few short example programs, and then to integrating with libvirt. This approach should introduce enough wxPython basics for you to build a simple program, and then expand on that program to add features. Hopefully, you'll be able to take these concepts and build on them to meet your specific needs.

wxPython basics

A good place to start is with a few basic definitions. The wxPython library is actually a wrapper on top of the C++-based wxWidgets. In the context of creating a GUI, a widget is essentially a building block. Five independent widgets reside at the top-most level of the widget hierarchy: wx.Frame, wx.Dialog, wx.PopupWindow, wx.MDIParentFrame, and wx.MDIChildFrame. Most of the examples here are based on wx.Frame, as it essentially implements a single modal window. In wxPython, Frame is a class that you instantiate as is or inherit from to add or enhance the functionality. It's important to understand how widgets appear within a frame so you know how to place them properly. Layout is determined either by absolute positioning or by using sizers. A sizer is a handy tool that resizes widgets when the user changes the size of the window by clicking and dragging a side or corner. The simplest form of a wxPython program must have a few lines of code to set things up.
A typical main routine might look something like Listing 1.

Listing 1. A typical wxPython main routine

if __name__ == "__main__":
    app = wx.App(False)
    frame = MyFrame()
    frame.Show()
    app.MainLoop()

Every wxPython app is an instance of wx.App() and must instantiate it as shown in Listing 1. When you pass False to wx.App, it means "don't redirect stdout and stderr to a window." The next line creates a frame by instantiating the MyFrame() class. You then show the frame and pass control to app.MainLoop(). The MyFrame() class typically contains an __init__ function to initialize the frame with your widgets of choice. It is also where you would connect any widget events to their appropriate handlers. This is probably a good place to mention a handy debugging tool that comes with wxPython. It's called the widget inspection tool (see Figure 1) and only requires two lines of code to use. First, you have to import it with:

import wx.lib.inspection

Then, to use it, you simply call the Show() function:

wx.lib.inspection.InspectionTool().Show()

Clicking the Events icon on the menu toolbar dynamically shows you events as they fire. It's a really neat way to see events as they happen if you're not sure which events a particular widget supports. It also gives you a better appreciation of how much is going on behind the scenes when your application is running.

Figure 1. The wxPython widget inspection tool

Add a GUI to a command-line tool

Part 1 of this series presented a simple tool to display the status of all running virtual machines (VMs). It's simple to change that tool into a GUI tool with wxPython. The wx.ListCtrl widget provides just the functionality you need to present the information in tabular form. To use a wx.ListCtrl widget, you must add it to your frame with the following syntax:

self.list = wx.ListCtrl(frame, id, style=wx.LC_REPORT | wx.SUNKEN_BORDER)

You can choose from several different styles, including the wx.LC_REPORT and wx.SUNKEN_BORDER options previously used.
The first option puts the wx.ListCtrl into Report mode, which is one of four available modes. The others are Icon, Small Icon, and List. To add styles like wx.SUNKEN_BORDER, you simply use the pipe character ( |). Some styles are mutually exclusive, such as the different border styles, so check the wxPython wiki if you have any doubts (see Resources). After instantiating the wx.ListCtrl widget, you can start adding things to it, like column headers. The InsertColumn method has two mandatory parameters and two optional ones. First is the column index, which is zero-based, followed by a string to set the heading. The third is for formatting and should be something like LIST_FORMAT_CENTER, _LEFT, or _RIGHT. Finally, you can set a fixed width by passing in an integer or have the column automatically sized by using wx.LIST_AUTOSIZE. Now that you have the wx.ListCtrl widget configured, you can use the InsertStringItem and SetStringItem methods to populate it with data. Each new row in the wx.ListCtrl widget must be added using the InsertStringItem method. The two mandatory parameters specify where to perform the insert, with a value of 0 indicating at the top of the list and the string to insert at that location. InsertStringItem returns an integer indicating the row number of the inserted string. You can make a call to GetItemCount() for the list and use the return value for the index to append to the bottom, as Listing 2 shows. Listing 2. 
GUI version of the command-line tool

import wx
import libvirt

conn = libvirt.open("qemu:///system")

class MyApp(wx.App):
    def OnInit(self):
        frame = wx.Frame(None, -1, "KVM Info")
        id = wx.NewId()
        self.list = wx.ListCtrl(frame, id, style=wx.LC_REPORT | wx.SUNKEN_BORDER)
        self.list.Show(True)
        self.list.InsertColumn(0, "ID")
        self.list.InsertColumn(1, "Name")
        self.list.InsertColumn(2, "State")
        self.list.InsertColumn(3, "Max Mem")
        self.list.InsertColumn(4, "# of vCPUs")
        self.list.InsertColumn(5, "CPU Time (ns)")
        for i, id in enumerate(conn.listDomainsID()):
            dom = conn.lookupByID(id)
            infos = dom.info()
            pos = self.list.InsertStringItem(i, str(id))
            self.list.SetStringItem(pos, 1, dom.name())
            self.list.SetStringItem(pos, 2, str(infos[0]))
            self.list.SetStringItem(pos, 3, str(infos[1]))
            self.list.SetStringItem(pos, 4, str(infos[3]))
            self.list.SetStringItem(pos, 5, str(infos[2]))
        frame.Show(True)
        self.SetTopWindow(frame)
        return True

app = MyApp(0)
app.MainLoop()

Figure 2 shows the results of these efforts.

Figure 2. The GUI KVM info tool

You can enhance the appearance of this table. A noticeable improvement would be to resize the columns. You can do so either by adding the width = parameter to the InsertColumn call or by using one line of code, like this:

self.ListCtrl.SetColumnWidth(column, wx.LIST_AUTOSIZE)

The other thing you could do is add a sizer so that the controls resize with the parent window. You can do this with a wx.BoxSizer in a few lines of code. First, you create the sizer, and then you add the widgets to it that you want to resize along with the main window. Here's what that code might look like:

self.sizer = wx.BoxSizer(wx.VERTICAL)
self.sizer.Add(self.list, proportion=1, flag=wx.EXPAND | wx.ALL, border=5)
self.sizer.Add(self.button, flag=wx.EXPAND | wx.ALL, border=5)
self.panel.SetSizerAndFit(self.sizer)

The last call to self.panel.SetSizerAndFit() instructs wxPython to set the initial size of the pane based on the sizer's minimum size from the embedded widgets.
This helps to give your initial screen a reasonable size based on the content inside.

Control flow based on a user action

One of the nice things about the wx.ListCtrl widget is that you can detect when a user clicks a specific part of the widget and take some action based on that. This functionality allows you to do things like sort a column alphabetically in forward or reverse order based on the user clicking the column title. The technique to accomplish this uses a callback mechanism. You must provide a function to handle each action that you want to process by binding the widget and processing method together. You do so with the Bind method. Every widget has some number of events associated with it. There are also events associated with things like the mouse. Mouse events have names like EVT_LEFT_DOWN, EVT_LEFT_UP, and EVT_LEFT_DCLICK, along with the same naming convention for the other buttons. You could handle all mouse events by attaching to the EVT_MOUSE_EVENTS type. The trick is to catch the event in the context of the application or window you're interested in. When control passes to the event handler, it must perform the necessary steps to handle the action, and then return control to wherever it was prior to that. This is the event-driven programming model that every GUI must implement to handle user actions in a timely fashion. Many modern GUI applications implement multithreading to keep from giving the user the impression that the program isn't responding. I briefly touch on that later in this article. Timers represent another type of event that a program must potentially deal with. For example, you might want to perform a periodic monitoring function at a user-defined interval. You would need to provide a screen on which the user could specify the interval, and then launch a timer that would in turn fire an event when it expires. The timer expiration fires an event that you can use to activate a section of code.
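The launch-a-timer-that-fires-an-event idea just described can be sketched without wxPython, using only the standard library (wx.Timer delivers EVT_TIMER events to a bound handler in the same spirit; everything else below is a made-up illustration, not wx code):

```python
import threading

# Conceptual stand-in for the pattern described above: a handler is bound
# to a timer, and the timer's expiration "fires an event" by invoking the
# handler. In wxPython the equivalent would be wx.Timer plus an EVT_TIMER
# binding; here only the standard library is used.

fired = threading.Event()
ticks = []

def on_timer():
    # This plays the role of the bound EVT_TIMER handler: the section of
    # code activated by the timer expiration (e.g., polling VM status).
    ticks.append("poll VMs here")
    fired.set()

timer = threading.Timer(0.05, on_timer)  # user-defined interval, in seconds
timer.start()
fired.wait(timeout=2)  # block this demo until the timer has fired once

print(ticks)  # ['poll VMs here']
```

For a periodic monitor rather than a one-shot, the handler would simply schedule a fresh Timer before returning, mirroring the restart-the-timer behavior mentioned above.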
You might need to set or restart the timer, depending again on user preference. You could easily use this technique to develop a VM monitoring tool. Listing 3 provides a simple demo app with a button and static text lines. Using wx.StaticText is an easy way to output a string to the window. The idea is to click the button once to start a timer and record the start time. Clicking the button records the start time and changes the label to Stop. Clicking the button again fills in the stop time text box and changes the button back to Start.

Listing 3. Simple app with a button and static text

import wx
from time import gmtime, strftime

class MyForm(wx.Frame):
    def __init__(self):
        wx.Frame.__init__(self, None, wx.ID_ANY, "Buttons")
        self.panel = wx.Panel(self, wx.ID_ANY)
        self.button = wx.Button(self.panel, id=wx.ID_ANY, label="Start")
        self.button.Bind(wx.EVT_BUTTON, self.onButton)

    def onButton(self, event):
        if self.button.GetLabel() == "Start":
            self.button.SetLabel("Stop")
            strtime = strftime("%Y-%m-%d %H:%M:%S", gmtime())
            wx.StaticText(self, -1, 'Start Time = ' + strtime, (25, 75))
        else:
            self.button.SetLabel("Start")
            stptime = strftime("%Y-%m-%d %H:%M:%S", gmtime())
            wx.StaticText(self, -1, 'Stop Time = ' + stptime, (25, 100))

if __name__ == "__main__":
    app = wx.App(False)
    frame = MyForm()
    frame.Show()
    app.MainLoop()

Enhanced monitoring GUI

Now, you can add functionality to the simple monitoring GUI introduced earlier. There is one more piece of wxPython you need to understand before you have everything you need to create your app. Adding a check box to the first column of a wx.ListCtrl widget would make it possible to take action on multiple lines based on the status of the check box. You can do this by using what wxPython calls mixins. In essence, a mixin is a helper class that adds some type of functionality to the parent widget.
To add the check box mixin, simply use the following code to instantiate it:

listmix.CheckListCtrlMixin.__init__(self)

You can also take advantage of events to add the ability to select or clear all boxes by clicking the column title. Doing so makes it simple to do things like start or stop all VMs with just a few clicks. You need to write a few event handlers to respond to the appropriate events in the same way you changed the label on the button previously. Here's the line of code needed to set up a handler for the column click event:

self.Bind(wx.EVT_LIST_COL_CLICK, self.OnColClick, self.list)

wx.EVT_LIST_COL_CLICK fires when any column header is clicked. To determine which column was clicked, you can use the event.GetColumn() method. Here's a simple handler function for the OnColClick event:

def OnColClick(self, event):
    print "column clicked %d\n" % event.GetColumn()
    event.Skip()

The event.Skip() call is important if you need to propagate the event to other handlers. Although this need might not be apparent in this instance, it can be problematic when multiple handlers need to process the same event. There's a good discussion of event propagation on the wxPython wiki site, which has much more detail than I have room for here. Finally, add code to the two button handlers to start or stop all checked VMs. It's possible to iterate over the lines in your wx.ListCtrl and pull the VM ID out with just a few lines of code, as Listing 4 shows.

Listing 4.
Starting and stopping checked VMs

    #!/usr/bin/env python
    import wx
    import wx.lib.mixins.listctrl as listmix
    import libvirt

    conn = libvirt.open("qemu:///system")

    class CheckListCtrl(wx.ListCtrl, listmix.CheckListCtrlMixin,
                        listmix.ListCtrlAutoWidthMixin):
        def __init__(self, *args, **kwargs):
            wx.ListCtrl.__init__(self, *args, **kwargs)
            listmix.CheckListCtrlMixin.__init__(self)
            listmix.ListCtrlAutoWidthMixin.__init__(self)
            self.setResizeColumn(2)

    class MainWindow(wx.Frame):
        def __init__(self, *args, **kwargs):
            wx.Frame.__init__(self, *args, **kwargs)
            self.panel = wx.Panel(self)
            self.list = CheckListCtrl(self.panel, style=wx.LC_REPORT)
            self.list.InsertColumn(0, "Check", width=175)
            self.Bind(wx.EVT_LIST_COL_CLICK, self.OnColClick, self.list)
            self.list.InsertColumn(1, "Max Mem", width=100)
            self.list.InsertColumn(2, "# of vCPUs", width=100)
            for i, id in enumerate(conn.listDefinedDomains()):
                dom = conn.lookupByName(id)
                infos = dom.info()
                pos = self.list.InsertStringItem(i, dom.name())
                self.list.SetStringItem(pos, 1, str(infos[1]))
                self.list.SetStringItem(pos, 2, str(infos[3]))
            self.StrButton = wx.Button(self.panel, label="Start")
            self.Bind(wx.EVT_BUTTON, self.onStrButton, self.StrButton)
            self.sizer = wx.BoxSizer(wx.VERTICAL)
            self.sizer.Add(self.list, proportion=1,
                           flag=wx.EXPAND | wx.ALL, border=5)
            self.sizer.Add(self.StrButton, flag=wx.EXPAND | wx.ALL, border=5)
            self.panel.SetSizerAndFit(self.sizer)
            self.Show()

        def onStrButton(self, event):
            if self.StrButton.GetLabel() == "Start":
                num = self.list.GetItemCount()
                for i in range(num):
                    if self.list.IsChecked(i):
                        dom = conn.lookupByName(self.list.GetItem(i, 0).Text)
                        dom.create()
                        print "%d started" % dom.ID()

        def OnColClick(self, event):
            item = self.list.GetColumn(0)
            if item is not None:
                if item.GetText() == "Check":
                    item.SetText("Uncheck")
                    self.list.SetColumn(0, item)
                    num = self.list.GetItemCount()
                    for i in range(num):
                        self.list.CheckItem(i, True)
                else:
                    item.SetText("Check")
                    self.list.SetColumn(0, item)
                    num = self.list.GetItemCount()
                    for i in range(num):
                        self.list.CheckItem(i, False)
            event.Skip()

    app = wx.App(False)
    win = MainWindow(None)
    app.MainLoop()

There are two things to point out here with respect to the state of VMs in KVM:

- Running VMs show up when you use the listDomainsID() method from libvirt.
- To see non-running machines, you must use listDefinedDomains().

You just have to keep those two separate so that you know which VMs you can start and which you can stop.

Wrapping up

This article focused mainly on the steps needed to build a GUI wrapper using wxPython that in turn manages KVM with libvirt. The wxPython library is extensive and provides a wide range of widgets to enable you to build professional-looking GUI-based applications. This article just scratched the surface of its capabilities, but you'll hopefully be motivated to investigate further. Be sure to check the Resources to help get your application running.

Resources

- libvirt website: Check out the entire site for more information.
- Reference Manual for libvirt: Access the complete libvirt API reference manual.
- Python.org: Find more of the Python resources you need.
- wxPython.org: Get more about wxPython.
- wxPython wiki: Expand your knowledge through the many tutorials found here.
- developerWorks Open source zone: Find extensive how-to information, tools, and project updates to help you develop with open source technologies and use them with IBM products. Explore more Python-related articles.
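The running/defined split described above can be factored into one small helper. The FakeConn stub below is hypothetical and exists only to make the sketch self-contained; with real libvirt you would pass the object returned by libvirt.open("qemu:///system"), which exposes the same listDomainsID(), listDefinedDomains(), and lookupByID() methods.

```python
def partition_domains(conn):
    """Return (running, stopped) lists of domain names for a libvirt-style connection.

    listDomainsID() yields numeric IDs of running guests, while
    listDefinedDomains() yields names of defined-but-inactive guests.
    """
    running = [conn.lookupByID(dom_id).name() for dom_id in conn.listDomainsID()]
    stopped = list(conn.listDefinedDomains())
    return running, stopped


class FakeConn(object):
    """Hypothetical stand-in for a libvirt connection, for illustration only."""

    class _Dom(object):
        def __init__(self, name):
            self._name = name

        def name(self):
            return self._name

    def listDomainsID(self):
        return [1, 2]          # IDs of running guests

    def listDefinedDomains(self):
        return ["web2"]        # names of stopped guests

    def lookupByID(self, dom_id):
        return self._Dom("vm%d" % dom_id)


running, stopped = partition_domains(FakeConn())
```

Keeping the two lists separate is exactly what lets a button handler know which VMs it can start (the stopped list) and which it can stop (the running list).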
IRC log of prov on 2011-10-13

Timestamps are in UTC.

14:35:29 [RRSAgent] RRSAgent has joined #prov
14:35:29 [RRSAgent] logging to
14:35:31 [trackbot] RRSAgent, make logs world
14:35:31 [Zakim] Zakim has joined #prov
14:35:33 [trackbot] Zakim, this will be
14:35:33 [Zakim] I don't understand 'this will be', trackbot
14:35:34 [trackbot] Meeting: Provenance Working Group Teleconference
14:35:34 [trackbot] Date: 13 October 2011
14:35:35 [Luc] Zakim, this will be PROV
14:35:35 [Zakim] ok, Luc; I see SW_(PROV)11:00AM scheduled to start in 25 minutes
14:35:49 [Luc] Agenda:
14:36:02 [Luc] Chair: Luc Moreau
14:36:10 [Luc] Scribe: Daniel Garijo
14:36:18 [Luc] rrsagent, make logs public
14:36:22 [Luc] Topic: Admin
14:44:19 [dgarijo] dgarijo has joined #prov
14:46:52 [SamCoppens] SamCoppens has joined #prov
14:47:59 [Luc] @dgarijo, hi daniel, everything is set up for you
14:48:31 [dgarijo] Hi Luc, thanks a lot!
14:53:51 [pgroth] pgroth has joined #prov
14:55:12 [pgroth] pgroth has joined #prov
14:56:05 [satya] satya has joined #prov
14:56:06 [pgroth] pgroth has joined #prov
14:56:10 [khalidbelhajjame] khalidbelhajjame has joined #prov
14:57:10 [Zakim] SW_(PROV)11:00AM has now started
14:57:17 [Zakim] +??P4
14:57:20 [Zakim] +Luc
14:57:23 [Zakim] +??P6
14:57:29 [pgroth_] pgroth_ has joined #prov
14:57:30 [khalidbelhajjame] zakim, ??P4 is me
14:57:30 [Zakim] +khalidbelhajjame; got it
14:57:42 [dgarijo] Zakim, ??P6 is me
14:57:42 [Zakim] +dgarijo; got it
14:58:18 [Zakim] +[IPcaller]
14:58:22 [Curt] Curt has joined #prov
14:58:31 [pgroth_] Zkaim, +[IPCaller] is me
14:58:32 [smiles] smiles has joined #prov
14:58:47 [pgroth_] Zakim, [IPCaller] is me
14:58:47 [Zakim] +pgroth_; got it
14:58:48 [Zakim] +stain
14:59:04 [stain] finally recognized by the conference bridge!
14:59:12 [Zakim] +Curt_Tilmes
14:59:40 [Zakim] +[IPcaller]
14:59:53 [SamCoppens] SamCoppens has joined #prov
15:00:16 [Zakim] +bringert
15:00:21 [stain]
15:00:24 [Paolo] Paolo has joined #prov
15:00:54 [Zakim] + +329331aaaa
15:01:00 [Zakim] + +1.509.375.aabb
15:01:02 [Zakim] +??P38
15:01:11 [Paolo] zakim ??P38 is me
15:01:20 [Luc] zakim, who is here?
15:01:24 [Zakim] On the phone I see khalidbelhajjame, Luc, dgarijo, pgroth_, stain, Curt_Tilmes, [IPcaller], bringert, +329331aaaa, +1.509.375.aabb, ??P38
15:01:27 [Zakim] On IRC I see Paolo, SamCoppens, smiles, Curt, pgroth_, khalidbelhajjame, satya, dgarijo, Zakim, RRSAgent, Luc, MacTed, stain, trackbot, sandro
15:01:27 [SamCoppens] zakim, +329331aaaa is me
15:01:36 [Zakim] +SamCoppens; got it
15:01:52 [satya] zakim, [IPCaller] is me
15:02:03 [Zakim] +satya; got it
15:02:03 [dgarijo] Luc: look at the name of the formal model
15:02:31 [dgarijo] ... review of the agenda. Any other issues?
15:02:45 [Luc] PROPOSED: to accept the minutes of Oct 06 telecon
15:02:48 [satya] +1
15:02:49 [khalidbelhajjame] +1
15:02:50 [dgarijo] +1
15:02:50 [Curt] +1
15:02:51 [SamCoppens] +1
15:02:56 [stain] +1
15:02:56 [Paolo] +1
15:03:24 [Luc] ACCEPTED: the minutes of Oct 06 telecon
15:03:45 [dgarijo] Luc: no actions to review in the tracker.
15:03:54 [dgarijo] ... scribes still needed.
15:03:58 [Luc] TOPIC: Name for 'ex-formal model' document
15:04:23 [dgarijo] ... formal name was confusing
15:04:42 [stain] q+
15:04:47 [dgarijo] ... what are the proposals of the ontology?
15:05:01 [Luc] q?
15:05:07 [JimMcCusker] JimMcCusker has joined #prov
15:05:09 [stain] PROV Ontology (PROV-O)
15:05:11 [Zakim] +??P11
15:05:12 [dgarijo] stain: prov-ontology
15:05:12 [khalidbelhajjame] Prov-O
15:05:25 [Yogesh] Yogesh has joined #prov
15:05:26 [Luc] ack stain
15:05:34 [JimMcCusker] Zakim, ??P11 is JimMcCusker
15:05:40 [pgroth_] +q
15:05:42 [stain] lists current proposals
15:05:43 [dgarijo] Luc: any counterproposal?
15:05:51 [Zakim] +JimMcCusker; got it
15:05:55 [Zakim] -khalidbelhajjame
15:06:07 [Luc] ack pgr
15:06:23 [dgarijo] paul: is prov-o general enough? (there could be other serializations not in owl).
15:06:25 [Zakim] +[IPcaller]
15:06:39 [Zakim] +Yogesh_Simmhan
15:06:46 [dgarijo] ... for instance if we use riff.
15:06:52 [satya] @Paul: yes, I think that is a good point - hence we did not include OWL or OWL2 in name
15:07:02 [dgarijo] stian: should cover any technology in the future as well.
15:07:06 [Luc] q?
15:07:14 [ericstephan] ericstephan has joined #prov
15:08:02 [Luc] q?
15:08:06 [pgroth_] ok seems fine to me
15:08:11 [dgarijo] satya: ontology should be enough to adress the technology issues.
15:08:12 [stain] @Satya +1 (sound is not too bad, btw)
15:08:14 [khalidbelhajjame] For other technologies to which we want to map the provennce model, we can use PROV-ASN
15:08:24 [Luc] does look slightly strange.. is it 0 or O?
15:08:50 [dgarijo] Luc: comments: is ti 0 (zero) or O ?
15:08:55 [stain] or for the latest version
15:08:58 [dgarijo] s/ti/it
15:09:19 [dgarijo] Luc: at some point we'll have to tackle named graphs.
15:09:28 [stain] Stian: Not particularly bothered by o/0 - just esthetics.
15:09:30 [pgroth_] +q
15:09:35 [dgarijo] Luc: is it still right to talk about provenance ontology in that case?
15:10:03 [dgarijo] paul: by using "ontology" we could include everything under that
15:10:06 [Luc] ack pgr
15:10:07 [Zakim] -Yogesh_Simmhan
15:10:15 [Luc] sandro ??
15:10:48 [Luc] q?
15:11:05 [Paolo] prov-OM?
15:11:06 [dgarijo] Luc: if we decide this name, will we be able to change it in the future?
15:11:26 [dgarijo] ... prov-ONTO was also prpoposed
15:11:42 [pgroth_] +q
15:11:42 [dgarijo] stian: an ontology is already a model.
15:12:03 [dgarijo] stain: maybe the name should be similar to the paq document.
15:12:05 [Zakim] -[IPcaller]
15:12:13 [satya] @Paolo PAM :)
15:12:49 [stain] @dgarijo - no, I meant that access-and-query is not a model, perhaps an architecture :) I meant that all 3 names won't match up with "model"
15:12:54 [Luc] q?
15:12:57 [Luc] ack pg
15:12:58 [Zakim] +[IPcaller]
15:13:03 [dgarijo] paul: people on the mailing list say provenance ontology. prov-o captures it well. Maybe prov-sw, but is not a big issue
15:13:05 [Luc] PROPOSED: to adopt 'prov-o' as the short name for the PROV Ontology
15:13:12 [satya] +1
15:13:13 [JimMcCusker] +1
15:13:14 [dgarijo] +1
15:13:17 [Curt] +1
15:13:19 [khalidbelhajjame] +1
15:13:19 [SamCoppens] +1
15:13:20 [Paolo] +1
15:13:22 [stain] +1
15:13:22 [ericstephan] +1
15:13:28 [smiles] +1
15:13:35 [Luc] ACCEPTED: to adopt 'prov-o' as the short name for the PROV Ontology
15:13:49 [pgroth_] +q
15:13:55 [Luc] ack pg
15:14:01 [dgarijo] paul: how do we say it?
15:14:13 [satya] @Paul: like Bravo :)
15:14:17 [Curt] Rhymes with Bravo
15:14:26 [Luc] topic: PROV-DM FPWD
15:14:29 [satya] as Curt pointed out earlier ?
15:14:39 [dgarijo] Luc: date of release.
15:14:42 [kai] kai has joined #prov
15:14:55 [JimMcCusker] @Paul, I think that SKOS managed to avoid people knowing, I don't think it matters. I'm a skier, so I'll probably pronounce it like the city. :-)
15:14:58 [dgarijo] paolo: it is done. Have you got the url for it?
15:15:22 [Paolo]
15:15:52 [Paolo]
15:15:53 [dgarijo] ... the situation is that the document reflect the discussion on some of the issues (not all)
15:16:07 [Luc]
15:16:46 [dgarijo] ... some of these received enough discussion that has not been reflected in the document, but working on it. Other discussion (as in event), does not map directly to some of the issues.
15:17:09 [Zakim] -[IPcaller]
15:17:11 [dgarijo] ... what is going to happen next?
15:17:39 [dgarijo] Luc: try and prepare a timetable to see where we want to go in the next months
15:18:03 [Zakim] +[IPcaller]
15:18:13 [dgarijo] ...
bring proposals to vote for the next telecon, in order to be able to deliver the documents to the W3C on the dates
15:18:38 [dgarijo] paolo: where is the input is going to come from?
15:18:55 [pgroth_] the paq doesn't get to say anything ;-)
15:19:11 [dgarijo] ... already been interacting with other groups.
15:19:32 [Luc] q?
15:19:34 [satya] q+
15:19:43 [Luc] ack sat
15:20:14 [khalidbelhajjame] khalidbelhajjame has joined #prov
15:20:27 [dgarijo] satya: regarding the issues that I was trying to raise: the role of constraints. We were trying to model the constraints in the ontology.
15:20:39 [dgarijo] ... suggest set of constraints as best practices.
15:20:42 [Luc] q?
15:21:26 [dgarijo] ... how should we consider this?
15:21:43 [Luc] q?
15:21:44 [satya] ok thanks
15:21:44 [dgarijo] Luc: we should not address this rigtht now (not the right time9
15:21:56 [dgarijo] s/9/)
15:22:53 [Luc] but this important, and we need to have this debate within the WG
15:23:06 [dgarijo] paul: we have an official statement that weill go out to get responses from the people. Divulgation of the report.
15:23:24 [Luc] q?
15:23:53 [dgarijo] Luc: connetion of the task force report.
15:24:55 [dgarijo] eric: drafted some text with kai. Share some of it with Paul.
15:24:55 [khalidbelhajjame] would it be possible to share the text with everybody in the working group, just to see if ether are itmes that need to be discussed within the WG
15:25:08 [dgarijo] ... everyone is welcome to join the meeting
15:25:18 [Luc] q?
15:25:38 [kai] +1 (not on the phone today, just reading the minutes)
15:25:48 [Luc] topic: Towards prov-o document fpwd
15:25:51 [stain] is current draft
15:26:42 [dgarijo] satya: update on the document. 2 directions: 1) to meet some of the syntatic requirements (most of it done)
15:26:42 [khalidbelhajjame] Items to report from the Formal model (or ontology) group
15:26:44 [khalidbelhajjame] - Working on validating and making the html document compliant with the requirements.
15:26:46 [khalidbelhajjame] Satya identified the tasks need to be done for that purpose, and we assigned the list of tasks among ourselves.
15:26:48 [khalidbelhajjame] I believe that most of the issues has already been dealt with.
15:26:49 [khalidbelhajjame] - The OWL ontology was modified to be in line with the PROV-DM.
15:26:51 [khalidbelhajjame] - Similarily, the HTML document was updated to include classes and properties and be in line with the PROV-DM.
15:26:53 [khalidbelhajjame] - The Diagram representing the ontology was also updated. The diagrams representing the examples (Crime-File and workflow provenance) may still need to be modified to relect the new changes.
15:26:54 [khalidbelhajjame] - Satya is working on the constraints.
15:26:56 [khalidbelhajjame] - Any other thing Tim, Stian?
15:27:20 [dgarijo] stian: added namespaces and fixed images (relative paths vs external links to the wiki).
15:27:34 [dgarijo] ... did xhtml vs html4. We can do either.
15:27:36 [satya] @Khalid: nice documentation!
15:27:58 [Luc] q?
15:28:09 [dgarijo] satya: the second direction: we are aligned with the data model as much as possible.
15:28:22 [pgroth_] +q
15:28:29 [Luc] q?
15:28:33 [dgarijo] Luc: estimated release date?
15:29:07 [dgarijo] satya: last week we released one. We are continuing the process.
15:29:36 [dgarijo] ... right now can be considered as a released document.
15:30:11 [dgarijo] paul: wondering on the status on putting all the concepts of the datamodel into the ontology model
15:30:42 [khalidbelhajjame] There is a section at the end of the document that identifies the concepts (relations) that are not considered in the ontology yet
15:30:43 [Luc] and vice-versa!
15:30:43 [dgarijo] ... you could give an idea of which concepts are NOT yet defined in the ontology.
15:31:00 [dgarijo] @khalid, did we add the shortcuts too?
15:31:14 [khalidbelhajjame] @Daniel, not yet
15:31:43 [dgarijo] @khalid, then we should :) I'll give it a try afterwards.
15:31:50 [Luc] q?
15:31:53 [Luc] ack pg
15:32:00 [dgarijo] satya: will do that later
15:32:08 [khalidbelhajjame] @Daniel, yeas please go ahead, and we can discuss it in the next (Monday) telecon
15:32:38 [dgarijo] Luc: very important point.
15:32:40 [Luc] This document is part of a set of specifications aiming to define the various aspects that are necessary to achieve the visition on inter-operable interchange of provenance information in heterogeneous environments such as the Web. This document defines the PROV-DM data model for provenance, accompanied with a notation to express instances of that data model for human consumption. Two other documents, to be released shortly, are: 1) a normative serialization of PRO
15:33:12 [Paolo] visition -> vision :-)
15:33:15 [stain] : 1) a normative serialization of PROV-DM in RDF, specified by means of a mapping to the OWL2 Web Ontology Language; 2) the mechanisms for accessing and querying provenance.
15:33:16 [Luc] , specified by means of a mapping to the OWL2 Web Ontology Language; 2) the mechanisms for accessing and querying provenance.
15:33:45 [stain] @Paolo, I like visitations :) Kind of invitation-visits
15:34:14 [Luc] q?
15:34:24 [dgarijo] Luc: it should be clear to readers that all the concept on the data model document are taken in consideration in the ontology. And viceversa: if something is not on the datamodel, then it should be clear too
15:34:49 [dgarijo] satya: the owl ontology is modelling of the data model, not necesarilly a mapping.
15:35:18 [dgarijo] satya: Agrees with Luc, but some of the concepts may not be "encodable" in owl.
15:36:01 [dgarijo] ... we may not be able to map the model directly
15:36:07 [dgarijo] ... in to owl.
15:36:42 [Luc] q?
15:36:47 [dgarijo] Luc: we are going to serialize the model in RDF, that is the message sent to the semantic community.
15:37:23 [dgarijo] ... Ivan was raising is that the kind of the ontology. Is owl-dl? is it simpler?
15:37:31 [stain] I think it's OWL-Lite at the moment
15:37:50 [pgroth_] could you talk about that in the document
15:37:50 [stain] but depending on how much of the constraints we need to describe it might increase to DL
15:37:59 [dgarijo] satya: it should not be owl-full, since it is not decidable.
15:38:00 [Zakim] -[IPcaller]
15:38:23 [Luc] is it OWL2-RL?
15:38:26 [dgarijo] we haven't found anything that constraint us to anything more that owl dl
15:39:02 [dgarijo] we haven't specified any specific constraints yet
15:39:09 [Zakim] +??P4
15:39:29 [stain] it's closer to RDFS, plus owl:IrreflexiveProperty, etc
15:39:34 [satya] PROV-O has DL expressivity of ALR+
15:39:37 [Luc] q?
15:39:38 [dgarijo] Luc: share the documents to get feedback from Sandro, Ivan and others.
15:39:42 [pgroth_] q+
15:40:20 [Zakim] +[IPcaller]
15:40:30 [Luc] action on satya to ensure all terms of DM appear in prov-o document
15:40:30 [trackbot] Sorry, couldn't find user - on
15:41:08 [dgarijo] paul: sum up of what has been decided in the telecon
15:41:16 [satya] @Paul: lightweight?
15:41:41 [satya] q+
15:41:48 [Luc] action satya to ensure all terms of DM appear in prov-o document, to justify why other terms are introduced, and to explain lightweight nature of owl ontology
15:41:48 [trackbot] Created ACTION-40 - Ensure all terms of DM appear in prov-o document, to justify why other terms are introduced, and to explain lightweight nature of owl ontology [on Satya Sahoo - due 2011-10-20].
15:41:50 [pgroth_] q-
15:41:56 [StephenCresswell] StephenCresswell has joined #prov
15:42:00 [dgarijo] @Luc, thanks for recording it.
15:42:07 [pgroth_] q+
15:42:13 [Luc] ack sat
15:42:16 [dgarijo] satya: what do we mean by lightweight?
15:42:21 [satya] q-
15:42:45 [satya] @Paul :)
15:42:50 [Luc] ack pg
15:42:58 [Zakim] -[IPcaller]
15:43:23 [dgarijo] pgroth: (too much noise) when we say it's the rdf community means that not doing any crazy modelling of the ontology.
15:43:35 [Zakim] +??P8
15:43:57 [dgarijo] satya: stian has already demonstrated that we can use the ontology for some examples in different scenarios
15:44:03 [khalidbelhajjame] zakim, ??P8 is me
15:44:03 [Zakim] +khalidbelhajjame; got it
15:44:08 [stain] agree
15:44:22 [Luc] topic: Interoperability
15:44:24 [stain] to not scare away people who have been bitten by the massive-owl-full-ontologies
15:44:37 [pgroth_] @stain - exactly
15:44:50 [dgarijo] Luc: we will have to decide what we mean by interoperability
15:44:54 [Luc]
15:45:00 [dgarijo] ... no answers yet
15:45:23 [dgarijo] ... document with some ideas to see if it makes sense.
15:46:10 [dgarijo] ... the document sumarizes the charter and certain aspects: define the data model independant of any toechnology and mapping to certain technologies.
15:46:30 [dgarijo] ... given this, what do we mean by interoperability? Ennumerated 3 types.
15:46:58 [dgarijo] ... it is not a complete set of types of interoperability. It is just to start the discussion.
15:47:31 [dgarijo] ... the first one: not loose any information when changing the representation.
15:47:56 [satya] q+
15:48:18 [dgarijo] second one: different representation with different forms of inference. What you inferr in one representation is desireable to be inferred in the other one.
15:49:05 [dgarijo] ... thir one: similar to the provenance challenge objective: different provenance systems recording provenance in different formats: end to end interoperability through systems.
15:49:32 [dgarijo] ... terms in different vocaubaries: ns for the datamodel, ns for the ontology. Is it possible to reuse common urls?
15:49:58 [khalidbelhajjame] +q
15:49:59 [dgarijo] ... this might not be complete yet, but enough to initiate the debate.
15:50:05 [Luc] q?
15:50:08 [stain] what is declared in ? "role" and "type" only?
15:50:10 [dgarijo] ... opinions?
15:50:25 [dgarijo] satya: on what context are we discussing this?
15:50:37 [stain] s/prov/prov-dm/
15:51:23 [dgarijo] Luc: to reach the level of recommendation, we should have at least 2 different approaches that are interoperable
15:52:04 [ericstephan] ericstephan has joined #prov
15:52:32 [Luc] q?
15:52:36 [Luc] ack sat
15:52:43 [dgarijo] satya: data integration. If we are using RDF for the formato of data integration, are we saying that we are going to define a new set of technologies, or reuse the existant ones?
15:52:46 [satya] q-
15:53:06 [dgarijo] khalid: prov language as an interchange format
15:53:32 [Luc] q?
15:53:35 [dgarijo] ... what is expresable with our language that is not with other languages
15:53:37 [Luc] ack kha
15:53:46 [dgarijo] ... and what are our limitations?
15:53:55 [Luc] q?
15:54:05 [dgarijo] +q
15:54:17 [Luc] ack dg
15:54:30 [satya] sorry I have to leave now, will try to follow up on this discussion over mails
15:54:54 [Zakim] -bringert
15:55:10 [Luc] q?
15:55:32 [Zakim] -satya
15:55:51 [pgroth_] echo
15:55:53 [Paolo] q+
15:55:53 [stain] Zakim, who is making noise?
15:56:00 [pgroth_] no
15:56:01 [stain] mute please
15:56:04 [Zakim] stain, listening for 10 seconds I heard sound from the following: Luc (46%), dgarijo (45%), ??P38 (49%)
15:56:20 [stain] dgarijo: could you put it on IRC as well what you said?
15:56:50 [Luc] q?
15:56:54 [Luc] ack pao
15:57:02 [pgroth_] q+
15:58:00 [dgarijo] dgarijo: we should not restrict ourselves to the vocabularies that model provenance , but also show examples with some tools (Taverna, wings, etc). We should look for potential clients that can adopt our model.
15:58:05 [Luc] ack pgro
15:58:34 [dgarijo] pgroth: what do we mean by interoperable between 2 serializations?
15:59:05 [dgarijo] pgroth: how do we check that one serialization has used correctly the other one?
15:59:09 [ericstephan] q+
15:59:12 [dgarijo] ... build test cases
15:59:23 [Luc] ack eri
16:00:16 [dgarijo] eric: how introperable are you?
16:00:36 [stain] convert testcase format a1->b1->a2->b2->a3 - compare a2 and a1 manually, automatically compare b2==b1 and a2==a3
16:01:04 [Paolo] the design of compliance test cases should be central to the interop effort
16:01:41 [dgarijo] eric: seeing R model in owl.
16:01:52 [dgarijo] eric: explain how we did the connections
16:02:00 [dgarijo] ... between 2 serializations.
16:02:22 [Luc] q?
16:02:29 [dgarijo] Luc: the integration aspect is crucial, and potentially can be linked to some docs.
16:02:50 [pgroth_] we have a task force for this?
16:02:51 [pgroth_] no?
16:02:56 [dgarijo] Luc: paolo, do you have an idea of how the compliance test cases should look like?
16:02:58 [khalidbelhajjame] Is it just me who think that talking about interoperability between different serializations of the same model is unusual?
16:03:10 [pgroth_] so maybe we should we start to engage this task force
16:03:31 [stain] I don't think interoperabable X would need to express anything we have in PROV?
16:03:40 [dgarijo] paolo: how do you know that your serialization is compliant with the good one?
16:03:43 [Zakim] -JimMcCusker
16:03:51 [JimMcCusker] gotta run to another call, TTYL
16:03:56 [Luc] q?
16:04:11 [pgroth_] q+
16:04:19 [khalidbelhajjame] +q
16:04:25 [dgarijo] pgroth: task force on this?
16:05:06 [khalidbelhajjame] -q
16:05:09 [dgarijo] Luc: engage the user task force in this.
16:05:13 [Luc] ack pg
16:05:40 [dgarijo] paolo: is it realistic that we develop something to check compliance to the prov model?
16:05:41 [khalidbelhajjame] +q
16:05:54 [khalidbelhajjame] -q
16:05:56 [stain] could we have something similar to a validator?
16:05:57 [dgarijo] Luc: not our job to give certification of compliance.
16:06:00 [stain] it could check the constraints etc
16:06:08 [Zakim] -Curt_Tilmes
16:07:06 [dgarijo] khalid: (too much noise) some ideas to check if a model is compliant?
16:07:28 [dgarijo] @khalid: can you summarize, please? I coulnd't hear you well.
16:08:13 [dgarijo] stian: check that something complies with a set of provenance assertions. It could be syntatic&semantic validation.
16:08:14 [khalidbelhajjame] I think it is not difficult to check the conformance to the model, but it is harder to check that the system use and use correctly the provenance
16:08:19 [Paolo] def need to go further than syntax validation
16:08:32 [dgarijo] @khalid: thanks!
16:08:40 [Luc] q?
16:08:52 [pgroth_] q+
16:09:35 [dgarijo] paul: those who have proposals of what interoperability should be, raise discussions on the mailing list
16:09:46 [Luc] or keep on editing the wiki page ...
16:10:05 [pgroth_] q-
16:10:27 [dgarijo] Luc: action on Paul and Luc to engage the user task force.
16:10:47 [khalidbelhajjame] Ok
16:10:50 [Luc] q?
16:10:55 [dgarijo] Luc: please edit the wiki page if you have further proposals.
16:11:04 [Zakim] -pgroth_
16:11:06 [Zakim] -??P38
16:11:07 [Zakim] -khalidbelhajjame
16:11:07 [Luc] exit
16:11:09 [Zakim] -SamCoppens
16:11:09 [Luc] quit
16:11:10 [Zakim] -dgarijo
16:11:12 [Zakim] -stain
16:11:58 [Zakim] -??P4
16:12:45 [Luc] Luc has joined #prov
16:12:54 [dgarijo] Hi luc
16:13:10 [Luc] hi daniel, thanks for scribing, i'll deal with it from now
16:13:10 [dgarijo] are you going to save the log?
16:13:18 [dgarijo] thanks
16:13:25 [dgarijo] see you !
16:13:29 [Luc] rrsagent, set log public
16:13:41 [Luc] rrsagent, draft minutes
16:13:41 [RRSAgent] I have made the request to generate Luc
16:13:42 [Luc] trackbot, end telcon
16:13:42 [trackbot] Sorry, Luc, I don't understand 'trackbot, end telcon '.
Please refer to for help
16:18:23 [MacTed] MacTed has joined #prov
16:25:27 [stain] trackbot, end telcon
16:25:27 [trackbot] Zakim, list attendees
16:25:27 [Zakim] As of this point the attendees have been Luc, khalidbelhajjame, dgarijo, pgroth_, stain, Curt_Tilmes, bringert, +1.509.375.aabb, SamCoppens, satya, JimMcCusker, [IPcaller],
16:25:28 [trackbot] RRSAgent, please draft minutes
16:25:28 [RRSAgent] I have made the request to generate trackbot
16:25:29 [trackbot] RRSAgent, bye
16:25:29 [RRSAgent] I see no action items
16:25:31 [Zakim] ... Yogesh_Simmhan
Technique to Create Dialogs from Images

This article describes a technique for creating dialogs with any form you want for them. In order to define the form of the dialog, you only have to create a bitmap with your usual graphics program (say, Photoshop or Microsoft Paint). In this bitmap you have to paint the form of your dialog using as many colors as you want (it can be RGB or paletted!), but remember to choose one color to be interpreted as the only transparent (or only opaque) color. It can be any color you want, but you have to know its RGB components, because they will be one of the parameters to pass to the BitmapRegion() function, which does all the hard work for you! The other arguments of this function are the bitmap handle and a boolean argument which tells the function whether to consider the passed color as transparent or opaque. What this function returns is a handle to a region object, hRegion, which can, and should, be passed to the CWnd member function SetWindowRgn(hRegion), in order to complete the definition of the form of the dialog.

Sample 1

    // hBitmap is the handle to the loaded bitmap
    // Here, the transparent color is black. All other colors
    // in the bitmap will be considered opaque
    hRegion=BitmapRegion(hBitmap,RGB(0,0,0));
    if(hRegion)
        SetWindowRgn(hRegion,TRUE);   // TRUE=repaint the window!

Sample 2

    // hBitmap is the handle to the loaded bitmap
    // Here, the opaque color is yellow. All other colors in
    // the bitmap will be considered transparent
    // The last argument tells the function to interpret the
    // passed color as opaque. If this argument is not present,
    // the color is the transparent color!
    hRegion=BitmapRegion(hBitmap,RGB(255,255,0),FALSE);
    if(hRegion)
        SetWindowRgn(hRegion,TRUE);   // TRUE=repaint the window!

This technique provides two ways of using the mentioned function.
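The article doesn't reproduce the body of BitmapRegion(); the usual implementation of such a function scans the bitmap row by row, collects horizontal runs of pixels that differ from the transparent color, and merges those runs into one region. The sketch below shows just the run-collection step, in Python purely for illustration (the function name and the list-of-lists pixel layout are my own; the Win32 version would merge each run into an HRGN with CreateRectRgn()/CombineRgn() or an ExtCreateRegion() buffer).

```python
def region_runs(pixels, transparent):
    """Collect half-open horizontal runs (x_start, x_end, y) of pixels
    whose color differs from the designated transparent color."""
    runs = []
    for y, row in enumerate(pixels):
        x = 0
        while x < len(row):
            if row[x] != transparent:
                start = x
                while x < len(row) and row[x] != transparent:
                    x += 1
                runs.append((start, x, y))   # one rectangle, 1 pixel tall
            else:
                x += 1
    return runs


# A tiny 4x2 'bitmap': 0 stands for the chosen transparent color,
# 1 for any opaque color.
T, O = 0, 1
runs = region_runs([[T, O, O, T],
                    [O, O, T, O]], transparent=T)
```

Passing FALSE as the last argument of BitmapRegion() would simply invert the comparison: runs are collected where the pixel equals the given color instead of differing from it.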
One way is to use it for defining the clipping region of the dialog, as explained in the previous code. The function should be called from the OnCreate() message handler of the dialog, where the bitmap has to be loaded (further explanation below!). With this option the background of the dialog has the plain color of all the dialogs in the system. That is, the dialog has a special boundary form, but it has the same color and appearance as any standard dialog. You can place the controls with the Resource Editor and work with them exactly as you would with a normal dialog. You only have to pay attention to the position of the controls, because they have to be placed on the opaque areas of the bitmap in order to be visible!

The other way to use this technique is to get a completely owner-drawn dialog. Not only are the boundaries of the dialog computed from the bitmap, but also the background image of the dialog! This option needs a bit more code, but it isn't complicated and the effect is really cool (you can use this for the About... dialog of your application!). With this option the background is filled with the bitmap. The controls (if you put some on the dialog) are drawn over the bitmap, so if you want them not to appear, put them on transparent areas of the dialog, or simply leave them out!

In the sample project, I use this option with a dialog on which I let no control be shown (in fact, there remains only an "Ok" button!), and therefore I have to be able to close the dialog in another manner. The solution is to catch the user clicking over a special area of the bitmap. I painted a yellow cross on the bitmap and noted the coordinates, in pixels, of the bounding rectangle of the cross. When the user clicks within this area, the dialog is closed with EndDialog(). In order to compare the coordinates of the pointer with those of the mentioned area, the dialog has to have no caption or title bar, and it has to have no border!
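Both behaviors just described reduce to a single point-in-rectangle test against the close mark's bounding box. A sketch (in Python for brevity; the coordinates 333-354 and 54-77 are the ones used in the article's C++ handlers below, and the strict inequalities match them):

```python
def hit_close_mark(x, y, rect=(333, 354, 54, 77)):
    """True when a client-coordinate point falls inside the close-mark box.

    rect is (left, right, top, bottom); boundaries are excluded,
    exactly as in the C++ comparison point.x>333 && point.x<354 ...
    """
    left, right, top, bottom = rect
    return left < x < right and top < y < bottom
```

In the MFC code, a hit yields HTCLIENT (so the click reaches OnLButtonDown and closes the dialog), while a miss yields HTCAPTION (so Windows treats the press as a title-bar drag and moves the window).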
(Study the sample project for a closer explanation.)

Another feature implemented in this sample project for the dialog with the bitmap painted on the background is that the user can drag the dialog by clicking anywhere on it (except on the yellow cross, which closes the dialog!). In order to achieve this effect, you have to override the OnNcHitTest() message handler of the dialog and return HTCAPTION every time the user doesn't click on the yellow cross. In that case you return HTCLIENT, in order to let the system send the WM_LBUTTONDOWN message to the dialog!

In either case (whether you want your bitmap to be painted on the background or not), you have to insert the following line in the .cpp file of your dialog implementation:

    #include "AnyForm.h"

You have to include the files AnyForm.h and AnyForm.cpp in your project, in order to get things working!

Explanation 1: The bitmap is used only to compute the clipping region of the dialog

This option is shown in the previous photo called "Without". In this case the code is rather simple. You have to override the OnCreate() message handler of the dialog and insert a few lines of code, in order to obtain something like this:

    int CMyDlgWithout::OnCreate(LPCREATESTRUCT lpCreateStruct)
    {
        if (CDialog::OnCreate(lpCreateStruct) == -1)
            return -1;

        // In this case, we don't need to store the loaded bitmap
        // for later use, because we don't put it on the background.
        // We use it only to compute the clipping region, and after
        // the region is set on the window, we can free the loaded
        // bitmap. That's the reason why in this case there is no
        // need for member variables, memory contexts, ...
        HBITMAP hBmp;
        HRGN hRegion;

        // The name of the bitmap to load (if not in the same
        // directory as the application, provide the full path!)
        // If the bitmap is to be loaded from the resources, the
        // name stands for the resource-string identification
        char *name="BitmapWithout.bmp";
        hBmp=(HBITMAP)LoadImage(NULL,name,IMAGE_BITMAP,0,0,LR_LOADFROMFILE);

        //
        // A sample where the red color is interpreted as the
        // opaque color...
        //
        // hRegion=BitmapRegion(hBmp,RGB(255,0,0),FALSE);
        //
        // Here, the transparent color is black:
        hRegion=BitmapRegion(hBmp,RGB(0,0,0));
        if(hRegion)
            SetWindowRgn(hRegion,TRUE);   // TRUE=repaint the window!

        // After this, because in this case we don't need to store
        // the bitmap anymore, we can free the resources used by it.
        // We also don't need here to select the bitmap into a memory
        // context. In fact, we don't need any memory context!!! So:
        // delete the bitmap that we loaded at the beginning.
        if(hBmp)
            DeleteObject(hBmp);

        // And in this case, that's all folks!!!
        return 0;
    }

Explanation 2: The bitmap is used to compute the clipping region of the dialog, and to be painted on the background of the dialog
// // hRegion=BitmapRgn(hBmp,0x00FF0000,RGB(255,0,0),FALSE); //); // After this, because in this case we don't need to store // the bitmap anymore, we can free the ressources used by // it. We also don't need here to select the bitmap on the // memory context. In fact, we don't need any memory // context!!! So: // Delete the bitmap the we loaded at the beginning if(hBmp) DeleteObject(hBmp); // And in this case, that's all folks!!! return 0; } Explanation 2: The bitmap is used to compute the clipping region of the dialog, and to be painted on the background of the dialog This option is shown in the previous photo called "With". This case is a little more complicated, but it can be better understood through the code of the sample project. At first, you have to insert a few member variables in the dialog-class that controls your dialog. The class definition would look similar to this: class CMyDlgWith : public CDialog { ... // Implementation protected: HBITMAP hBmp; HBITMAP hPrevBmp; HDC hMemDC; HRGN hRegion; BITMAP bmInfo; ... }; You have to override the message handlers for the following messages: WM_CREATE, WM_DESTROY and WM_ERASEBKGND. The code for these functions would look like this: int CMyDlgWith::OnCreate(LPCREATESTRUCT lpCreateStruct) { if (CDialog::OnCreate(lpCreateStruct) == -1) return -1; // The name of the bitmap to load (if not in the same // directory as the application, provide full path!) // If the bitmap is to be load from the ressources, // the name stands for the ressource-string identification char *name="BitmapWith); // We ask the bitmap for its size... GetObject(hBmp,sizeof(bmInfo),&bmInfo); // At last, we create a display-compatible memory context! hMemDC=CreateCompatibleDC(NULL); hPrevBmp=(HBITMAP)SelectObject(hMemDC,hBmp); return 0; } void CMyDlgWith::OnDestroy() { CDialog::OnDestroy(); // Simply select the previous bitmap on the memory context... SelectObject(hMemDC,hPrevBmp); // ... 
        // and delete the bitmap that we loaded at the beginning
        if(hBmp)
            DeleteObject(hBmp);

        // Free the memory context we created in OnCreate()
        DeleteDC(hMemDC);
    }

    BOOL CMyDlgWith::OnEraseBkgnd(CDC* pDC)
    {
        // If you only want a dialog with a special border form,
        // but you don't want a bitmap to be drawn on its background
        // surface, then comment the next two lines, and uncomment
        // the last line in this function!
        BitBlt(pDC->m_hDC, 0, 0, bmInfo.bmWidth, bmInfo.bmHeight,
               hMemDC, 0, 0, SRCCOPY);
        return FALSE;

        // return CDialog::OnEraseBkgnd(pDC);
    }

The last thing to do is to provide the dialog a way to get closed by the user. As I mentioned before, the best thing is to draw a special figure anywhere on the bitmap and to catch the clicking of the mouse in this area, by catching the WM_LBUTTONDOWN message of the dialog. Another feature can be easily implemented: permitting 'click and drag' of the dialog by clicking anywhere on it! This can be achieved by catching the WM_NCHITTEST message. The following code shows how this is implemented:

    void CMyDlgWith::OnLButtonDown(UINT nFlags, CPoint point)
    {
        // If the user clicks on the 'x'-mark we close the dialog!
        // The coordinates of the rectangle around the 'x'-mark
        // are in pixels and client coordinates!
        // That's the reason why the dialog resource has to have
        // no border and no caption bar.
        if(point.x>333 && point.x<354 && point.y>54 && point.y<77)
            EndDialog(0);

        CDialog::OnLButtonDown(nFlags, point);
    }

    UINT CMyDlgWith::OnNcHitTest(CPoint point)
    {
        ScreenToClient(&point);
        // Because "point" is passed in screen coordinates
        // (Windows knows why?!) we have to convert it to client
        // coordinates, in order to compare it with the bounding
        // rectangle around the 'x'-mark. If the point lies within,
        // then we return a hit on a control or some other client
        // area (so that the OnLButtonDown message can be sent, and
        // afterwards close the dialog!), but in all other cases,
        // we return the information as if the user always would
        // have clicked on the caption bar.
        // This permits the user to drag and move the dialog by
        // clicking anywhere on it except the rectangle we
        // consider here!
        if(point.x>333 && point.x<354 && point.y>54 && point.y<77)
            return HTCLIENT;
        else
            return HTCAPTION;
    }

Demo project

The demo project shows the two ways of using this technique. The source code is commented, in order to explain things better.

Source code

The source code comprises only the two following files, which you have to include in your project: AnyForm.h and AnyForm.cpp.

Downloads

Download source - 4 Kb
Download demo project - 115 Kb
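The BitmapRgn() helper itself lives in AnyForm.cpp (available only in the download above), but the usual way such a function is implemented is a scanline sweep: every maximal horizontal run of non-transparent pixels becomes one small rectangle, and the rectangles are OR-combined into the final region. The following is only a portable sketch of that core loop, not the article's actual code: the function and type names (ScanOpaqueRuns, Run) are made up for illustration, the GDI plumbing (GetPixel, CreateRectRgn, CombineRgn) is omitted, and a character mask stands in for the bitmap.

```cpp
#include <string>
#include <vector>

// One horizontal run of opaque pixels: row y, columns [x1, x2).
struct Run { int y, x1, x2; };

// Sweep the mask row by row; every maximal run of non-transparent
// cells becomes one rectangle.  A real BitmapRgn would test
// GetPixel() against the transparent COLORREF instead of a char,
// and merge each run into an HRGN with CombineRgn(..., RGN_OR).
std::vector<Run> ScanOpaqueRuns(const std::vector<std::string>& mask,
                                char transparent)
{
    std::vector<Run> runs;
    for (int y = 0; y < (int)mask.size(); ++y) {
        const std::string& row = mask[y];
        int x = 0;
        while (x < (int)row.size()) {
            if (row[x] == transparent) { ++x; continue; }
            int start = x;                              // run begins
            while (x < (int)row.size() && row[x] != transparent)
                ++x;                                    // run ends at x
            runs.push_back({y, start, x});
        }
    }
    return runs;
}
```

On a real bitmap, each Run would become a CreateRectRgn(x1, y, x2, y+1) merged into the accumulated region, and the result handed to SetWindowRgn() as in the article.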
https://www.codeguru.com/cpp/w-d/dislog/bitmapsimages/article.php/c5055/Technique-to-Create-Dialogs-from-Images.htm
CC-MAIN-2018-39
refinedweb
1,766
59.84
On Thu, 5 Feb 2004 22:55:36 -0500 Aron Griffis <agriffis@g.o> wrote:

> Marc Giger wrote: [Wed Feb 04 2004, 06:13:30PM EST]
> > Can someone of you comment why the following patch is needed?
> >
> > diff -ruN glibc-2.3.2.orig/include/features.h glibc-2.3.2/include/features.h
> > --- glibc-2.3.2.orig/include/features.h 2003-06-14 00:28:23.000000000 +0100
> > +++ glibc-2.3.2/include/features.h      2003-06-14 00:58:57.000000000 +0100
> > @@ -285,7 +285,8 @@
> >  #if defined __GNUC__ \
> >      || (defined __PGI && defined __i386__ ) \
> >      || (defined __INTEL_COMPILER && (defined __i386__ || defined __ia64__)) \
> > -    || (defined __STDC_VERSION__ && __STDC_VERSION__ >= 199901L)
> > +    || (defined __STDC_VERSION__ && __STDC_VERSION__ >= 199901L) \
> > +    && !(defined(__DECC) || defined(__DECCXX))
> >  # define __GLIBC_HAVE_LONG_LONG 1
> >  #endif
> >
> > The Compaq C Compiler knows the "long long" datatype like gcc does.
> > It's also of the same size on both compilers
> > (long == long int == long long == 8 bytes).
>
> You're right, this looks broken to me.

Oh, while we are at it: what do you think if we move the libots libs to /lib instead of /usr/lib?

greets

Marc
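Aron's "this looks broken" is easy to check mechanically: in preprocessor expressions, as in C itself, && binds tighter than ||, so the appended `&& !(defined(__DECC) || defined(__DECCXX))` guards only the last || operand (the __STDC_VERSION__ test), not the whole chain. A small sketch with plain booleans (the function and parameter names here are illustrative, not glibc's):

```cpp
// Intended grouping: (gnuc || pgi || intel || c99) && !decc
// Actual grouping:    gnuc || pgi || intel || (c99 && !decc)
// because && binds tighter than ||, exactly as in the patch.
bool have_long_long(bool gnuc, bool pgi, bool intel, bool c99, bool decc)
{
    return gnuc || pgi || intel || (c99 && !decc);  // same grouping as the patch
}
```

In practice the patch still behaves as Marc intends, because a compiler defining __DECC would not also define __GNUC__, __PGI or __INTEL_COMPILER; but the grouping is not what the layout suggests, which is presumably what looks broken.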
https://archives.gentoo.org/gentoo-alpha/message/dcc260a632cf00a2f7a8228c6633d66d
Derek Martin wrote:
> But it IS there... so what's the problem?

The presumption that it is.

> A simple minimal ESMTP engine might be more convenient -- and numerous
> solutions for that are available for mutt -- but being able to choose
> to use a full-fledged MTA like sendmail offers the user (or system
> administrator) a great deal of power.

What you're missing is that by simply putting "localhost" those people lose nothing in the deal. Not a damn thing. It isn't an either/or situation by a long shot.

> In some ways, it's a benefit: numerous SMTP engines already exist, so
> not including one makes Mutt easier to debug and maintain, which makes
> its overall code quality better. This is the Unix Philosophy.

Everyone keeps saying this without thinking it through. Then why is it that there are hundreds of other clients for dozens of other protocols which don't have a similar breakdown into "transport" versus "viewer", and no one complains they are so complex? Sorry, whenever people use this cop-out it is because they just don't want to think it through. They think that somehow mail is the only protocol on the whole internet which must be broken down. It is a religious argument, one backed more by faith than thought.

> It also provides the flexibility of allowing the user or system
> administrator to choose the SMTP engine they prefer, rather than
> forcing you to use theirs, which you may not like at all.

Again, "localhost" and you lose... nothing.

>)...

Not quite. A patch involves patching and recompiling mutt. The extensions to Thunderbird come with their own installer. There's a world of difference between having to install a compiler, patch source and spend however many minutes recompiling, versus clicking on a link in a browser, downloading an installer, then clicking on that installer in Thunderbird.

>.

And, oddly enough, those questions fail more often than not. Know how many times the exim4 scripts worked for me? 0. And I have a simple setup.
"Internet host"

> Fetchmail does a fine job of retrieving the mail.

No, it doesn't, since it has to feed it through an MTA and does no filtering.

> As far as procmail being line noise, that's just poppycock.

No, it's not. Compared to exim's filtering it *is* line noise.

    # Debian-user
    if $h_List-ID: contains "<debian-user.lists.debian.org>"
    then
        save Mail/debian-user
    endif

What does procmail's filter start with?

    :0

Yeah, how anyone could mistake that for line noise is beyond me. Clearly it means.... uh... it means... colon zero? No, wait......

It isn't about regular expressions, bucko. Let's see: pushing 10 years of experience in Perl (which looks like line noise), 3 of which professional at a major ISP. 5+ years in Python. My favorite tool is regex since I do a ton of text processing. Regex isn't the problem. It is the syntax outside of regex that's the problem.

>.

Really. And you presumed I didn't know regex because....... oh, you just assumed.

> Well, the numbers are silly, but the point is Mutt will not break
> until long after any of your favorite GUI programs will.

Which is long after my needs and those of pretty much every person in existence.

> Please try to remember that Michelle is not a native English speaker,
> and is trying to make his points as best he can in a language which is
> not his first.

Actually I think he did a fine job. Did I comment on his language once? Nope. I didn't assume anything, unlike you.

> Here again you display your unwillingness to learn how to use your
> tools. Mutt's various hooks offer immense power;

Here's your assumption again. I am well aware of how the hooks work. I really dislike the fact that there's no inheritance. Amazingly enough, if you look at Sylpheed-Claws they get it right. The folders inherit the default settings from the parent folder but can be configured in a variety of ways; almost as much as mutt offers.
In that way one *can* set up the folders individually, one just doesn't *have* to set up the folders individually.

> they can give you the same functionality as "personalities", though
> you need to think about it a little differently.

Personalities are the bane of mail. Eudora and Pegasus Mail should be shot for using personalities and ingraining that flawed meme on the collective Internet mind.

> It's a LITTLE more complex to set up than filling out a
> "personalities" dialog in some GUI, but in return you get an ENORMOUS
> amount of additional power.

No, it's more complex because of the lack of inheritance. Congratulations, you're now catching up to where I've been for years. Tell me how exactly getting defaults from a parent folder causes problems, so long as the folders themselves are individually configurable. Remember that the simple case should be made simple, and the vast majority of people, when they create a new folder, would like the same settings as whatever that folder is under.

> Sorry, but no you don't. folder hooks match patterns of folder names,
> so you can have a folder hook matching Mail/account1/.* and all mail
> folders under that will match. There's your inheritance.
> Man, it sucks to be wrong, after being so adamant, doesn't it?

Ahhh, yes, the old "filter out and screw up from there" routine. Don't buy it. I mean, why have to filter things out when it should be done from the onset? Sucks to think you're right and slip up, huh?

> I've thus far resisted the overwhelming temptation to call you
> names... but frankly I think you deserve to be called names. You are
> being extremely arrogant, and you're just plain wrong.

Sorry, but when someone lies I call 'em on it. What, people can only call other people liars when the accused's last name is "Bush"?

>> As pointed out you can't do SMTP, you need to configure multiple
>> things externally so my second case is what you do.

> 1. Yes, you can do SMTP with the ESMTP patch.
Recompile, non-consideration.

> 2. Yes, you can do SMTP with sendmail, or a different MTA, if you
> prefer.

Which can be done with "localhost" in the smtp setting.

> 1. Use different configurations of Mutt on different machines (maybe
> log in remotely over VPN to a machine at work, or whatever)

Yeah, that works. Memory footprint goes up and now you have multiple windows to track instead of one. We're talking about a single client here, not multiple.

> 2. Mutt's configuration can be entirely generated by a program. Use
> such a program to generate your config based on where you are. I have
> done this. Er, I mean I am doing this.

Wow, and did you know you could edit Thunderbird's by hand! Catching up!

> 3. Use a boot script to reconfigure your laptop's MTA based on the
> network configuration (IP address, hostname, etc.) it gets at boot
> time. I have done this in the past as well.

Yes, because that's far simpler than a single line in the account's setup. Plus this does not address the problem of home mail going over work's MTA, or vice versa.

>.

Which I have. 3 years as a mail admin, thanks. My point has never been that it isn't possible. My point is that it is not even remotely as simple as people claim, and that there are problems involved with the process.

>.

I'm all for more power in the end user's hands. I'm just for more power in the end user's hands while providing them a simple and sane way to do the most common things QUICKLY AND EASILY. It shouldn't take several hours to be able to send an email, or months to set up a decent way of reading mail. That should be minutes after install.

So no, I will not "shut up" just for your convenience. Don't like the facts, tough. They're immutable.

-- 
Steve C. Lamb     | But who decides what they dream?
PGP Key: 8B6E99C5 | And dream I do...
-------------------------------+---------------------------------------------
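For what it's worth, the ":0" being ridiculed in this thread is only the opening of a procmail recipe; the full equivalent of the exim filter quoted earlier would be roughly the following (the destination folder is assumed to match the exim example):

```procmail
# Debian-user
:0:
* ^List-ID:.*<debian-user\.lists\.debian\.org>
Mail/debian-user
```

The `:0` starts the recipe, the second colon requests a local lockfile, the `*` line is an egrep-style condition matched against the headers, and the last line is the destination mailbox. Whether that counts as line noise is left to the reader.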
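Likewise, the folder-hook "inheritance" argued over above can be sketched in a muttrc. Hooks are executed in order when a folder is opened, so a broad pattern supplies the defaults and a later, more specific pattern overrides them; the addresses and paths here are illustrative only:

```muttrc
# defaults for everything under Mail/account1/
folder-hook 'Mail/account1/.*' 'set from="me@account1.example" signature="~/.sig-account1"'
# a more specific hook, applied later, overrides the inherited default
folder-hook 'Mail/account1/lists' 'set from="lists@account1.example"'
```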
https://lists.debian.org/debian-user/2006/09/msg00041.html
Looks like the Fire Brigade have switched to digital P25 in Auckland. Anyone able to confirm?

There was a large 24hr outage of Police and Fire dispatch systems last week for a major upgrade. During this time Fire operated with no selcalls, and with manual dispatch and paging for the duration of the upgrade. Part of the upgrade was to improve the reliability of the selcall system.

Yep, thanks. They were very quiet last night, nothing for over an hour, so I asked my mate if they had moved to encryption. He said the other week they got new radios like the Police have, and had most likely changed over. But it turns out they were only analogue 8200s, not 9100s, so they are still operating on analogue.

Virgil: I was just up in Auckland yesterday, and it seems like they have moved away from VHF and gone to UHF. I heard nothing on the usual 76.375 but good and strong on 486.950 (Sky Tower). Cheers

There has been a repeater somewhere around there transmitting on FM too. Used to mess with my radio before I had the aux line fitted.

#include <std_disclaimer>
Any comments made are personal opinion and do not reflect directly on the position my current or past employers may have.
https://www.geekzone.co.nz/forums.asp?forumid=48&topicid=242586&page_no=1
Hi, I am having serious problems running a very simple example of Qt Mobility multimedia (the Audio element). My system is WinXP and I am using the latest Qt SDK (1.1.2). I built Qt Mobility 1.2.0 successfully following the installation page. I added to my path, as suggested on many sites:

    C:\QtMobility\%TARGET_DIR%\lib;
    C:\QtMobility\%TARGET_DIR%\bin;
    C:\QtMobility\%TARGET_DIR%\include;

I added:

    CONFIG += mobility
    MOBILITY += multimedia

to my .pro file. In my qml file I have import Qt 4.7 and then import QtMultimediaKit 1.1 at the top.

The simulator runs with a blue screen, and then I cannot close the simulator window; it seems the process never ends. I am just getting "Could not connect to mobility server" in the General Messages section (below the "Compile Output" view).

Please, I am really going crazy with this problem; it took many hours to get the Qt Mobility API onto my system, and I really need to use the multimedia module. I don't know what else I can do. A helping hand would be really appreciated.

Thanks
Juancito

Screenshot:
http://developer.nokia.com/Community/Discussion/showthread.php/227605-Simulator-stops-when-running-qt-mobility-simple-example
Can someone explain this code to me, please? I am just learning Swift/SwiftUI but getting frustrated with classes. I understand what the outcome is, but I don't get why it's written the way it is. Looking on Google to learn about classes, it's just the same stuff over and over; no one has real-world examples. If there are any courses on here that will help, then please let me know. I have iOS Apprentice but have not looked at it for a while.

import Foundation

class OrderListViewModel: ObservableObject {
    @Published var orders = [OrderViewModel]()

    init() {
        fetchOrders()
    }

    func fetchOrders() {
        Webservice().getAllOrders { orders in
            if let orders = orders {
                self.orders = orders.map(OrderViewModel.init)
            }
        }
    }
}

class OrderViewModel {
    let id = UUID()
    var order: Order

    init(order: Order) {
        self.order = order
    }

    var name: String {
        return self.order.name
    }

    var size: String {
        return self.order.size
    }

    var coffeeName: String {
        return self.order.coffeeName
    }

    var total: Double {
        return self.order.total
    }
}
https://forums.raywenderlich.com/t/classes-are-confusing-me-beginner/113956
Live Production Clojure Application Announced

It was announced recently on the Clojure Google Group that a hospital services system has been developed, in part using Clojure, and has been put into live production use in a major veterinary hospital. The product appears to use several languages and technologies, but Clojure appears to play an important role. This announcement carries some significance, as it is one of the first published reports of Clojure being used in a large-scale production deployment, particularly one as sensitive as a hospital environment. As a language, Clojure is relatively young, having only been under development for a few years.

The core of the product is an HL7-compliant message bus. The routing and archiving of messages, as well as the fault tolerance and error handling of the bus, are all controlled by Clojure:

We designed an HL7 message bus to link several services within the hospital. Presently we link the medical record system with the radiology department. The main benefit is to avoid re-keying information about patients and requests in every system. We also provide some key applications on the bus to allow people to share information in a consistent way alongside the system they use on a daily basis. It's like a Babel tower: radiologists want to work with their radiology system while the administration wants to work with the medical record system to bill, ... each of these systems meets the specific needs of a user base. However, there is a need for a common ground to share information. That's what our product offers.

There are a number of other technologies and specifications listed as used in the application, including:

The Clojure language has been built with an emphasis on concurrent programming. With a software transactional memory model and a threading dispatch framework referred to as the agent system, Clojure offers a number of features to communicate state between threads in a consistent and safe manner.
More information about Clojure is available on the main website, and the Clojure Google Groups are the best place to follow community adoption and other announcements.

2,000 transactions an hour by Russell Leggett

Re: 2,000 transactions an hour by Kurt Christensen

Ruby on Rails by John Wells

Sorry to sound fanboi-ish... but I'm working on a Grails app currently and loving it!

Re: Ruby on Rails by Stefan Tilkov

Re: Ruby on Rails by Stefan Tilkov

Re: 2,000 transactions an hour by Luc Prefontaine

... 2,000 transactions an hour represent only that primary data flow. In fact the whole system carries much more traffic, since all these events trigger a number of processing steps that themselves generate traffic. You can easily multiply the above number by a factor of 6 in terms of message passing in the whole system. Later this year the traffic will increase a lot, since we will hook other systems to the bus (labs, pharmacy, ...), and we do not expect to change the hardware. It has been sized accordingly; we are not dealing with a financial system here, we are dealing with livestock (it's a vet hospital). Find me a hospital where you would handle 800 admissions an hour... Maybe in the US they have these big hospitals, but here we're not that big. A vet hospital works in a similar way to a human-centric hospital: the equipment is the same, and administration software is almost identical in terms of functions and workflow.

BTW, I did not expect this to make the news so fast :))) If anyone has other questions, I'll track this post. In the next couple of months we will have a website describing the product.

Luc Préfontaine, SoftAddicts Inc.

Re: Ruby on Rails by Luc Prefontaine
Since Clojure and JRuby allow access to Java from anywhere and especially regarding Clojure in a very non obtrusive way, we find our technology choices very sound. The chosen technologies are used in the proper spot. We do not believe that a single language can hammer all the nails of the world... We disliked the idea of using Struts to spend weeks create a management tool. With Rails and ActiveScaffold this shrank to days and allowed to us to change the management app on a dime when required. We may look at Grails but you must realize that we need somehow "mature" technologies, not elderly ones, not the ones that are in their infancy but something in between and of course there has to be some interesting payoffs along the way. Clojure was an obvious choice of us because of the distributed nature of the product. Terracotta for the same reason was also an obvious choice. We find that these two technologies have a lot of synergy potential The 1.0 version of Clojure should be out pretty soon but the pre v1.0 releases were so rock solid that we did not have any fears of using it for a real application. Luc Préfontaine, SoftAddicts Inc. Re: Ruby on Rails by John Wells I understand your points. I suggest you pick up the latest copy of the Definite Guide to Grails. When I read the first edition over a year ago, I felt the project wasn't mature enough. The second edition has definitely changed my mind. And no knock against JRuby here...I use it a good bit as well. But if you want a Rails-like framework with a Ruby-like dynamic language that maintains the level of Java integration that Clojure has, Groovy/Grails is the way to go. This is one area in which JRuby simply can't compete. Hope this helps. Re: Ruby on Rails by Luc Prefontaine a few screencasts. Bottom line, too much code to write compared to Rails/ActiveScaffold/ActiveRecord. We get list/search/edit/delete/... screens with 10 to 20 lines of configuration in Ruby. No need to defined gsp files, ... 
As far as JRuby access to Java, it compares favorably with Clojure. Clojure does it better however mainly because of the syntatic sugar added to it and the choice made to use some Java data types as Clojure types. We mainly use JRuby to run Rails in a Java server app. So Java access is rarely an issue. Clojure has its own data types and that's fine otherwise there would not be any advantage of creating a new language. Some level of conversion is required from time to time (Java arrays are not Clojure arrays) but that's not a big deal if you confine Java calls to low-level stuff. That's the key point here, we want to exit the Java world as much as possible because of these thousands of configuration and generated code lines required to accomplish anything significant in Java. We keep Java stuff confined and emphasize using more expressive languages to get the job done in less time that's a productivity gain. Clojure do this quite well for us. It's a very good fit for concurrency and it's use of immutable data by default makes it a lot more resilient to programming mistakes. It allows us to use Java libraries when required and that's more than enough. We do not want to code in Java, just bridge with it when necessary. A Clojure equivalent to ActiveRecord is own it's way and that will allow access to databases in a dynamic fashion which is exactly what we expect from a language like Clojure. As the product matures we will avoid having any business logic in the Java code and keep it in the Clojure world. Luc SoftAddicts Inc. Re: Ruby on Rails by John Wells > screens with 10 to 20 lines of configuration in Ruby. > No need to defined gsp files, ... Again, Grails is the same or better in this regard. To get all of the above in Grails, you just specify: def scaffold = ModelClass in a controller and your done. Don't want to use the default? Want all of the scaffold for free yet be able to modify it? $ grails generate-all from the shell. 
This will generate all you need yet let you edit it. I don't think a few screencasts is giving you enough info. I really suggest picking up the Definitive Guide to Grails (2nd ed) and giving it a look. Still, if Rails is working for you and it's done, great! Good luck! Re: Ruby on Rails by Mark Wutka Are you using Clojure and Terracotta together, or are they used in separate places? It would be really exciting to find out that you are doing STM with Clojure and Terracotta. Mark Re: Ruby on Rails by Daniel Sobral How do you few STM has handled performance-wise? And, on the other hand, did STM bring anything worthwhile to the project? Re: Ruby on Rails by Luc Prefontaine custom Java classes. That's one reason why we would like to get the Clojure universe shared on multiple VMs instead of having to use a Java layer. It will be easier to share stuff directly once the Clojure runtime is adapted to fit with a terracotta configuration refering to Clojure's internal Java classes. The goal is to be able to use STM, atoms and all the nice stuff directly from Clojure transparently. It's on it's way but we are a bit stretched since before Xmas. We try to get stuff prototypes ASAP, after that the modifications have to be merged in the Clojure distribution. Luc Re: Ruby on Rails by Luc Prefontaine need that Clojure/Terracotta integration to open the way to better parallelism. We have to preserve ordering of the events fed to the bus from the systems attached to it. To circumvent this and increase parallelism, we need a better data representation than a huge bunch of ActiveMq queues. Managing multiple queues to implement this looks to us like a night mare. Instead of building it from Java with x000 lines of code we are betting on the Terracotta/Clojure integration. To us it looks like a better investment for the future of everyone, us, Clojure and software in general. The Terracotta team has done an impressive job. I used to work with VMS clusters in the 80s/90s. 
VMS engineers released Galaxy in the 90s where you could build a virtual machine from chunks of physical computers including memory, disks, ... and I find Terracotta quite inspired by these ideas. As soon as the prototype lifts off, we will test STM on a large scale. I read about pros and cons of STM but I think it's more a choice of field experimentation thank real obstacles. I worked with Ingres when relational databases came out of universities and started to emerge in the commercial market. I remember working hard to find out about the behavior of the optimizer. Query plans where primitive, the order of your where statement clauses would have an impact on the optimization engine and so on. It was more or less a blackbox. My personal thinking is that these things can improve over time while not breaking your code entirely. It proved to be true with relational databases over the last 25 years. With the traditional lock model, well your code will break, get fixed, then break in the next release, then... Having worked with thread and pre-thread models, I do not see any improvement in this area over years. Coding a heavy multi threaded system today remains as difficult as it was 10 years ago. I'm talking about systems with more than 20 threads here all interacting on the same data structures, not a typical Swing application... With STM we need to experiment, we need to use production ready tools to support experimentation and we need "real" problems to find out about limitations otherwise it remains an academic exercise. Rich Hickey has put serious thinking in this area and came out with an implementation in Clojure that merits stressing it and see how far it can be pushed. Terracotta has real good product for sharing stuff and we think that both softwares are a good fit. I think it's worth a serious attempt to make it work on a large scale.
http://www.infoq.com/news/2009/01/clojure_production
Ralf Baechle wrote: > > On Tue, Nov 27, 2001 at 01:04:06PM +0900, Atsushi Nemoto wrote: Not a bad idea in context of fish and chips. > > Well, talk to it's developers before it's too late. Or as it has already > happened for some hardware I think we should simply go with your > suggestion and make all those functions vectors. > I don't know about the details of function vectors, but would imagine it would be 1) slow, 2)clumsy with extra layer of abstraction and 3) intrusive to the most common cases which is the current io file. My suggestion is to have a separate io file for the board, and then add the following to the beginning of io.h: /* for tasteless boards */ #if defined(CONFIG_TOSHIBA_TASTELESS) #define _ASM_IO_H /* so that we don't include the rest */ #include <asm/toshiba/tasteless_io.h> #endif This way not only we have minimum impact to the majority, but also easier to remove such ad hoc support when toshiba becomes more tasteful. Jun
https://www.linux-mips.org/archives/linux-mips/2001-11/msg00277.html
CC-MAIN-2016-50
refinedweb
171
65.35
write a binary UNC meta image data More... #include <vtkMetaImageWriter.h> write a binary UNC meta image data One of the formats for which a reader is already available in the toolkit is the MetaImage file format. This is a fairly simple yet powerful format consisting of a text header and a binary data section. The following instructions describe how you can write a MetaImage header for the data that you download from the BrainWeb page. The minimal structure of the MetaImage header is the following: NDims = 3 DimSize = 181 217 181 ElementType = MET_UCHAR ElementSpacing = 1.0 1.0 1.0 ElementByteOrderMSB = False ElementDataFile = brainweb1.raw MetaImage headers are expected to have extension: ".mha" or ".mhd" Once you write this header text file, it should be possible to read the image into your ITK based application using the itk::FileIOToImageFilter class. Definition at line 73 of file vtkMetaImageWriter.h. Definition at line 76 of file vtkMetaImageWriter.h. Return 1 if this class is the same type of (or a subclass of) the named class. Returns 0 otherwise. This method works in combination with vtkTypeMacro found in vtkSetGet.h. Reimplemented from vtkImageWriter. Reimplemented from vtkImageWriter. Methods invoked by print to print information about the object including superclasses. Typically not called by the user (use Print() instead) but used in the hierarchical print process to combine the output of several classes. Reimplemented from vtkImageWriter. Construct object with FlipNormals turned off and Normals set to true. Specify file name of meta file. Reimplemented from vtkImageWriter. Specify file name for the image file. You should specify either a FileName or a FilePrefix. Use FilePrefix if the data is stored in multiple files. Reimplemented from vtkImageWriter. Definition at line 88 of file vtkMetaImageWriter.h. Specify the file name of the raw image data. Specify the file name of the raw image data. Definition at line 98 of file vtkMetaImageWriter.h. 
Definition at line 102 of file vtkMetaImageWriter.h. The main interface which triggers the writer to start. Reimplemented from vtkImageWriter. Definition at line 115 of file vtkMetaImageWriter.h. Definition at line 117 of file vtkMetaImageWriter.h.
https://vtk.org/doc/nightly/html/classvtkMetaImageWriter.html
CC-MAIN-2019-39
refinedweb
351
61.43
Created on 2019-12-27 18:15 by twister68@gmail.com, last changed 2020-01-03 22:16 by terry.reedy. This issue is now closed. Trying to set up shortcut function to clear screen but its not working as expected on my Mac OS Catalina -- below is txt from idle import os >>> cls= lambda: os.system('clear') >>> cls() 256 Unfortunately, this is not going to work as you expect because you are mixing commands for different windowing systems. The OS-level 'clear' command is used to clear a normal terminal window by issuing special character sequences to standard output that are recognized by the terminal emulator, like for instance Terminal.app on macOS. When you run commands using os.system() under IDLE on macOS, the standard output descriptor for the subprocess created by os.system is not handled by IDLE. Currently open Issue11820 describes this problem. You are apparently running IDLE.app (by launching the app from the Finder) and the command fails because standard output of the subprocess is not associated with a terminal window. If, instead, you launched IDLE from a terminal window (in Terminal.app): /usr/local/bin/idle3.8 and ran your test, you would see that the 'clear' clears the Terminal.app window, not the IDLE shell window (and now returns a status of 0), still not what you are looking for. Even if Issue11820 is implemented such that the standard output from subprocesses are associated with the IDLE shell window, there would still be a problem in that the special character sequences sent by the 'clear' command would most likely not be recognized by the IDLE shell window or its underlying Tk widget, so using the OS 'clear' command still would not work. A better solution is for IDLE to provide its own 'clear' command and that is the subject of open Issue6143, which this issue should probably be closed as a duplicate of. Among other problems, the patch for #11820 uses a unix-only tcl function. 
In non-shell editor and output windows, Select All (Control-A, at least on Windows) followed by Delete (or Backspace, or Cut to clipboard, ^X on Windows) clears the window. Delete does not work for the read-only history portion of the Shell, hence the request for a separate menu item. However, if an editor window is open, one can still get a clean Shell by closing Shell and re-opening it with Run => Shell, or by running the file being edited.
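The 256 in the transcript above is worth decoding: on POSIX systems os.system() returns the raw wait status rather than the child's exit code, so a command that exits with status 1 shows up as 256 (1 shifted left 8 bits). A small sketch:

```python
import os

# os.system() returns the raw wait status on POSIX, not the exit code.
# A child exiting with code 1 therefore shows up as 256, which is
# exactly the value printed in the IDLE transcript above.
status = os.system("exit 1")                   # /bin/sh exits with code 1
exit_code = os.waitstatus_to_exitcode(status)  # Python 3.9+

print(status)     # 256 on POSIX
print(exit_code)  # 1
```

So the 256 returned by cls() was the failing 'clear' command's exit code 1, encoded in the high byte of the wait status.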
https://bugs.python.org/issue39141
Namespaces in Yii 2.0?

#21 Posted 13 June 2011 - 10:09 AM
Namespaces are indeed useful - although they don't meet the same requirements as namespaces in statically compiled/linked languages, that is a natural deficiency stemming from the dynamic nature of the PHP language, and not a shortcoming, as I had originally thought. With regards to Yii, the fact of the matter is that, like many other frameworks, Yii currently "emulates" namespaces, which is no longer necessary (or good) since PHP now has native syntax and support for "real" namespacing. Moving away from the "emulated" namespaces and making use of native namespacing in PHP gets my two thumbs up! We live and we learn... :-)

#22 Posted 06 August 2011 - 02:43 PM
qiang, on 27 December 2010 - 07:50 PM, said: +1
Arguments like "I came from C++/.Net/Python" are cheap because PHP is different by ideology. We have all native functions and classes in the global namespace, as well as 99.9% of code. The classes are the namespaces in PHP. The main reason to use namespaces is to avoid conflicts when integrating two or more ready-made applications. If you use namespaces a lot, you end up copy-pasting a long line like

use somenamespace\class, somenamespace\class2, somenamespace\class3, \Date, \ArrayObject, \ArrayAccess;

in every file of your application. You will get happy debugging with behaviours attached from different namespaced objects, and with lambda callbacks. NS are neither good nor bad. They are just not a language primitive you find in C/Java/.Net/Python.

#23 Posted 06 August 2011 - 04:01 PM

#24 Posted 06 August 2011 - 06:01 PM
jacmoe, on 06 August 2011 - 04:01 PM, said: It's not resistance, it's real life. NS are not related to includes; no packages or other resources are involved, and there are no wildcard name imports like use namespace\*. It is all about class and function name conflicts, nothing else. Did you try to develop a serious PHP project intensively using namespaces? I did.
#25 Posted 07 August 2011 - 03:20 AM
It really surprises me to hear that 'using namespace' is not supported in current PHP. Is that so? I'd think a simple define would do the trick then?

#26 Posted 09 August 2011 - 09:36 AM
Quote
use \Yii\ListView
will allow you to be "using" namespaces in other namespaces, etc. But there is a problem. What we all have to remember is that PHP's namespaces don't work like .NET's. PHP's implementation of them is very primitive. Let me give you an example. You have a namespace, Yii. Within that namespace you have a sub-namespace named ListView. Now, to get a function or class within that namespace, you can't autoload the namespace, so you have to eager-load it into the framework. For example, with \Array\Iterate, spl_autoload_register will not see the namespace Array, only the class (yes, CLASS, not function) Iterate. It will not autoload functions either. The whole point of namespaces is to put functions and classes outside of global scope, but if you cannot maintain lazy-loaded classes without much more work, what is the point? This is a problem! Imagine having to load all your files because of namespaces; suddenly you no longer have a glue framework. Yii, to me, is the framework since it is fast; loading the entire framework on every single page will suddenly make it slow. If PHP allowed lazy loading of namespaces then I would say go namespaces. But because it doesn't, namespaces should stay in the app and not its framework. The framework is too global to be segmented by namespaces. This makes namespaces useless in the framework itself for me, and it is one of the main reasons why I disagree with Lithium.

#27 Posted 15 August 2011 - 01:54 AM
Sammaye, on 09 August 2011 - 09:36 AM, said:
1. How do you see this technically? The first problem is the rules for the names-to-files mapping.
The "NS=folder" rule is simple and reliable, but not convenient for complex structures similar to Java/.Net. The next problem: PHP scripts are torn down after each request and can't remember the folder structure. OK, suppose we have some packages and a static map in a config file describing a complex NS hierarchy mapped to plain files.
2. Yii supports lazy loading through its import() method for classes. Maybe we just need custom autoload rules support - like a user-defined class, function and NS map per file?

#28 Posted 15 August 2011 - 02:37 AM
Quote
Yes, that is the way to solve NS loading problems, but the first hurdle to lazy loading namespaces is to get spl_autoload_register to see a namespace when you're trying to load it; otherwise you have to call it specifically. I suppose this could be solved through the current widget handling and module handling, but I am sure there are parts of the framework that would be tiresome to do via predefined functional autoloaders. I just see that using a predefined Yii autoloader for everything, instead of the default PHP handler which the Yii autoloader then attaches to (via spl_autoload_register()), could make a lot of work on the app end even if you're not using namespaces in your app.

#29 Posted 12 September 2011 - 12:58 AM

#30 Posted 26 October 2011 - 02:23 AM

#31 Posted 27 November 2011 - 06:48 AM

namespace Yii\plugins\ThirdParty\Janes {
    class DoFlip { }
}

namespace Yii\plugins\ThirdParty\Johns {
    class DoFlip { }
}

use Yii\plugins\ThirdParty\Janes\DoFlip as JaneForwardFlips;
use Yii\plugins\ThirdParty\Johns\DoFlip as JohnBackFlips;

$janeDoFlips = new JaneForwardFlips();
$johnDoFlips = new JohnBackFlips();

Thus, third-party plugin developers are able to name the class and not have to worry about conflicting class definitions.
Here is an autoloader (used in my PureMVC PHP port) to load namespaced classes:

function __autoload( $class )
{
    //if( stripos( $class, '\\' ) !== false )
    if( stristr( $class, '\\' ) )
    {
        $file = APP_DIR . str_replace( '\\', '/', $class ) . EXT;
        if( file_exists( $file ) )
            require_once $file;
        else
            throw new \Exception( 'Class ' . $class . ' not found at ' . $file );
    }
    else
    {
        $_basePaths = array(
            PMVC_BASE_DIR . 'org/puremvc/php/patterns/facade/',
            PMVC_BASE_DIR . 'org/puremvc/php/interfaces/',
            PMVC_BASE_DIR . 'org/puremvc/php/core/',
            PMVC_BASE_DIR . 'org/puremvc/php/patterns/',
            PMVC_BASE_DIR . 'org/puremvc/php/patterns/command/',
            PMVC_BASE_DIR . 'org/puremvc/php/patterns/mediator/',
            PMVC_BASE_DIR . 'org/puremvc/php/patterns/observer/',
            PMVC_BASE_DIR . 'org/puremvc/php/patterns/proxy/'
        );
        $classPaths = array_merge( explode( PATH_SEPARATOR, get_include_path() ), $_basePaths );
        foreach( $classPaths as $classPath )
        {
            $file = $classPath . $class . EXT;
            if( file_exists( $file ) )
            {
                require_once $file;
                return;
            }
        }
        throw new \Exception( 'Class ' . $class . ' not found in ' . implode( PATH_SEPARATOR, $classPaths ) );
    }
}

#32 Posted 27 November 2011 - 07:11 AM
Enjoying Yii? Star us at github
Support me so I work on Yii fulltime:

#33 Posted 27 November 2011 - 07:21 AM

#34 Posted 27 November 2011 - 09:44 AM
It's almost funny to me at this point, the idea of even considering not using namespaces. Look around on github and various other frameworks, and see how widespread the adoption of namespaces has become. It's simply considered good practice at this point. And it has taken less than a year to get here. PHP 5.3 is available on most servers now. On a related note, has anybody else played around with PHP 5.4 yet? I have, and it's pretty awesome. Traits, for one, is a brilliant feature - it basically replaces CBehavior with a native language feature.
Given the quick adoption of PHP 5.3, I personally would say, if Yii 2.0 is more than a year from release (as I'm sure it is), that I would like to see Yii 2.0 built for PHP 5.4. By the time Yii 2.0 is released, PHP 5.5 (or 6?) will probably already be out, and Yii 2.0 will start out on a feature set that is already dated. Building something as big as a framework is probably 1-2 years of work (at least), so I would strongly recommend targeting the cutting-edge version of the language - by the time you release, this stuff will be considered old news, and your codebase might already look dated at release. Just my opinion...

EDIT: the question we really should be asking ourselves is not "should we use namespaces", but rather "what version of PHP should we target?" - and take into account your projected release date.

This post has been edited by mindplay: 27 November 2011 - 09:46 AM

#35 Posted 27 November 2011 - 12:13 PM
Quote
I did. Traits aren't able to replace the current CBehavior because of their lack of state. They're almost that, but what we have is a bit more.
Quote
Yes, we've considered this and it was discussed on the Yii2 forums. No real benefits to using 5.4 in the end.
Quote
That's from scratch, but we have a good foundation.

#36 Posted 17 January 2012 - 05:17 AM
Is it possible to move this topic to "Design Discussions for Yii 2.0"?

#37 Posted 17 January 2012 - 08:59 AM
samdark, on 27 November 2011 - 12:13 PM, said: Or less, depending on how you look at it. There are very good reasons (by design) why traits don't have, don't need, and should not have state. You already have what you need to maintain state: objects. That's the whole idea of OOP. Introducing another kind of entity also responsible for maintaining some state is impure, and it leads to complexity. Letting your objects maintain their own state makes much more sense.
Learn to use PHP traits and think through some examples of Yii behaviors - I'm pretty sure you'll come to see that it makes much more sense when you design with this kind of clear-cut separation.

samdark, on 27 November 2011 - 12:13 PM, said: Really? For one, you would have the short array syntax, which would be extremely convenient given how everything in Yii is configured using arrays. (I realize this is a purely syntactic improvement, but it is one of the most widely demanded improvements in the history of PHP.) For another, well, traits - which, if you understand their justification, will lead to better, cleaner design. Once PHP 5.4 is established and widely available, having a competing custom mechanism for horizontal extensibility will lead to inconsistencies in code, as well as an unnecessary learning curve. By the time Yii 2 is ready for prime time, I'm pretty sure PHP 5.4 will be, too; since it's already out, it has a significant head start over Yii 2. JSON serialization support, upload progress monitoring and better support for closures, to name a few others. IMHO, anyone who's building a framework from scratch should build for the latest version of the language. You're building for the future, right? Not for the past - there's enough software for that already! :-)

#38 Posted 17 February 2012 - 03:58 PM

#39 Posted 17 February 2012 - 05:42 PM
phpnode, on 17 February 2012 - 03:58 PM, said: or encourage adoption of PHP 5.4? :-)

#40 Posted 18 February 2012 - 03:27 AM
A generic framework should try to be usable by as many people as possible and not use any cutting-edge features if there's no big advantage. You would only lock out some developers this way.
http://www.yiiframework.com/forum/index.php/topic/14385-namespaces-in-yii-20/page__st__20__p__139290
Re: 11G Statistics Collection
Date: Mon, 16 Apr 2012 22:47:22 +0000 (UTC)
Message-ID: <jmi7hq$vcm$1_at_solani.org>

On Mon, 16 Apr 2012 22:15:49 +0200, Robert Klemme wrote:
> I don't understand: if data distribution at the time of "nailing" favors one
> index while at other times access via this index is slower than FTS, how
> then would you call that a "working approach" - other than that the query
> will eventually return (modulo snapshot too old or other issues)?
>
> Kind regards
>
> robert

Robert, the greatest problem in the system room is called "instability". If one nice morning end users discover that what performed well the night before is now horribly slow, the DBA is in trouble and his credibility is undermined. On the other hand, a change caused by data volume is never drastic; it is always gradual: there are a few more blocks, the query will be a few milliseconds slower. That I can live with. When things slow down overnight because of new statistics, the change is very sudden, as in "sudden death". That I cannot live with. When the performance begins to cause concern, I will see what I can do, first on the development system, then the QA system, then the UAT system, using Swingbench, and eventually the change will get into production. Controlling change to the production database is one of the two major duties of any DBA. The second is to ensure data availability and never lose even one bit of data. Change is not good, as the PHB would have you believe. Change is only good if its result is predictable. That is not the case with gathering database statistics. I've seen many situations where an insufficiently thought-out statistics collection caused performance deterioration. Collecting statistics is a big change that potentially has an enormous impact on performance. I don't want to collect statistics automatically any more than I want to give my development department access to production.
I don't understand your reasoning. Would you allow a duhveloper with a cunning plan of the Baldrick type to implement it in production without proper QA? If the answer to that question is a resounding "no", why would you allow a potentially disastrous thing like stats collection to happen without QA, automatically? This is something that every DBA should have in his cubicle, to resist the impulse to change things with the weather or new technology:

search.dilbert.com/comic/You%20Must%20Learn%20That%20Change%20Is%20Good

--
on Mon Apr 16 2012 - 17:47:22 CDT
http://www.orafaq.com/usenet/comp.databases.oracle.server/2012/04/16/0106.htm
Reflection: a Review

First, a quick review of exactly what reflection is and what it is used for. From Part 1, you know that "Reflection is a means of discovering information about objects at runtime and executing against those objects." This functionality is provided in .NET by the System.Reflection namespace. Classes in this namespace, including Assembly, Module, ConstructorInfo, MethodInfo, and others, are used for type discovery and dynamic invocation. Put simply, they allow you both to explore the classes, methods, properties, and fields (types) exposed by an assembly and to create an instance of a type and execute one of its methods (invoke members). While these features are great for run-time object discovery, reflection in .NET does not stop there. Reflection also allows you to build assemblies and create entirely new types at run-time. This is known as reflection emit.

What Is Reflection Emit?

Nested under System.Reflection is the System.Reflection.Emit namespace, which is home to all of the Framework classes that allow you to dynamically build assemblies and types from scratch. Although the very act of generating code on demand is likely a feature that few developers will ever need, it is a credit to the .NET Framework that the tools are there, waiting for the right business problem to solve. Note that reflection emit classes do not generate source code. In other words, your efforts here won't create Visual Basic .NET or C# .NET code. Instead, your reflection emit classes will emit MSIL op codes. As an example, using reflection emit it would be possible to:

- Create a new assembly. (Assemblies can be "dynamic" in that they exist only in memory, or can be persisted to disk.)
- Within the assembly, create a module.
- Within the module, create a type.
- Add properties and methods to that type.
- Generate the code inside of the method or property.
As it turns out, this is actually the exact process you will follow when you use the Reflection.Emit classes to generate code.

The Code Generation Process

Following the steps listed above, let's examine the exact operations that are necessary to build out an assembly. For the purposes of this very basic example, let's assume that you want to build a class called MathOps that has one public method (a function). This function will accept two input parameters as integers and return the sum of those two integers.

Step 1: Building the Assembly

To expand slightly on the steps listed above, the actual process for step 1 looks like this:

- Create an AssemblyName. (This is used to uniquely identify and name the assembly.)
- Grab a reference to the current application domain. (The app domain will provide the actual method that returns an AssemblyBuilder object.)
- Create an AssemblyBuilder instance by calling AppDomain.DefineDynamicAssembly.

To start the assembly generation process you first need to create an AssemblyName instance that you'll use to identify your assembly.

' Create a name for the assembly.
Dim name As New AssemblyName()
name.Name = "MyAssembly"

Next, you need an instance of a System.AppDomain class. You can get this from the currently running (static) thread instance.

Dim ad As AppDomain
ad = Thread.GetDomain()

With these two elements in place you can define an AssemblyBuilder class and create an instance of it using the AppDomain and AssemblyName that you previously created. The AssemblyBuilder class is the work-horse of reflection emit. It provides the primary mechanisms you need to create new assemblies from scratch. You also need to specify an AssemblyBuilderAccess enumeration value that will indicate whether you want the assembly written to disk, saved in memory, or both. In this example, you want to save the assembly in memory.
Dim ab As AssemblyBuilder
ab = ad.DefineDynamicAssembly(name, _
    AssemblyBuilderAccess.Run)

Step 2: Defining a Module

In Step 2 you'll use the ModuleBuilder class to create a dynamic module inside of the assembly that you previously created. ModuleBuilder creates modules inside of an assembly. Calling the DefineDynamicModule method of the AssemblyBuilder object will return an instance of a ModuleBuilder. As with the assembly, you must give the module a name (although here, the name is just a string).

Dim mb As ModuleBuilder
mb = ab.DefineDynamicModule("MyModule")

Step 3: Creating a Type

Now that you have an assembly and a module, you can add your class into the assembly. To build a new type (in your case you're building the MathOps type), you use the TypeBuilder class. You'll use a method from the "parent" object to actually return an instance of the builder object.

Dim theClass As TypeBuilder
theClass = mb.DefineType("MathOps", _
    TypeAttributes.Public)

Note that you've specified the visibility of the type as public by using the TypeAttributes enumeration.

Step 4: Adding a Method

With the class type created, you can now add your method to the class. Call the method ReturnSum, and create it as a public function. Use the MethodBuilder class to specify methods for a specific type. You can create a MethodBuilder instance by calling DefineMethod on your previously created type object. DefineMethod expects four parameters: the method name, any attributes the method might possess (such as public, private, etc.), the return type, and the parameter types. The parameters and return type can be void values in the case of a subroutine. In your case you're creating a function, so you need to specify both the parameter and return type values. To specify the return type, create a type object that holds the return type value (a System.Int32 value).

Dim retType As Type
retType = GetType(System.Int32)

You'll use an array of type values to specify the parameters to the function.
Both of the parameters are Int32 values as well. (Note that the array must be dimensioned before its elements can be assigned.)

Dim parms(1) As Type
parms(0) = GetType(System.Int32)
parms(1) = GetType(System.Int32)

With these items in hand you can now call DefineMethod.

Dim mb As MethodBuilder = _
    theClass.DefineMethod("ReturnSum", _
    MethodAttributes.Public, retType, parms)

Step 5: Generating Code

Since you have, in essence, stubbed out the function in step 4, you now need to add the actual code body to the method. This is really the core of the code generation process with reflection emit. MSIL (Microsoft Intermediate Language) is an intermediate code language that closely resembles assembler. MSIL is consumed by the .NET JIT compiler when it creates native binaries. Op codes are low-level, assembler-like operating instructions. Consider the following implementation of ReturnSum:

Function ReturnSum(ByVal val1 As Integer, _
        ByVal val2 As Integer) As Integer
    Return val1 + val2
End Function

If you want to emit this code, you would first need to figure out how to write this function using only MSIL op codes. Thankfully, there is a quick and easy way to do this. You can simply compile the code and examine the resulting assembly using the .NET Framework ILDASM.exe utility. Compiling the function above yields the following MSIL version:

.method public instance int32 ReturnSum(int32 val1, int32 val2) cil managed
{
  // Code size 9 (0x9)
  .maxstack 2
  .locals init ([0] int32 ReturnSum)
  IL_0000: nop
  IL_0001: ldarg.1
  IL_0002: ldarg.2
  IL_0003: add.ovf
  IL_0004: stloc.0
  IL_0005: br.s IL_0007
  IL_0007: ldloc.0
  IL_0008: ret
} // end of method MathOps::ReturnSum

To emit this code you will use the ILGenerator class. You can retrieve an instance of the ILGenerator class specific to your method by calling the MethodBuilder.GetILGenerator() method. In other words, the following code will obtain an ILGenerator instance for your ReturnSum method.
Dim gen As ILGenerator
gen = mb.GetILGenerator()

Using the generator object you can inject op code instructions into the method. Note that Stloc_0 and Ldloc_0 operate on a local variable, so the local slot must be declared first, and Br_S takes a branch target (a Label) as an operand:

gen.DeclareLocal(GetType(System.Int32))  ' the [0] int32 local

Dim done As Label = gen.DefineLabel()

gen.Emit(OpCodes.Ldarg_1)       ' push val1
gen.Emit(OpCodes.Ldarg_2)       ' push val2
gen.Emit(OpCodes.Add_Ovf)       ' add with overflow checking
gen.Emit(OpCodes.Stloc_0)       ' store the sum in local 0
gen.Emit(OpCodes.Br_S, done)    ' branch to the return sequence
gen.MarkLabel(done)
gen.Emit(OpCodes.Ldloc_0)       ' push the sum back on the stack
gen.Emit(OpCodes.Ret)           ' return it

(The store/branch/load sequence mirrors the compiler-generated IL shown earlier; the same method could be emitted more simply as Ldarg_1, Ldarg_2, Add_Ovf, Ret.)

At this point you've essentially finished creating the function, class, module, and assembly. To get a reference to this class you can make one call to CreateType like this:

theClass.CreateType()

Namespaces and Classes

As you've already seen, the Reflection.Emit namespace contains a core set of "builder" classes that you use to create types and the various attributes, methods, fields, properties, and so on that are associated with the new type. Table 1 describes the primary classes you'll use when emitting code with reflection.

Coupling Emit and Dynamic Invocation

Now that you know how to emit a dynamic assembly using the Reflection classes, let's integrate the concept of dynamic invocation (covered in Part 1 of this article). One example of when to use Reflection and Reflection.Emit is with code or script evaluation at run-time. It would be possible, for instance, to display a Windows form with a text box, ask the user to type in a formula, and then evaluate that formula at run-time through compiled code. Another time to use Reflection.Emit classes is to optimize performance. Code solutions, by design, tend to be generalized solutions to a problem. This is often a good thing from a design perspective because it makes your systems flexible. For instance, if you want to compute the sum of numbers and you don't necessarily know at design time how many values you need to sum, then you may need to use a loop. If you rewrite the ReturnSum function to accept an array of integers, you could loop through the members of that array, add each to a counter value, and then return the sum of all of the values.
This is a nice generic solution since it doesn't care about the total number of values contained in the array.

Public Function ReturnSum(ByVal values() _
        As Integer) As Int32
    Dim i As Int32
    Dim retValue As Int32
    For i = 0 To values.GetUpperBound(0)
        retValue = retValue + values(i)
    Next
    Return retValue
End Function

(Note that the loop runs to GetUpperBound(0), the index of the last element; stopping one short would skip the final value.) On the other hand, if you hard-code the array limit you can return the sum in a more optimized fashion by writing one long math operation statement. For a few values, or even hundreds of values, the difference would be negligible. But if you're dealing with thousands or millions of values, the hard-coded method will be much, much faster. In fact, you can make this solution even faster by pulling out the array values and summing them with straight number addition, while at the same time eliminating any zero values that won't affect the result anyway:

Public Function ReturnSum() As Int32
    Return 9 + 32 + 8 + 1 + 2 + 2 + 90 '...
End Function

The problem here, of course, is that you've written code that is not general and is not flexible. So how do you get the best of both worlds? Answer: Reflection.Emit. By combining Reflection.Emit functionality (such as taking in an array ceiling and array values, and then compiling the code) and Reflection functionality (such as locating, loading, and running the assembly that was emitted), you can craft some pretty ingenious performance solutions while avoiding brittle code. In this simple case you could write a loop that generates the MSIL op codes you need. Consider the following console application that consumes an array and creates a new assembly, module, class, and ReturnSum method that directly sums the array values without resorting to a loop. The code in Listing 1 showcases many of the same concepts you've already seen in terms of using the emit "builder" objects.
Please note some new code that is introduced with the application: Activator.CreateInstance is used to create an instance of the newly created object type, and the InvokeMember method is used to call the ReturnSum method on the type.

Summary

The Reflection and Reflection.Emit namespaces allow programmers to dynamically generate and execute managed code at run-time. They provide a highly specialized set of classes for dealing with a uniquely specialized set of business problems. You can download the code for this article at.
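Reflection.Emit is specific to .NET, but the generate-then-invoke pattern the article describes exists in any language with runtime compilation. As a rough cross-language sketch (names here are illustrative, not from the article), the same loop-free, hard-coded ReturnSum can be built in Python by generating source text at run-time, compiling it, and invoking the result:

```python
# Rough analogue of the article's emit-and-invoke pattern: generate a
# loop-free, hard-coded sum function at run time, compile it, invoke it.
# Function and variable names are illustrative, not from the article.
def build_return_sum(values):
    # Drop zeros (they cannot affect the result) and unroll the loop
    # into one long addition expression, as the article suggests.
    terms = [str(v) for v in values if v != 0] or ["0"]
    src = "def return_sum():\n    return " + " + ".join(terms) + "\n"
    namespace = {}
    exec(compile(src, "<generated>", "exec"), namespace)
    return namespace["return_sum"]

fast_sum = build_return_sum([9, 32, 8, 0, 1, 2, 2, 90])
print(fast_sum())  # same result as summing the array in a loop
```

The generation step pays off only when the compiled function is reused many times, which is the same trade-off the article makes with dynamic assemblies.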
https://www.codemag.com/article/0301051
This blog explains what the different return codes mean in libcouchbase-derived clients (Ruby, PHP, Python, C/C++) and what your app should do when it encounters one.

Couchbase clients are smart. This means that they are cluster-topology aware - if a node fails, the client is aware of this. If a new node is added to the cluster, the client is also aware of this new node. Compared to memcached clients, smart clients provide better performance and availability during failures by automatically routing requests to the correct Couchbase server. Smart clients are either derived from libcouchbase (Ruby, Python, Perl, node.js and PHP) or are written in a native language such as Java and .NET. This blog will cover error codes returned by libcouchbase clients.

Applications use smart client APIs to send requests to Couchbase. Couchbase listens on 4 ports - 11210 (direct port), 11211 (proxy port), 8091 (admin console), and 8092 (Couchbase views). Once connected, clients can send a stream of requests on the same connection to the server. Request messages typically include a memcached binary header of 24 bytes and an optional payload. The length of the payload is specified in the header. Document mutations first go to the object-managed cache and are then replicated and persisted asynchronously. Couchbase returns a success (0) or failure (non-zero error code) as soon as the document is added to the object-managed cache. The return code indicates the overall success or failure of the API call. It is good programming practice to check error codes in your application so that you know when to retry the request or throw a user error.
The table below lists error codes in Couchbase and suggests what clients should do if they encounter an error that is listed. If you're using a client derived from libcouchbase, such as Ruby, Python, Perl, node.js or PHP, the following are some of the error codes you might see.

After receiving an error code from Couchbase, applications can convert these codes into a string - as shown in the example code snippet below, lcb_strerror() returns a 0-terminated string representing the error.

#include <libcouchbase/couchbase.h>

No matter how confident you are, you cannot predict every problem you might encounter. Without a mechanism to check for errors, you might think that Couchbase does not work properly. Make sure you check for error codes in your app so that you know why errors occur. Good luck building your apps on Couchbase!
https://blog.couchbase.com/handling-runtime-errors-ruby-python-and-c-clients/
Hey, I don't know if I should open another thread or just post my question here. But oh well, here you go. I am also doing Tkinter at the moment and I am just reading for the most part. But I don't understand how to make a function and then assign the variables. For example:

### This is a script I got from a website.

from Tkinter import *

class Application(Frame):
    def say_hi(self):
        print "hi there, everyone!"

    def say_bye(self):
        print "Bye everyone."

    def createWidgets(self):
        self.QUIT = Button(self)
        self.QUIT["text"] = "QUIT"
        self.QUIT["fg"] = "red"
        self.QUIT["bg"] = "blue"
        self.QUIT["command"] = self.quit
        self.QUIT.pack({"side": "left"})

        self.hi_there = Button(self)
        self.hi_there["text"] = "Hello"
        self.hi_there["command"] = self.say_hi
        self.hi_there.pack({"side": "left"})

    def __init__(self, master=None):
        Frame.__init__(self, master)
        self.pack()
        self.createWidgets()

app = Application()
app.mainloop()

###

These functions do not have anything to assign, but how would I make it so that the user assigns the values and then I apply them to my function? For example:

class Plus:
    def plusx(x, y):
        print x + y
    # How would i assign these variables
    def __init__(self):
        ...
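A minimal sketch of one way the Plus question above could be answered: pass the values in when the object is created, bind them to self in __init__, and read them back inside the method. (This sketch uses Python 3 syntax; the thread's original code is Python 2.)

```python
# One way to wire up the Plus example from the question: values are
# assigned in __init__ and used inside the method via self.
# (Python 3 syntax; the thread's original code is Python 2.)
class Plus:
    def __init__(self, x, y):
        self.x = x   # store whatever the caller supplies
        self.y = y

    def plusx(self):
        return self.x + self.y

p = Plus(3, 4)
print(p.plusx())  # prints 7
```

The same idea carries over to the Tkinter script: a button's command can call a bound method, and that method reads whatever state was stored on self.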
http://forums.devshed.com/python-programming/229950-tkinter-last-post.html
Chart::XMGR - object for displaying data via XMGR

haven't worked out how to do it anyway!). This means that only TYPE XY is supported. For an anonymous pipe, 3 or more columns can be supplied along with the graph type. The default option is to use the named pipe.

The following drawing options are available. The options are case-insensitive and minimum match.

Controls the linestyle. Allowed values are none(0), solid(1), dotted(2), dashed(3), dotdash(4). Default is solid.

Controls the line colour. LINECOLOR is an allowed synonym. Allowed values are white, black, red, green, blue, yellow, brown, gray, violet, cyan, magenta, orange, indigo, maroon, turqse and green4 (plus numeric equivalents: 0 to 15). Default is black.

Width of the line. Default is 1.

Governs whether the area inside the line is filled. Default is none.

Governs the symbol type. 46 different types are supported (see the XMGR options for the mapping to symbol type). Basic symbol types are available by name: none, dot, circle, square, diamond, triangleup, triangleleft, triangledown, triangleright, plus, X, star. Default is circle.

Colour of the symbol. See LINECOLOUR for a description of the available values. Default is red.

Symbol size. Default is 1.

Governs whether symbols are filled (1), opaque (0) or have no fill (none(0)). Default is filled.

Set whether to autoscale as soon as the set is drawn. Default is true.

Type of data set. Allowed values are as for XMGR: XY, XYDX, XYDY, XYDXDX, XYDYDY, XYDXDY, XYZ, XYRT.

A simplified non-object-oriented interface is provided. These routines are exported into the caller's namespace by default.

A simplified interface to plotting on XMGR. Select the current set (integer 0->). Detach XMGR from the pipe. This returns control of XMGR to the user. Print arbitrary commands to XMGR. The @ symbol is prepended to all commands and a newline is appended.

The following methods are available. Constructor. It is used to launch the new XMGR process and returns the object.
- Return the file handle (of type IO::Pipe) associated with the external XMGR process.
- Return the options object associated with the XMGR object.
- Returns the name of the pipe associated with the object.
- Returns whether an XMGR process is currently attached to the object. Can also be used to set the state.
- Returns (or sets) the current set.
- Returns (or sets) the current graph.
- Turns the debug flag on or off. If on (1), all commands sent to XMGR are also printed to STDOUT. Default is false.
- Method to print XMGR commands to the attached XMGR process. Carriage returns are appended. '@' symbols are prepended where necessary (needed for anonymous pipes, not for named pipes).
- Print numbers to the pipe. No @ symbol is prepended. For named pipes we must use the POINT command. For anonymous pipes the data is just sent as is (so multiple columns can be supplied).
- Selects the current graph in XMGR.
- Method to plot XY data. Multiple arguments are allowed (in addition to the options hash). This routine plots the supplied data as the currently selected set on the current graph. The interpretation of each input argument depends on the set type (specified as an option: SETTYPE). For example, 3 columns can be translated as XYDY, XYDX or XYZ. No check is made that the number of arguments matches the selected type. Array references can be substituted for PDLs. The options hash is assumed to be the last argument.

    $xmgr->plot(\@x, \@y, { LINECOL => 'red' } );
    $xmgr->plot($pdl);

- Process the options hash and send to XMGR. This sends the options for the current set.
- Forces XMGR to redraw.
- Kill a set. If no argument is specified the current set is killed; else the specified set (integer greater than or equal to 0) is killed.
- Instruct XMGR to autoscale. Autoscale on the specified set (defaults to the current set).
- Set the world coordinates.
- Set the current graph's viewport (where the current graph is displayed).
- Set the graphtype.
Allowed values are: 'XY', 'BAR', 'HBAR', 'STACKEDBAR', 'STACKEDHBAR', 'LOGX', 'LOGY', 'LOGXY', 'POLAR', 'SMITH'.

Configure the current set:

    $xmgr->configure(SYMBOL=>1, LINECOLOUR=>'red');

Close the pipe without asking XMGR to die. This can be used if you want to leave XMGR running after the object is destroyed. Note that no more messages can be sent to the XMGR process associated with this object.

Destructor for object. Sends the 'exit' message to the XMGR process. This will fail silently if the pipe is no longer open (e.g. if the detach() method has been called).

An example program may look like this:

    use Chart::XMGR;
    use PDL;

    $a = pdl( 1,4,2,6,5 );
    $xmgr = new Chart::XMGR;
    $xmgr->plot($a, { SYMBOL => 3,
                      LINECOL => 'red',
                      LINESTYLE => 2,
                      SYMSIZE => 1 } );
    $xmgr->configure(SYMCOL => 'green');
    $xmgr->detach;

If PDL is not available, then arrays can be used:

    use Chart::XMGR;

    @a = ( 1,4,2,6,5 );
    $xmgr = new Chart::XMGR;
    $xmgr->plot(\@a, { SYMBOL => 3,
                       LINECOL => 'red',
                       LINESTYLE => 2,
                       SYMSIZE => 1 } );
    $xmgr->configure(SYMCOL => 'green');
    $xmgr->detach;

The XMGR home page is at. This module is designed to be used with XMGR version 4 (tested on v4.1.2). This module will probably not work with GRACE (XMGR version 5). The PDL::Options module is required. This is available as part of the PDL distribution () or separately in the PDL directory on CPAN. For versions 0.90 and earlier this module was known by the name Graphics::XMGR (not on CPAN).

Copyright (C) Tim Jenness 1998,1999 (t.jenness@jach.hawaii.edu). All rights reserved. This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
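The pipe protocol described above (an '@' prefix plus a trailing newline for control commands, bare numbers for data points) is easy to mimic in any language. A minimal Python sketch of that framing; the helper names are mine, not part of Chart::XMGR:

```python
def frame_command(cmd: str) -> str:
    """Frame an XMGR control command for the anonymous-pipe interface:
    prepend '@', append a newline, as the documentation above describes."""
    return "@" + cmd + "\n"

def frame_point(x: float, y: float) -> str:
    """Data values are sent as-is (no '@' prefix), one point per line."""
    return f"{x} {y}\n"

# Example: select graph 0, configure set 1, then send one data point.
msgs = [
    frame_command("with g0"),
    frame_command("s1 linestyle 1"),
    frame_point(1.0, 4.0),
]
```

Writing these strings to the pipe of a running xmgr process should have the same effect as the corresponding module methods.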
At work our ISP has given us a block of IPs and I was wondering how I go about mapping them. Currently our setup is that the ISP's fiber goes into our router, which is then routed throughout the office. They have given us the following information...

new route x.x.x.x/28
Subnet is: x.x.x.x/28
next-hop x.x.x.x
Usable IP Range: x.x.x.x - x.x.x.x
Gateway: x.x.x.x
Broadcast: x.x.x.x
Subnetmask: 255.255.255.240

I was wondering how I go about mapping this information to our router. I'm not expecting anyone to give me a step-by-step guided tour, but I'd like to know what information I should be looking to set in the router's configuration. Once set up, is it possible to allow a computer on the network behind the firewall to utilize all of these IPs for binding requests to?

EDIT: The end goal is to have one server that can bind outgoing requests to different IPs assigned to me from the ISP. One and only one server needs this and it will be behind the router. It's currently running Windows Server 2008. It still needs to access internal network resources. So I don't need a specific IP to be routed to a computer internally. I just want to be able to bind a request to one of the IPs given to me by my ISP. The router in question is a Cisco/Linksys RV016. You can find a firmware emulator here... ui.linksys.com/files/RV016/1.2.3

It sounds like you're looking to use one of your static IPs as the main public IP that outbound Internet traffic for your office uses, and then map a separate one of your public static IPs for use by the server? If that is the case, you'll just want to create a "one-to-one" NAT entry in the firewall. Using the emulator link you provided, it would be under Setup -> One-to-One NAT. This basically associates one of the external IPs to an internal IP address. Assuming your ISP is providing you a "bridged" connection (so that you can assign the public IP info directly to your firewall), you'll basically plug the IP info into your firewall.
Assuming you can access the Internet, the router will typically route all traffic outbound via the IP address you assign to it. If you have one-to-one NAT enabled, those hosts should communicate via a secondary IP. Does that answer your question?

Assuming you are given 10.1.1.0/28 by your ISP (used 0 to save calculation overhead :)), with gateway 10.1.1.1, broadcast 10.1.1.15, and netmask 255.255.255.240: you can give IP address 10.1.1.2 (the first available IP) to the router and set the router's default route to the gateway, that is, 10.1.1.1. This would allow the router to access the Internet. Try pinging IPs like 4.2.2.2 from the router and you should be able to see a response. Once that is done you can configure NAT on the router so that requests for IP 10.1.1.2 get forwarded to the internal IP of the server. This is a bit vague; you would have to learn how to do the above things yourself.

Well, first the router is going to need an IP address, so you should configure it for a static IP. I would recommend you keep NAT going for security reasons, and then if you need to forward outside traffic, set up a pinhole through NAT. Your router should let you map IPs to inside NAT addresses if you need multiple services (like RDP) going to multiple clients. Otherwise you would set up the router just as a gateway and either set up DHCP to assign those addresses out or set up internal static IPs on the machines with the router as the gateway.

Are these bits of a Cisco config of any use to you at all?

!
ip dhcp pool <pool name>
 import all
 network 10.0.0.0 255.255.0.0
 dns-server <dns servers>
 default-router 10.0.0.1
 lease 0 2
!
!
ip domain lookup source-interface Dialer0
ip name-server <ns1>
ip name-server <ns2>
interface ATM0
 no ip address
 no atm ilmi-keepalive
 dsl operating-mode ansi-dmt
!
interface ATM0.1 point-to-point
 pvc 0/38
  encapsulation aal5mux ppp dialer
  dialer pool-member 1
!
!
!
interface Dot11Radio0
 no ip address
 ip nat inside
 ip virtual-reassembly
!
 ssid <wireless name>
  vlan 10
  authentication open
  guest-mode
!
 speed basic-1.0 2.0 5.5 6.0 9.0 11.0 12.0 18.0 24.0 36.0 48.0 54.0
 no cdp enable
!
interface Dot11Radio0.1
 encapsulation dot1Q 1 native
 ip address 10.0.0.1 255.255.0.0
 ip nat inside
 ip virtual-reassembly
 no cdp enable
!
interface Dialer0
 ip address negotiated
 ip nat outside
 ip virtual-reassembly
 encapsulation ppp
 dialer pool 1
 dialer-group 1
 no cdp enable
 ppp authentication chap callin
 ppp chap hostname <chap username>
 ppp chap password 7 <chap password>
!
ip classless
ip route 0.0.0.0 0.0.0.0 Dialer0
!
ip nat pool NAT_GROUP <starting IP> <ending IP> netmask 255.255.255.240
ip nat inside source list 10 pool NAT_GROUP
!
access-list 10 permit 10.0.0.0 0.0.255.255
dialer-list 1 protocol ip permit

This is the config used for an ADSL router that has been allocated a bunch of public IP addresses. These addresses are then NAT'ed 1:1 to an address within the 10.0.0.0/16 range on the wireless interface. This may, or may not, give you something to start with. Not knowing the make of router in use, it's hard to tell :)

From your question it looks like you are not 100% familiar with IP addressing. (Or I got you wrong.) You got only one network from your ISP, so those IP addresses have to be within the same Layer 2 domain. But what you want is to have different networks, like this:

internal network -> Firewall -> external network 1 -> Router -> external network 2 -> Internet

The external network 2 is the one provided by the ISP. If you want to put the firewall within this network then you have to place a switch in front of the router and connect the firewall directly to it. But then the question is why you still need the router. If you want to have the router in there you will need to either use NAT (to hide external network 1, which will then become an internal network) or you have to ask your ISP to give you a second IP range.
To clarify my wording:

internal network = private IP addresses/range
external network = public IP addresses/range

And the last question: by "firewall" do you mean the server you are mentioning later? Or how many devices should have public IP addresses at the end?

I have a simple solution. If you are given a pool of 8 IPs, six of them are usable for you; for example, if you are given xxx.xxx.xxx.76 - xxx.xxx.xxx.83, then xxx.xxx.xxx.76 and xxx.xxx.xxx.83 are not usable for you. If you have a PPPoE connection with your ISP, configure your router like this:

Connection type: PPPoE
Username: xxxxxxxxxxx
Password: xxxxxxxxxx

LAN Configuration
LAN IP: xxx.xxx.xxx.77
Subnetmask: xxx.xxx.xxx.xxx
DHCP: Disabled

Now take a wire out from your router, insert it into a LAN card on your server, and give that LAN card this IP:

IP: xxx.xxx.xxx.79
Subnet: xxx.xxx.xxx.xxx
Gateway: xxx.xxx.xxx.77
DNS: xxx.xxx.xxx.77
DNS2: (your ISP's DNS) if required

In the same way you can use the rest of your IPs if you like.
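For readers who want to sanity-check the /28 arithmetic discussed in the question and answers above, Python's standard ipaddress module does it mechanically. The block below uses the documentation range 203.0.113.0/28 as a stand-in for the ISP-assigned block (a /28 yields 16 addresses, 14 of them usable; the "8 IPs, six usable" figure in the last answer corresponds to a /29 instead):

```python
import ipaddress

# Stand-in for the ISP-assigned block; 203.0.113.0/24 is reserved for documentation.
net = ipaddress.ip_network("203.0.113.0/28")

print(net.netmask)            # the subnet mask the ISP quoted
print(net.network_address)    # not usable for hosts
print(net.broadcast_address)  # not usable for hosts

hosts = list(net.hosts())     # the usable addresses between network and broadcast
print(hosts[0], hosts[-1])
```

Running this prints 255.255.255.240 for the mask and a usable range of .1 through .14, matching the "Usable IP Range / Gateway / Broadcast / Subnetmask" fields in the question.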
Hi experts, I am confused about a point I read in my book:

I. In a source file you can define a number of classes, but only one of them can be a public class. In this case, the name of the source file must match the name of the public class.

II. In a source file, you can define a number of classes with the default access specifier.

As I understand it, one class may be public (that is the startup class), and one or more classes can be package-private (default).

Question 1. Suppose I have coded two classes in a source file, say in package X, and I want to use both of them in package Y. As far as I know, to use a class from outside its package we have to make it public! Do I have to put the two classes in different source files, or what?

Thanks in advance for your cooperation all the way… Jiten
From MarkMail, a database of some 60 million messages, with enhanced search capabilities driven by semantics. (The third demo was a built-from-the-ground-up semantic application.) Since then, several folks have asked about the code behind the demo. What I showed was a fully operational MarkMail instance, including millions of email messages. This was understandably quite expensive to keep up on AWS, and it went away shortly after the keynote. A huge part of the demo was showing operation at scale, but reading between the lines, what folks are more interested in is something more portable: a way to see the code in operation and play with it without having to stand up an entire cluster or go through a lengthy setup procedure. Space won't allow for a full semantics tutorial here. For that, a good resource is this free tutorial from MarkLogic University. So, in this posting, let's recreate something on a similar level using built-in features. We'll use the Oscars sample application that ships with the product. To get started, create an Application Builder sample project and deploy it. We'll call the relevant database names 'oscar' and 'oscar-modules' throughout. Since Application Builder ships with only a small amount of data, you may also want to run the sample Information Studio collector that will fetch the rest of the dataset. Before we can query, we need to actually turn on the semantics index. The easiest place to do this is on the page at. Select the oscar database and hit configure. On the page that comes up, tick the box for Semantics and wait for the yellow flash.

Semantic Data

This wouldn't be much of a semantic application without triple data. Entire books have been written on this kind of data modeling, but one huge advantage of semantics is that there's lots of data already set up and ready to go. We'll use dbpedia. The most recent release as of this writing is version 3.9.
From there we'll grab data that looks relevant to the Oscar application: anything about people and/or movies, picking and choosing from things most likely to have relevant facts: In all, around 38 million triples, not even enough to make MarkLogic break a sweat, but still a large enough chunk to be inconvenient to download. The oscar data and dbpedia ultimately derive from the same source, Wikipedia itself, and since the oscar data preserved URLs it was straightforward to extract all triples that had a matching subject, once prefixed with "". I extracted all these triples: Grab them from here and put it somewhere on your local system. Then simply load these triples via query console. Point the target database to 'oscar' and run this:

import module namespace sem="" at "MarkLogic/semantics.xqy";
sem:rdf-load("/path/to/oscartrips.ttl")

Infopanel widget

So an 'infopanel' is what, in the MLW demo, showed the Hadoop logo, committers, downloads, and other facts about the current query. The default oscar app already has something like this: widgets. Let's create a new widget type that looks up and displays facts about the current query. To start, if you haven't already, build the example application in App Builder. There's some excellent documentation that walks through this process. Put on your Front End Dev hat and let's build a widget. All the code we will use and modify is in the oscar-modules database, so either hook up a WebDAV server or copy the files out to your filesystem to work on them. Back in AppBuilder on the Assemble page, click the small X at the upper-right corner of the pie chart widget. This will clear space for the widget we're about to create, specifically in the div <div id="widget-2" class="widget widget-slot">.
The way to do this is to modify the file application/custom/app-config.js. All changes to files in the custom/ directory will survive a redeployment in AppBuilder, which means your changes will be safe, even if you need to go back and change things in Application Builder.

function infocb(dat) {
  $("#widget-2").html("<h2>Infopanel</h2><p>The query is " +
    JSON.stringify(dat.query) + "</p>");
};
var infopanel = ML.createWidget($("#widget-2"), infocb, null, null);

This gives us the bare minimum possible widget. Now all that's left is to add semantics.

Hooking up the Infopanel query

We need a semantic query, the shape of which is: "starting with a string, find the matching concept, and from that concept return lots of facts to sift through later". And we have everything we need at hand with MarkLogic 7. The REST endpoint, already part of the deployed app, includes a SPARQL endpoint. So we need to make the new widget fire off a semantic query in the SPARQL language, then render the results into the widget. One nice thing about the triples in use here is that they consistently use the foaf:name property to map between a concept and its string label. So pulling all the triples based on a string-named topic works like this. Again, we'll use Query Console to experiment:

import module namespace sem = "" at "/MarkLogic/semantics.xqy";

let $str := "Zorba the Greek"
let $sparql := "
  prefix foaf: <>
  construct { ?topic ?p ?o }
  where {
    ?topic foaf:name $str .
    ?topic ?p ?o .
  }
"
return sem:sparql($sparql, map:entry("str", $str))

Here, of course, to make this Query Console runnable we are passing in a hard-coded string ("Zorba the Greek") but in the infopanel this will come from the query. Of course, deciding what parts of the query to use could be quite an involved process. For example, if the query included [decade:1980s] you can imagine all kinds of interesting semantic queries that might produce useful and interesting results.
But to keep things simple, we will look for only a single-word query, which includes quoted phrases like "Orson Welles". Also in the name of simplicity, the code sample will only use a few possible predicates. Choosing which predicates to use, and in what order to display them, is a big part of making an infopanel useful. Here's the code. Put this in config/app-config.js:

function infocb(dat) {
  var qtxt = dat.query &&
    dat.query["word-query"] &&
    dat.query["word-query"][0] &&
    dat.query["word-query"][0].text &&
    dat.query["word-query"][0].text._value;
  if (qtxt) {
    $.ajax({
      url: "/v1/graphs/sparql",
      accepts: { json: "application/rdf+json" },
      dataType: "json",
      data: {
        query: 'prefix foaf: <> ' +
               'construct { ?topic ?p ?o } ' +
               'where ' +
               '{ ?topic foaf:name "' + qtxt + '"@en . ' +
               '?topic ?p ?o . }'
      },
      success: function(data) {
        var subj = Object.keys(data); // ECMAScript 5th ed, IE9+
        var ptitle = "";
        var pdesc = "";
        var pthumb = "";
        var title = "-";
        var desc = "";
        var thumb = "";
        if (data[subj]) {
          if (data[subj][ptitle]) {
            title = data[subj][ptitle][0].value;
          }
          if (data[subj][pdesc]) {
            desc = "<p>" + data[subj][pdesc][0].value + "</p>";
          }
          if (data[subj][pthumb]) {
            thumb = "<img style='width:150px; height:150px' src='" +
                    data[subj][pthumb][0].value + "'/>";
          }
        }
        $("#widget-2").html("<h2>" + title + "</h2>" + desc + thumb);
      }
    });
  } else {
    $("#widget-2").html("no data");
  }
};
var infopanel = ML.createWidget($("#widget-2"), infocb, null, null);

This works by crafting a SPARQL query and sending it off to the server. The response comes back in RDF/JSON format, with the subject as a root object in the JSON, and each predicate against that subject as a sub-object. The code looks through the predicates and picks out interesting information for the infopanel, formatting it as HTML. I noted in working on this that many of the images referenced in the dbpedia image dataset actually return 404 on the web. If you are not seeing thumbnail images for some queries, this may be why.
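The heart of the widget is the SPARQL string it concatenates around the user's query term. The same construction can be sketched in Python; the build_sparql helper is hypothetical, and the foaf prefix URI is spelled out here whereas the listing above has it elided. As with the JavaScript, a production version should escape or bind qtxt rather than splicing it into the query:

```python
def build_sparql(qtxt: str) -> str:
    """Build the CONSTRUCT query the infopanel widget posts to the
    SPARQL endpoint: match a topic by foaf:name, return all its triples.
    (Hypothetical helper; mirrors the string concatenation above.)"""
    return (
        'prefix foaf: <http://xmlns.com/foaf/0.1/> '
        'construct { ?topic ?p ?o } '
        'where '
        '{ ?topic foaf:name "' + qtxt + '"@en . '
        '?topic ?p ?o . }'
    )

q = build_sparql("Orson Welles")
```

The resulting string would be sent as the `query` form parameter, exactly as the `data:` option does in the `$.ajax` call above.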
An infopanel implementation can only be as helpful as the data underneath. If anyone knows of more recent data than the official dbpedia 3.9 data, do let me know.

Where to go from here

I hope this provides a base upon which many developers can play and experiment. Any kind of app, but especially a semantic app, comes about through an iterative process. There's a lot of room for expansion in these techniques. Algorithms to select and present semantic data can get quite involved; this only scratches the surface. The other gem in here is the widget framework, which has actually been part of all Application Builder apps since MarkLogic 6. Having that technology as a backdrop made it far easier to zoom in and focus on the semantic technology. Try it out, and let me know in the comments how it works for you.
#include <string.h> The output_string class is used to represent the processing required to send output to a string. Definition at line 29 of file string.h. Reimplemented from output. Definition at line 33 of file string.h. [virtual] The destructor. Definition at line 22 of file string.cc. [private] The constructor. It is private on purpose; use the create class method instead. Definition at line 27 of file string.cc. The default constructor. Do not use. The copy constructor. Do not use. [static] The create class method is used to create new dynamically allocated instances of this class. Definition at line 35 of file string.cc. [protected, virtual] The get_column method is used to obtain the current output column position. Implements output. Definition at line 79 of file string.cc. The get_line_width method is used to obtain the width of lines in the output. This may vary depending on what filtering is present. Definition at line 71 of file string.cc. The mkstr method is used to obtain the value of the output as a string. Definition at line 63 of file string.cc. The assignment operator. Do not use. The put method is used to write a character to the output file. Each derived class is required to override this with their own implementation. Definition at line 42 of file string.cc. The acc instance variable is used to remember the text of the output. Definition at line 84 of file string.h. The column instance variable is used to remember the present column position of the write location. Definition at line 78 of file string.h. The linlen instance variable is used to remember the maximum desired output line width. Definition at line 72 of file string.h.
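As a rough illustration of the interface documented above (a put method that appends characters, a column counter, a line-width setting, and mkstr to read back the accumulated text), here is a loose Python analogue. The column-tracking behaviour (reset on newline, advance otherwise) is an assumption on my part; the C++ docs do not spell it out:

```python
class OutputString:
    """Loose Python analogue of the C++ output_string class:
    accumulates output in a string and tracks the current column."""

    def __init__(self, line_width: int = 80):
        self.acc = ""             # accumulated text (the 'acc' instance variable)
        self.column = 0           # current output column (the 'column' variable)
        self.linlen = line_width  # maximum desired line width (the 'linlen' variable)

    def put(self, ch: str) -> None:
        """Write one character; assumed to reset the column on newline."""
        self.acc += ch
        self.column = 0 if ch == "\n" else self.column + 1

    def get_column(self) -> int:
        return self.column

    def get_line_width(self) -> int:
        return self.linlen

    def mkstr(self) -> str:
        """Return the accumulated output as a string."""
        return self.acc

out = OutputString()
for ch in "hi\nok":
    out.put(ch)
```

After the loop, mkstr() returns the full text and get_column() reflects only the characters written since the last newline.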
By Jim Sukha (Intel)

Last time, I introduced DotMix, a deterministic parallel random-number generator (DPRNG) for Intel® Cilk™ Plus. To briefly summarize the last post, DotMix is a new contribution that you can download from our Contributed Code section. I previously mentioned two advantages that DotMix has over a traditional serial random-number generator (RNG) such as rand():

- DotMix avoids some of the scalability bottlenecks of a traditional serial RNG, and
- DotMix generates random numbers deterministically for most Cilk Plus programs when using the same seed and program input.

Today, I will discuss these advantages in greater detail, and explain why the problem of deterministic random-number generation for Cilk Plus turns out to be more complicated than one might initially expect.

Origins of DotMix

DotMix is a code developed by Tao B. Schardl and myself, with input from Charles E. Leiserson. DotMix comes out of some recent research done at MIT. For those of you who are not familiar with the history of Cilk, Cilk Plus is a descendant of the Cilk project from MIT in the mid 90's. Since that time, there has been a long tradition of Cilk-related research coming out of MIT, particularly from the SuperTech research group, headed by Charles E. Leiserson. In general, at any point in time, there are a number of ongoing research projects relevant to the Cilk Plus community, happening both at MIT and elsewhere. But I am able to tell you the most details about DotMix, since I had the opportunity to participate in this particular research effort. Before diving in any further, it is useful for me to give you a bit of background about myself. My day job is Intel® Cilk™ Plus runtime developer, which means I am one of a number of Intel employees whose job it is to make sure that Cilk Plus works as it should, and to help fix it when it doesn't.
Thus, you could say that one of my long-time hobbies is writing Cilk code and experimenting with new ways to make Cilk better. In today's post, I'll remove my Intel® Cilk™ Plus runtime developer hat, put on my Cilk researcher and individual contributor hat, and tell you the story of DotMix. To the best of my knowledge, the research into what eventually led to DotMix began sometime around 2010, while I was a student at MIT. As those of you who have worked on similar projects may be aware, it is often the case with any kind of research that the origins of any particular idea or research topic are often impossible to attribute to any one person or event. Moreover, the journey one takes towards a solution is inevitably more convoluted than it "should have been" in hindsight. Nevertheless, for the purposes of telling this story, I will describe a relatively straightforward path to the solution we (my coauthors at MIT and I) ended up with. I will also pretend that our investigations began with a variant of our favorite Cilk program: fib. For the latter, I suspect that I'm not really stretching the truth too far. :) In particular, we started with (the conceptual equivalent of) the following Cilk Plus program.

#include <cstdio>
#include <cstdlib>
#include <cilk/cilk.h>
#include <cilk/reducer_opadd.h>

// Global reducer to add up random numbers.
cilk::reducer_opadd<uint64_t> sum(0);

int rand_fib(int n) {
    if (n < 2) {
        // Generate a random number, add it to sum.
        sum += rand() * n;
        return n;
    } else {
        int x, y;
        sum += rand() * n;
        x = cilk_spawn rand_fib(n-1);
        sum += rand() * n;
        y = rand_fib(n-2);
        cilk_sync;
        sum += rand() * n;
        return x+y;
    }
}

int main(void) {
    int n = 30;
    int ans = rand_fib(n);
    std::printf("fib(%d) = %d\n", n, ans);
    std::printf("sum is %llu\n", sum.get_value());
    return 0;
}

In this program, the rand_fib() method computes the nth Fibonacci number, but also uses a (pseudo)random-number generator (RNG) to generate a random number at various points in the computation. These random numbers are multiplied by the current value of n, and added together into the reducer sum. We would like this program to run efficiently in parallel. After all, the whole point of parallelizing a program is to make it run faster! On the other hand, we might also want the program to generate deterministic output, that is, output the same final sum on each run of the program. Using a typical implementation of rand(), this program has deterministic output for a serial execution, since rand() generates a deterministic stream of random numbers. For a parallel execution of this program, however, we don't always get deterministic behavior. As an aside, the multiplication of the random number by n is significant here. If we simply added all the random numbers together, we might still get deterministic output because integer addition is commutative! I'm sure many of you have experienced firsthand the frustration that stems from trying to track down a bug in a program that you can't consistently reproduce because the program behaves differently every time you run it. All other things being equal, for a parallel execution it would be nice to have a deterministic parallel random-number generator — an RNG that generates the same random numbers each time you run the code, even in parallel. Using such an RNG eliminates a source of nondeterminism in a parallel program, which I assert makes parallel programming easier.
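The determinism problem is visible even without real threads: with a single shared stream RNG, the value a given call sees depends only on how many calls happened before it, so two runs that interleave the strands differently produce different weighted sums. A small Python sketch, where a seeded random.Random stands in for the shared stream and the weights play the role of n in rand_fib:

```python
import random

def weighted_sum(order):
    """Draw from ONE shared stream RNG; 'order' is the interleaving in
    which the strands (each labelled with its weight n) consume the stream."""
    rng = random.Random(42)  # shared stream RNG with a fixed seed
    return sum(n * rng.random() for n in order)

run1 = weighted_sum([3, 2, 1])  # one possible scheduling of the strands
run2 = weighted_sum([1, 2, 3])  # another scheduling of the same strands
# Same seed, same multiset of strands, different interleaving:
# the weighted sums differ, even though each run by itself is repeatable.
```

This is exactly the behavior described above for a locked shared RNG: each individual run is well-defined, but the output depends on the order in which strands reach the generator.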
From this point forward, I'm going to use the acronym defined earlier: DPRNG. I claim this acronym rolls off the tongue a bit more easily than "deterministic parallel random-number generator." :) Can we achieve both good performance and deterministic execution? As we thought about the problem, we realized that achieving both goals simultaneously was trickier than we first thought.

Stream RNGs and Parallel Loops

To understand some of the issues, it is helpful to review how a typical RNG such as rand() is implemented. RNGs like rand() are typically stream RNGs — RNGs designed to output a stream of numbers, one at a time, based on the generator's internal state. Seeding the RNG initializes the RNG's internal state. After every call to rand(), the RNG transforms this state, and then uses it to output the next number in the stream. We can be slightly more mathematical in this description. Let x_i be the internal state of the RNG after the ith call to rand(), and suppose each call to rand() uses some function f to transform the internal state. Then, a sequence of n calls to the RNG transforms the internal state into x_n as follows:

x_0
x_1 = f(x_0)
x_2 = f(x_1) = f(f(x_0))
x_3 = f(x_2) = f(f(f(x_0)))
x_4 = f(x_3) = f(f(f(f(x_0))))
...
x_n = f(x_{n-1}) = f^(n)(x_0)

Over the years, many people have put significant effort into designing good RNGs, and the functions used as f can get pretty complicated. I won't pretend I understand enough about them to explain all the details. Fortunately, understanding the exact function f is not actually important for our discussion. The important thing that the math tells us is that for a generic f, there is a serial chain of dependencies to compute x_n from x_0. With that fact in mind, let's consider what happens when we use a stream RNG in a simple parallel loop. In the following program, each iteration of the parallel loop generates a random number and uses it to update a reducer sum.
#include <cstdio>
#include <cstdlib>
#include <cilk/cilk.h>
#include <cilk/reducer_opadd.h>

int main(void) {
    cilk::reducer_opadd<int> sum(0);
    int n = 1000000;
    cilk_for(int i = 0; i < n; ++i) {
        sum += (i+1) * rand();
    }
    std::printf("Final sum = %d\n", sum.get_value());
    return 0;
}

How does this program behave? One of two scenarios seems likely, depending on the specific implementation of rand().

- Suppose the rand() implementation uses a single stream RNG for all worker threads. In this case, multiple worker threads in Cilk are trying to manipulate the RNG's internal state concurrently to generate a random number. Consider two subcases.
  - If the implementation of rand() is not properly synchronized, then there is a race condition, since multiple worker threads are trying to change the RNG's internal state concurrently. According to the manpage on my Linux desktop, the function rand() is not thread-safe, so technically the original rand_fib() function has a bug!
  - If the implementation of rand() were thread-safe, e.g., the internal state is protected by a lock, then the program should run correctly. But in a parallel execution, the internal RNG state that iteration i sees for its call to rand() depends on the order that the iterations manage to acquire the lock, and this order is likely to vary between program runs. Thus, this approach does not guarantee deterministic output.

  In both subcases, the cilk_for loop above is likely to exhibit either no speedup or even slowdown when executed using more than one worker.

- Suppose the rand() implementation maintains a separate stream RNG for each worker thread, for example, in thread-local storage. This implementation avoids the performance bug of the previous scenario, since each thread has its own private RNG state. But it still has the problem that the random numbers that are generated are nondeterministic, since the runtime scheduler may execute a given iteration of the cilk_for loop on a different worker between different runs.
To give a concrete example, in two consecutive runs of the same Cilk Plus program on 3 workers, the mapping of iterations to worker threads could be:

Run 1:
  Iterations 0 to 499999 on worker 0
  Iterations 750000 to 999999 on worker 1
  Iterations 500000 to 749999 on worker 2

Run 2:
  Iterations 0 to 249999 on worker 0
  Iterations 500000 to 749999 on worker 1
  Iterations 750000 to 999999, 250000 to 499999 on worker 2

The nondeterminism in the second case comes from the combination of dynamic load balancing with thread-local storage. In particular, the Cilk Plus runtime uses a randomized work-stealing scheduler, where one worker can steal work from another when it runs out of work to do. This scheme has the advantage of efficient scheduling even when the work of each loop iteration varies significantly. The tradeoff, however, is that with dynamic load balancing, the identity of the worker that executes a given iteration now varies from run to run in our sample program. As an aside, this nondeterministic behavior is exactly why we should generally frown upon the use of thread-local storage in Cilk Plus. By default, thread-local storage in Cilk Plus is, in some sense, "worker-local" storage, because the state gets associated with a given worker thread. The use of worker-local storage exposes the underlying nondeterminism in the runtime scheduler to the programmer writing a Cilk Plus application, something that the design of Cilk Plus specifically tries to avoid. But I digress, as this topic is best left for another day...

Returning to the story of DotMix, after some investigation of the simple parallel loop, we concluded that these two implementations of rand() left us with nondeterministic Cilk Plus programs, a situation that was less than ideal. Could we make these programs deterministic?

Strand-local RNGs
In the previous code example, each iteration of the cilk_for loop represents a separate strand, a serial chain of instructions without any parallel control in it. Moreover, each strand has an obvious name that does not change between program runs — the value of the loop index for the iteration. If we create a new RNG for each loop iteration, with the loop index being the RNG's seed, then each iteration uses an RNG whose identity is independent of the worker the Cilk Plus runtime used to execute that iteration. Thus, we get a deterministic parallel RNG for the parallel loop.

This notion could be generalized to an arbitrary Cilk Plus program by creating a new RNG for every strand. As long as you can find a unique and deterministic name for every strand to use as the initial seed for the RNG, you get a deterministic RNG. One can think of this approach as using a "strand-local" RNG, since each strand has a separate RNG.

As we were investigating prior work on the topic of parallel random-number generation, we discovered that this notion of a strand-local RNG was conceptually equivalent to solutions that were already implemented and used in practice. For instance, SPRNG, a popular library for parallel random-number generation, provides an API for essentially spawning new RNGs. If we spawned a new RNG for every cilk_spawn statement in rand_fib, then we would get a deterministic parallel RNG. Problem solved... right? :)

As you might guess, the answer was "not quite." Using strand-local RNGs leads to a number of issues in Cilk Plus, but the biggest one we encountered was overhead. Creating a new stream RNG can be an expensive operation, possibly much more expensive than the overhead of a cilk_spawn operation itself. For a program like rand_fib, creating a new RNG for each strand of execution is analogous to going on a vacation and buying a new car at my destination instead of renting one at the airport.
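The per-iteration seeding idea can be sketched in plain, standard C++ (no Cilk here; draw_for_iteration and the use of std::mt19937 are my own illustrative choices, not part of the post's library):

```cpp
#include <cassert>
#include <cstdint>
#include <random>

// Deterministic per-iteration random number: the RNG's identity is the loop
// index, so the value does not depend on which worker executes the iteration.
std::uint32_t draw_for_iteration(std::uint64_t i) {
    std::mt19937 rng(static_cast<std::mt19937::result_type>(i)); // seed = loop index
    return rng(); // first number from iteration i's private stream
}

// Two different "schedules" (forward and reverse order) see identical
// per-iteration values, so any reduction over them is deterministic.
bool schedule_independent(std::uint64_t n) {
    std::uint64_t forward = 0, backward = 0;
    for (std::uint64_t i = 0; i < n; ++i)  forward  += draw_for_iteration(i);
    for (std::uint64_t i = n; i-- > 0; )   backward += draw_for_iteration(i);
    return forward == backward;
}
```

Whatever order the iterations run in, the sum comes out the same, which is exactly the determinism property the strand-local scheme is after.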
While it might be nice to have my own car while I am traveling, it is not going to be worth the cost unless I am going to be on vacation for a really long time. The situation with a typical stream RNG is analogous. A good stream RNG is designed to generate a long stream of n random numbers, where n is usually measured in at least millions or billions. On the other hand, a typical Cilk Plus program like rand_fib might have millions of strands, but with each strand asking for at most ten or a hundred random numbers. Creating and seeding a new stream RNG for each strand can be quite expensive, and most of its capabilities go to waste. In parallel programs where users create and manage pthreads explicitly, creating a new stream RNG for each pthread is usually affordable, since creating a new pthread is relatively expensive. In contrast, strands in Cilk Plus are created at every cilk_spawn or cilk_sync, and the cost of creating a new RNG is often unacceptable. Using a new stream RNG for every strand works, but it does not really use the RNG in the way it was originally meant to be used.

Hashing for DPRNGs

After that long explanation, I've finally arrived at the approach we settled upon for implementing a DPRNG in Cilk Plus. We decided to implement the DotMix DPRNG based on two ideas: hashing and pedigrees. Hashing enables DotMix to generate random numbers for strands in a Cilk Plus program. Consider a special case, where every strand only needs to generate a single random number. (One can conceptually break a strand into multiple strands if more than one random number is required.) Instead of creating a new stream RNG for each strand, one can instead generate a random number by applying a "good" hash function to the strand. At the extreme, one could imagine using a cryptographic hash function, like a variant of SHA or MD5, to generate a number that looks sufficiently random.
Alternatively, using a simpler hash function might produce numbers that are "less random", but might be more efficient to compute. But what does it mean to hash a "strand"? That is where pedigrees come in. Pedigrees provide the key to guaranteeing deterministic execution for DotMix. Suppose that the Cilk Plus runtime provided support for figuring out the pedigree of the currently executing strand — a unique name for that strand that remains the same between different runs of the program. Then, one can generate a random number by hashing the pedigree of the strand. Conceptually, the idea of pedigrees seems fairly simple and intuitive. The hard part is, of course, figuring out exactly what the pedigrees should be... more on this topic in a bit.

Once we had a way to assign pedigrees to strands, hashing pedigrees gave us a way to implement an efficient DPRNG for Cilk Plus. Our DPRNG is called DotMix because it hashes a pedigree by computing a dot product of the pedigree with entries from a small table of pre-generated random numbers, and then mixes the result to output a random number. In our paper, we show that the hash function satisfies some nice mathematical properties, namely that the probability of getting collisions is reasonably small. We also ran some tests on DotMix's hash function using the Dieharder test suite, and observed that DotMix seems to generate random numbers of reasonably high quality. If your application has stringent requirements on random-number quality, then DotMix may or may not be appropriate. But DotMix's hash function is intended to be more than good enough for codes where you simply need output that "looks" random. If you don't want to take our word for it that a DPRNG via hashing is a viable approach, I would encourage you to look at this recent paper on parallel random-number generation, which won the Best Paper Award at Supercomputing 2011.
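As a toy illustration of that dot-product-plus-mix structure (the table constants and the mixing step below are made-up stand-ins; they are not DotMix's actual parameters and carry none of its collision guarantees):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Toy pedigree hash: dot product of the pedigree terms with a fixed table
// of "random" 64-bit constants, followed by a final mixing step.
std::uint64_t hash_pedigree(const std::vector<std::uint64_t>& pedigree) {
    static const std::uint64_t table[8] = {
        0x9e3779b97f4a7c15ULL, 0xbf58476d1ce4e5b9ULL, 0x94d049bb133111ebULL,
        0xd6e8feb86659fd93ULL, 0xa3b195354a39b70dULL, 0x1b03738712fad5c9ULL,
        0xe7037ed1a0b428dbULL, 0x8ebc6af09c88c6e3ULL,
    };
    std::uint64_t acc = 0;
    for (std::size_t i = 0; i < pedigree.size() && i < 8; ++i)
        acc += pedigree[i] * table[i];                        // the dot product
    acc ^= acc >> 33; acc *= 0xff51afd7ed558ccdULL; acc ^= acc >> 33;  // mix
    return acc;
}
```

The same pedigree always hashes to the same value, while pedigrees that differ in even one term land somewhere else, which is the behavior a pedigree-hashing DPRNG relies on.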
This work, which was done independently from our research at about the same time, discusses ways to generate random numbers by hashing counter values. This approach is conceptually equivalent to hashing a pedigree that is simply the loop index in a parallel loop. Their paper focused more on the question of picking good hash functions, as opposed to describing a way to maintain pedigrees in programs with complex parallelism structure. There is more that one might say about hashing for a DPRNG, but I will refer the interested reader to the relevant papers for details. Instead, I'll spend the remainder of this post talking about pedigrees, since in my (obviously biased) opinion, pedigrees are the more exciting concept for Cilk Plus. :)

Pedigrees in Cilk Plus?

For programs whose parallelism has a nice, predictable structure, there are often obvious ways to conceptually assign pedigrees to strands. For example, in a simple cilk_for loop where each iteration requires a single random number, one can use the loop index as the pedigree. In a slightly more complicated case, if iteration i requires n_i random numbers, the pedigree could be a pair of numbers — the loop index i, plus the count of how many random numbers we've taken from iteration i thus far. This latter case conceptually requires two terms in the pedigree instead of one, because in a parallel execution we need to be able to figure out the pedigree of each iteration i independently, without knowing exactly how many random numbers will be needed in other iterations.

For a Cilk Plus program with a more complicated parallelism structure (e.g., rand_fib), how can we define pedigrees? Recall the sample rand_fib program using DotMix that I showed last time (also repeated in the figure below). Notice that the code above makes no mention of pedigrees anywhere, which means that Cilk Plus and the DotMix library are secretly working to maintain and utilize pedigrees under the covers. How does it all work?
Since this post is long enough already, I will actually postpone that discussion until next time. Instead, I'll conclude with an open-ended puzzle. The figure below shows a dag (directed acyclic graph) representing the calls to the DotMix DPRNG in an execution of rand_fib(4). Each letter conceptually represents a call to generate a random number. For example, A is the first call to get() in rand_fib(4) before the cilk_spawn of rand_fib(3), and K is the call to get() immediately after the cilk_spawn. Each call to get() needs to read in a unique but deterministic pedigree. How might we assign pedigree values to each letter? In general, many answers are possible. Next time, I'll describe the answer we ended up with. Keep in mind, however, that a good scheme for assigning pedigrees should not introduce additional dependencies into the dag below. For example, the pedigree value for K should not depend on how many calls to get() happen within rand_fib(3). For those of you who want the answers now, you can always check our paper or other documentation. :)

Summary

The DotMix deterministic parallel random-number generator (DPRNG) avoids the performance problems and nondeterminism that you can run into when using a traditional serial stream-based random-number generator in Cilk Plus. DotMix generates deterministic random numbers by hashing pedigrees, names for strands in the execution of a Cilk Plus program. Stay tuned for my next post, where I will talk more about pedigrees and how you can use them in Cilk Plus! For more information about Intel Cilk Plus, see the website. For questions and discussions about Intel Cilk Plus, see the forum.
https://software.intel.com/en-us/articles/a-dprng-for-cilk-plus
#include <sysc/kernel/sc_spawn.h>

This templated helper class allows an object to provide the execution semantics for a process via its () operator. An instance of the supplied execution object will be kept to provide the semantics when the process is scheduled for execution. The () operator does not return a value. An example of an object that might be used for this helper function would be a void SC_BOOST bound function or method. This class is derived from sc_process_host and overloads sc_process_host::semantics to provide the actual semantic content.

sc_spawn_object(T object, const char* name, const sc_spawn_options* opt_p)
    This is the object instance constructor for this class. It makes a copy of the supplied object. The tp_call constructor is called with an indication that this object instance should be reclaimed when execution completes.
        object = object whose () operator will be called to provide the process semantics.
        name_p = optional name for object instance, or zero.
        opt_p -> spawn options or zero.

virtual void semantics()
    This virtual method provides the execution semantics for its process. It performs a () operation on m_object.

Definition at line 75 of file sc_spawn.h.
Definition at line 77 of file sc_spawn.h.
Definition at line 81 of file sc_spawn.h.
Definition at line 87 of file sc_spawn.h.
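Stripped of SystemC, the pattern is just a host class with a virtual hook that a templated subclass forwards to a stored copy of a callable. This plain C++ analogy (all names here are mine, not SystemC's) mirrors how sc_spawn_object keeps a copy of the execution object and overrides semantics():

```cpp
#include <cassert>

// Minimal analogy of sc_process_host / sc_spawn_object<T>: semantics() is
// the hook a scheduler would call, and the templated subclass forwards it
// to a stored copy of the execution object's () operator (no return value).
struct process_host {
    virtual ~process_host() {}
    virtual void semantics() = 0;
};

template <typename T>
struct spawn_object : process_host {
    explicit spawn_object(T object) : m_object(object) {} // keeps a copy
    void semantics() override { m_object(); }
private:
    T m_object;
};

// A callable whose () operator provides the "process body".
int g_count = 0;
struct bump {
    void operator()() { ++g_count; }
};
```

When the scheduler runs the process, it simply calls semantics() on the host, and the stored functor's operator() executes.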
http://www.cecs.uci.edu/~doemer/risc/v030/html_oopsc/a00198.html
I've made a good first attempt at writing a HAMMER filesystem recovery directive for the hammer utility. It is now in HEAD. It works on the filesystem image (similar to the 'show' directive):

    hammer -f <device> recover <empty_target_dir>

This is not a fsck. The filesystem image is only scanned, not modified. The reconstructed filesystem is stored in the target directory.

Currently the directive has some limitations. It does require the volume header to be intact, it doesn't deal with hardlinks (only one namespace will be restored), and it does not try to recover the uid, gid, modes, or times. Only files and directories are recovered. Softlinks are ignored at the moment. Data CRCs are not validated at the moment. Most of these limitations can be overcome with more work.

--

The actual operation of the recover directive is very cool. Apart from needing the volume header, it just scans the image straight out looking for blocks that contain B-Tree nodes. It then scans those nodes looking for inode, directory-entry, and file-data records and creates them on the fly (piecemeal) in the <target_dir>. Files and directories are initially named after their object id, but as the operation continues and more fragmentary information is recovered, the program can reconstruct the actual file and directory names and the actual directory hierarchy, which it does by renaming the temporarily named and placed files and directories as appropriate, on the fly.

HAMMER might never have an fsck command but we do now have a 'recover' feature and I expect it will get better and better as time passes.

-Matt
Matthew Dillon <dillon@backplane.com>
http://leaf.dragonflybsd.org/mailarchive/users/2010-08/msg00035.html
Bug #4994: DelegateClass don't find extern global public method in 1.9.2

Description

How to reproduce:

require 'delegate'
require 'pp'

def test_that?(str)
  str.size > 0
end

class String2 < DelegateClass(String)
  def initialize(*param)
    @s = String.new(*param)
    super(@s)
  end

  def dummy
    test_that?(@s)
  end
end

s2 = String2.new("pipo")
pp s2.dummy

The code above works under 1.9.1 and 1.8 but not under 1.9.2:
- ruby1.9.1 -v => ruby 1.9.1p378 (2010-01-10 revision 26273) [x86_64-linux]
- ruby1.8 -v => ruby 1.8.7 (2010-01-10 patchlevel 249) [x86_64-linux]
- ruby -v => ruby 1.9.2p180 (2011-02-18 revision 30909) [x86_64-linux]

error message:

!ruby draft/tdelegate.rb
draft/tdelegate.rb:15:in `dummy': undefined method `test_that?' for "pipo":String2 (NoMethodError)
    from draft/tdelegate.rb:21:in ` '
shell returned 1

History

#1 Updated by Nobuyoshi Nakada over 2 years ago

- Status changed from Open to Rejected

It's a spec change for .
- Delegator now tries to forward all methods as possible,
- but not for private methods, and
- test_that? is a private method.

Probably, it would need a way to tell how the method is called in method_missing.

#2 Updated by Sylvain Viart over 2 years ago

Sorry, but links are in Japanese. I can read the code, but not why the DelegateClass shouldn't search the toplevel method, any more? Could you translate or post a link to an English doc? For the correction you suggest, I've wrote this code: I don't like this usage as a Delegation. May be I missed something.
Edited: Wrong solution, for this method_missing() see comment after

require 'delegate'
require 'pp'

def test_that?(str)
  str.size > 0
end

class String2 < DelegateClass(String)
  def initialize(*param)
    @s = String.new(*param)
    super(@s)
  end

  def dummy
    test_that?(@s)
    # this method is really missing
    bla()
  end

  def method_missing(m, *args, &block)
    begin
      Object.send(m, *args, &block)
    rescue NameError => e
      # doesn't work with NoMethodError, it loops
      raise "no method found: '#{m}'"
    end
  end
end

s2 = String2.new("pipo")

# test Delegated method
pp s2.size

# call with method_missing()
pp s2.dummy

output (ruby -v ruby 1.9.2p180 (2011-02-18 revision 30909) [x86_64-linux]):

:!ruby draft/tdelegate.rb
4
draft/tdelegate.rb:24:in `rescue in method_missing': no method found: 'bla' (RuntimeError)
    from draft/tdelegate.rb:20:in `method_missing'
    from draft/tdelegate.rb:16:in `dummy'
    from draft/tdelegate.rb:32:in ` '

#3 Updated by Sylvain Viart over 2 years ago

The Issue topic could be rewritten: "DelegateClass don't lookup toplevel method in 1.9.2"

Reading and patching the delegate.rb: I've found that it's related to BasicObject's behavior. DelegateClass somewhat inherit from BasicObject, not Object. This issue follow the same pattern as #3768. The documentation should be updated how to fix that (toplevel method resolution). But may be, I'm still miss something about the new Spec about DelegateClass.

#4 Updated by Sylvain Viart over 2 years ago

Fixed DelegateClass with method_missing(), somewhat ugly right?
require 'delegate'

def hello
  :hello
end

class MyInt < DelegateClass(Integer)
  def initialize(value)
    @i = value
    super(@i)
    # I want some toplevel here
    @hello = hello()
  end

  def method_missing(m, *args, &block)
    if __getobj__.respond_to?(m)
      __getobj__.__send__(m, *args, &block)
    else
      Object.send(m, *args, &block)
    end
  end
end

ii = MyInt.new(2)
puts "should call Integer#to_si: #{ii}"
puts "ii.class=#{ii.class}"

output (ruby -v ruby 1.9.2p180 (2011-02-18 revision 30909) [x86_64-linux]):

should call Integer#to_si: 2
ii.class=MyInt
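Since Delegator inherits from BasicObject from 1.9.2 on, a top-level method (which is a private instance method of Object) is simply not visible inside a DelegateClass subclass. One more explicit workaround (my own sketch, not an official fix; shout? and String3 are made-up names) is to route the call through an Object deliberately:

```ruby
require 'delegate'

# A top-level def becomes a *private* instance method of Object, which a
# BasicObject-based Delegator subclass cannot see through normal lookup.
def shout?(str)
  str.upcase
end

class String3 < DelegateClass(String)
  def initialize(s)
    @s = s
    super(@s)
  end

  def dummy
    # Explicitly reach the top-level method via an Object receiver
    # (send bypasses the private visibility) instead of relying on
    # implicit method lookup, which Delegator no longer provides.
    ::Object.new.send(:shout?, @s)
  end
end
```

String3.new("pipo").dummy then returns "PIPO" regardless of which Object methods Delegator hides.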
https://bugs.ruby-lang.org/issues/4994
11-01-2012 09:59 AM

So how do you tell the so-called IDE to exclude a C++ file from compilation? And NO, the 'Exclude resource from build' property for the file does not work...

11-01-2012 10:12 AM

Hi - could you explain exactly what you are trying to achieve? You include files with the #include directive. If you don't #include it - it will be excluded. You can always use other pre-processor directives to create conditional compiles .. e.g. #ifdef ... #endif which will be evaluated before the compilation actually starts. Hope this helps a bit.

11-01-2012 10:47 AM

We have a lot of C code shared across multiple platforms, multiple projects etc. The code is split into various folders on disk + under version control. Not all the code in a particular folder is applicable to every app. In effect it's a library of source code where individual apps pick and choose which bits they need depending on the functionality of that app.

Because we are dealing with Eclipse, it thinks that practically any file on your hard disk should be part of a project. It therefore decides all by itself that every single file in a folder pointed at must be part of your project - even if you only want a subset of the files. In the case of regular Eclipse you can right click the file and set the 'Exclude resource from build' property for the file. This means it won't compile the file even though it's listed in the Project Explorer. Not great but it works.

In Momentics-Eclipse you can right click the file, set the Exclude from build, and it completely ignores the setting. When you build it insists on compiling the file even though you know: i) it's not going to work, ii) you don't want it compiled, iii) you told the IDE not to compile it, etc. So how do you tell Momentics NOT to compile some/any files that happen to exist in a folder in the build of a project?
11-01-2012 10:58 AM

Ah - I think I understand the issue now - thanks. I think the problem is not in Momentics, but rather in the .pro file which, by default, has all headers and sources included from the main src folder. This is actually what gets used when you make a build. When you open the .pro file - you will see something like

INCLUDES+= /src/*.h

which will include all .h files from that folder. So - you have to do this per project and then you should be able to use INCLUDES+= and INCLUDES-= to achieve what you want. Not elegant - I know - struggling myself with Eclipse. I wish the BB10 team opted for a version of QtCreator. I think it's a much nicer IDE and with the Qt backing in BB10 anyway - it would have been a great choice.

11-01-2012 01:10 PM

Here is the example in my project:

INCLUDEPATH += ../src
SOURCES += ../src/*.cpp ../src/model/*.cpp ../src/service/*.cpp ../src/qjson/*.cpp ../src/qjson/*.cc
HEADERS += ../src/*.hpp ../src/*.h ../src/model/*.hpp ../src/service/*.h ../src/qjson/*.h ../src/qjson/*.yy

11-01-2012 02:11 PM

So basically - you would group the files logically into folders, then in your .pro file - include or not include the entire folder. That should solve the problem.

11-01-2012 02:50 PM

Well no, not really. Folder content is primarily driven by 'project', which has nothing to do with Eclipse/Momentics etc, and often very little to do with an end product. This in turn is managed by our version control. Changing folder content and/or re-directing files from version control to other folders for a specific product, whilst technically possible, is highly undesirable.

As far as I can see, what we actually need to do is simply list the files explicitly in the .pro file that we want to be compiled. Assuming that works, that's fine + no different to marking files as Excluded from build in the IDE.
We made the fundamentally flawed assumption that Eclipse/Momentics was vaguely sensible.... + that because Momentics is Eclipse it would work vaguely the same -

11-01-2012 02:58 PM

Listing files individually should certainly work. The .pro files are used in standard Qt development and I think that's where those are coming from. In Qt - a .pro file can include other .pro files. So traditionally - what I have been doing in my Qt projects regarding source code libraries is create a .pro file for each "library", then in the main .pro file - you would include the libraries you need which, in turn, will include the files needed. May be something to look at, but at least now you should know why the files are included and how to remedy the problem. Hope my answers helped.
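For what it's worth, the "list the files explicitly" approach the thread converges on looks like this in a qmake-style .pro file (the paths and file names here are illustrative, not taken from the posters' projects):

```
INCLUDEPATH += ../src

# Explicit file list: only these sources get compiled, regardless of what
# else happens to live in the shared folders.
SOURCES += \
    ../src/main.cpp \
    ../src/service/mailer.cpp

HEADERS += \
    ../src/service/mailer.h

# Or group shared code into its own include file and pull it in per product:
include(../libs/json.pri)
```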
https://supportforums.blackberry.com/t5/Native-Development/How-do-you-exclude-a-C-file-from-the-build-in-Momentics/m-p/1972703
The Managed UxTheme assembly is a .NET wrapper around Windows XP's Theme API. It can be used safely from C#, on any Windows platform that supports the .NET framework. In addition to exposing the UxTheme API, it also exposes the static data in TmSchema.h (part of the Platform SDK) that is used to define what window classes can be themed, the parts of those classes, and the states that each part may have a custom look for. Windows XP uses a C-style DLL to expose its theme functionality named UxTheme.dll (the "Ux" stands for User eXperience). At my day job, I am working on a couple of .NET custom controls written in C#. Of course one of the requirements for these controls is that they use Windows XP themes when appropriate, and mirror the user's current theme settings. The problem with UxTheme.dll is that it is only available on Windows XP, so any P/Invoke code that tries to use it directly will fail on other versions of Windows. Pierre Arnaud has written a C++ wrapper DLL that can be safely called via P/Invoke on any .NET capable version of Windows. His implementation uses David Y. Zhao's nice little C++ wrapper class that dynamically links to UxTheme.dll, while also providing safe fail-thru implementations of each method, if it can't load the UxTheme library. I like Pierre's solution and started out by using it. It didn't however provide me with the level of functionality that I need from UxTheme.dll. My next thought was to extend his work and continue using P/Invoke from C#. I have however been looking for an excuse to do something more than the "Hello World!" in Managed C++, and this seemed like the perfect opportunity. This implementation also uses David's wrapper class (as do almost all of the examples of XP theme related code on CodeProject. It really is a nice piece of work!). There are two discrete parts of this assembly from the outside perspective. The first is the UxTheme class. This class provides a managed wrapper around an HTHEME handle. 
It exposes all of the methods on the UxTheme DLL that take an HTHEME as instance methods. Methods that do not take a theme handle are exposed as static. Anything it is possible to do using the DLL's API should be possible through this class.

Parallel to this is an object hierarchy that wraps the property table data created by TmSchema.h and Schemadefs.h. These two files are part of the platform SDK and define the data model for which parts of the Windows UI can be themed. This object hierarchy starts with the ThemeInfo class. This class contains some simple meta-data about the current theme, but also contains a collection of WindowThemes. A WindowTheme represents data about the parts of a particular window class. The WindowTheme class has collections of ThemeParts and ThemePartStates.

A ThemePart represents a discrete part of a window class. The down button, for instance, is part of the ScrollBar window class. Each WindowTheme and ThemePart can have 0 to n states. ThemePartStates are things like normal, disabled, or hot. The WindowTheme class has an UxTheme property that can be used to get an instance of the UxTheme class specific to the window class. ThemePart and ThemePartStates also contain some instance methods so that they can be rendered directly into a graphics context.

The UxTheme class can be used without the ThemeInfo hierarchy, but the hierarchy does put a more OO face on the whole thing.

Using the code is pretty straightforward. All of the classes are in the namespace System.Windows.Forms.Themes. The only publicly creatable class is ThemeInfo. To drill your way down through the window classes, their parts, and states, create an instance of ThemeInfo and start looking through its collection of WindowThemes on down. You can get an instance of UxTheme either from an instance of WindowTheme, or by calling either of the static methods UxTheme::OpenTheme or UxTheme::GetWindowTheme.

Make sure that you look at UxTheme::IsAppThemed in your code before trying to use any of the other methods of UxTheme. This will tell you whether the current OS supports themes and, if so, whether it is currently themed. And don't forget that this can change throughout the lifetime of your application, because the user can turn off themes at any time.

The assembly does have a strong name, so you can put it in the GAC if you want. The demo project contains a re-implementation of David's Theme Explorer application, written in C# and using the managed API. It also includes a slight re-work of the TabPage compatible controls that Pierre has made available. They are included merely as a demonstration of how this assembly can be used from custom control implementations.

Managed C++ DLLs have some interesting constraints about entry points. Since this DLL uses the C-Runtime it is a mixed mode DLL. It is compiled with the /NOENTRY linker flag. Assemblies linked with this flag do not have an explicit entry point, preventing the initialization of any static data aside from simple, integral types. It would be nice to be able to use the CVisualStylesXp class as a static instance, but because the runtime does not initialize it, this isn't possible.

Each instance of UxTheme creates a member pointer to an instance of the CVisualStylesXp class. UxTheme's static methods create an instance of this class locally. The result of this is a lot of calls to LoadLibrary and FreeLibrary. Luckily Windows keeps a reference count on dynamically loaded DLLs, so this shouldn't pose a significant performance hit. Be that as it may, it is something that bugs me about my implementation.
If anybody's got a good way to load and free the UxTheme DLL just once, and avoid creating so many instances of the C++ wrapper class, I'd love to hear it. MSDN recommends implementing explicit Initialize and Deinitialize methods, but I don't like forcing that on users of this assembly.
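One conventional answer is a function-local static wrapper: the library is loaded lazily on first use and released exactly once at process exit by a static object's destructor, so callers never see explicit Initialize/Deinitialize calls. This is only a sketch of the general pattern, not tied to UxTheme.dll; the stub load/free functions below stand in for LoadLibrary/FreeLibrary:

```cpp
#include <cassert>

// Stand-ins for LoadLibrary/FreeLibrary so the sketch is portable and testable.
static int g_loads = 0;
void* stub_load_library()      { ++g_loads; return &g_loads; }
void  stub_free_library(void*) { }

// Loaded lazily on first call, freed exactly once at process exit by the
// static object's destructor -- no explicit Initialize/Deinitialize needed.
class theme_library {
public:
    static void* handle() {
        static theme_library instance;   // constructed on first use only
        return instance.m_handle;
    }
private:
    theme_library()  : m_handle(stub_load_library()) {}
    ~theme_library() { if (m_handle) stub_free_library(m_handle); }
    void* m_handle;
};
```

However many callers hit handle(), the load happens once; the DLL reference count never churns the way per-instance wrappers do.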
http://www.codeproject.com/Articles/4771/A-Managed-C-Wrapper-Around-the-Windows-XP-Theme-AP?msg=1626474
On Fri, 2003-01-03 at 08:42, Roland Caßebohm wrote:
> Hi,
>
> I'm using the OpenBSD stack.
>
> I have the following problem:
> The write() function returns EINVAL, if the TCP connection is close from the
> client.
> Is this right?

What would you have it return? Note that this value (EINVAL) is unchanged from what the actual BSD code uses (in other words, I didn't change it for eCos). Also note that the new stack (the FreeBSD one) handles this differently.

> The following code in tcp_usrreq() generates the error:
>
> inp = sotoinpcb(so);
> /*
>  * When a TCP is attached to a socket, then there will be
>  * a (struct inpcb) pointed at by the socket, and this
>  * structure will point at a subsidary (struct tcpcb).
>  */
> if (inp == 0 && req != PRU_ATTACH) {
>     splx(s);
>     /*
>      * The following corrects an mbuf leak under rare
>      * circumstances
>      */
>     if (m && (req == PRU_SEND || req == PRU_SENDOOB))
>         m_freem(m);
>     return (EINVAL); /* XXX */
> }

-- 
.--------------------------------------------------------.
| Mind: Embedded Linux and eCos Development               |
|--------------------------------------------------------|
| Gary Thomas   email: gary.thomas@mind.be                |
| Mind ( )      tel: +1 (970) 229-1963                    |
| gpg:                                                    |
'--------------------------------------------------------'
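Whatever the exact errno a given stack chooses, the practical lesson is the same: always check write()'s return value on sockets and pipes, because the peer can go away at any time. A portable POSIX sketch (using a pipe whose read end has been closed, as a stand-in for a dead TCP connection; standard systems report EPIPE here, where the old eCos/OpenBSD stack returned EINVAL):

```c
#include <assert.h>
#include <errno.h>
#include <signal.h>
#include <unistd.h>

/* Returns the errno produced by writing to a pipe whose read end has been
 * closed -- the pipe analogue of writing to a connection the peer closed. */
int write_after_peer_close(void)
{
    int fds[2];
    if (pipe(fds) != 0)
        return -1;
    signal(SIGPIPE, SIG_IGN);      /* report the error instead of dying */
    close(fds[0]);                 /* the "peer" goes away */
    errno = 0;
    if (write(fds[1], "x", 1) == -1) {
        int saved = errno;         /* EPIPE on POSIX systems */
        close(fds[1]);
        return saved;
    }
    close(fds[1]);
    return 0;
}
```

Robust callers treat any write() error on a connection as "peer is gone" and tear the connection down, rather than assuming a particular errno value.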
https://sourceware.org/pipermail/ecos-discuss/2003-January/018650.html
Getting Started

This guide will walk you through the process of adding routing to a fresh create-react-app project, using Navi.

Basic Components

Navi's <Router> component lies at the top of most Navi apps. It lets you add declarative, asynchronous routing to your React app. To get started, just render a <Router> in the index.js file generated by create-react-app. You can declare your routes with Navi's mount(), route() and other matcher functions — like so:

import { lazy, mount, route } from 'navi'
import { Router } from 'react-navi'

// Define your routes
const routes = mount({
  '/': route({
    title: 'My Shop',
    getData: () => api.fetchProducts(),
    view: <ShopLandingPage />,
  }),
  '/products': lazy(() => import('./productsRoutes')),
})

// Pass them to a `<Router>`
<Router routes={routes}>
  ...
</Router>

Ok, so you've got a <Router> with a couple routes, including a shop's landing page and a lazily loadable /products URL. Next step: you need to decide where to render the current route's view element. And to do that, you just plonk down a <View /> somewhere inside your <Router>; it could, for example, go inside a <Layout> component that renders some <Link> elements.

import { View } from 'react-navi'

ReactDOM.render(
  <Router routes={routes}>
    <Layout>
      <View />
    </Layout>
  </Router>,
  document.getElementById('root')
)

Simple, huh? But waaait a minute… what if you view the lazily loadable /products URL? Then the route will be loaded via an import(), which returns a Promise, and so at first there'll be nothing to render. Luckily, React's new <Suspense> feature lets you declaratively wait for promises to resolve. So just wrap your <View> in a <Suspense> tag, and you're off and racing!

Routing Hooks

Did you notice how your route defined a getData() function along with its view?

route({
  title: 'My Shop',
  getData: () => api.fetch('/products'),
  view: <Landing />,
})

How do you access the data? With React hooks!
Navi's useCurrentRoute() hook can be called from any function component that is rendered within the <Router> tag. It returns a Route object that contains everything that Navi knows about the current URL — including any data you've declared on your routes.

Ok. So far, so good. But imagine that you've just clicked a link to /products — which is dynamically imported. It's going to take some time to fetch the route, so what are you going to display in the meantime? Well, the first option is to use <Suspense> to show a fallback while it's loading. But this has the problem of jumping around all over the place as it hides and shows loading indicators. What you'd really like to do is to display a loading bar over the current page while the next route loads… well, unless the transition only takes 100ms. Then you probably just want to keep displaying the current page until the next one is ready, because showing a loading bar for only 100ms doesn't look great either.

So how do you show a loading indicator, but only when the route doesn't load instantly? This would usually be incredibly complicated to accomplish, but Navi makes it simple: you just use the useLoadingRoute() hook. Here's how it works:

- useCurrentRoute() returns the most recent completely loaded route.
- useLoadingRoute() returns any requested-but-not-yet-completely-loaded route. Or, if the user hasn't just clicked a link, it returns undefined.

Simple, huh?

Error Boundaries

One of the things about asynchronous data and views is that sometimes they fail to load. Luckily, React has a great tool for dealing with things that throw errors: Error Boundaries. Let's rewind for a moment to the <Suspense> tag that wraps your <View />. When <View /> encounters a not-yet-loaded route, it throws a promise, which effectively asks React to please show the fallback for a moment. You can imagine that <Suspense> catches that promise, and then re-renders its children once it resolves.
Similarly, if <View /> finds that getView() or getData() have failed, then the View component will throw an error. In fact, if the router encounters a 404-not-found error, then <View /> will throw that, too. These errors can be caught by Error Boundary components. For the most part, you’ll need to make your own error boundaries, but Navi includes a <NotFoundBoundary> that can catch Not Found errors for you and show a fallback message instead.

What’s Next?

You can get a long way with Navi by just using these components and a few basic matcher functions. However, if you take a look underneath all the components, Navi is actually written in pure JavaScript — that’s why it’s split into two packages:

- navi contains a pure JavaScript implementation of the core routing code
- react-navi gives you a thin wrapper around the core that’s designed specifically for React

If you want to understand the full set of possibilities that Navi opens up, you’ll want to understand the core concepts of Navi itself — requests, routes, and matchers. As it happens, you’ve already run into each of these concepts in this page — and we’ll cover them in a moment. But first, let’s take a brief look at URL parameters.
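The useCurrentRoute()/useLoadingRoute() behavior described above can be sketched without React at all. The following is a simplified, framework-free model of those semantics, written for this guide as an illustration — it is not Navi's actual implementation, and createRouteModel is a name invented here:

```javascript
// A tiny model of the hook semantics: `currentRoute` always holds the most
// recent *fully loaded* route, while `loadingRoute` holds a
// requested-but-not-yet-loaded route (or undefined when nothing is pending).
function createRouteModel() {
  let currentRoute = undefined;
  let loadingRoute = undefined;

  return {
    // A navigation starts: the new route begins loading, but the current
    // (loaded) route is left untouched, so the UI can keep showing it —
    // perhaps with a loading bar rendered on top.
    startNavigation(route) {
      loadingRoute = route;
    },
    // The route's data/view finished loading: it becomes current, and
    // there is no longer a loading route.
    finishNavigation() {
      currentRoute = loadingRoute;
      loadingRoute = undefined;
    },
    useCurrentRoute() { return currentRoute; },
    useLoadingRoute() { return loadingRoute; },
  };
}

const model = createRouteModel();
model.startNavigation({ url: '/' });
model.finishNavigation();
model.startNavigation({ url: '/products' }); // dynamic import in flight
console.log(model.useCurrentRoute().url);    // '/'
console.log(model.useLoadingRoute().url);    // '/products'
```

While /products is still loading, the "current" route is still /, which is exactly why you can keep rendering the old page and show an indicator only if the load takes noticeable time.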
https://frontarm.com/navi/en/guides/getting-started/
Using Files in Your Interfaces is Not a Good Idea

I've been using some proprietary frameworks and libraries that use File objects, and only File objects, everywhere in their interfaces. It's not that it's bad to have methods accepting File objects, but it's not wise to accept only that. For example, the DOM/SAX XML parsers in the JDK accept File objects but also accept other sources like InputStream, URLs, etc. The Properties class doesn't even accept File objects. In fact, the File class is not very popular in the interfaces of the good Java libraries around, including the JRE. The reason is simple: InputStream is much more abstract than File. When you specify, for instance[1], an interface like this:

public void sendMail(String email, String message, File attachment);

you have constrained the user to specify the path of an existing File on the local disk. Now, imagine that your application has to send an email about a critical error happening in production to notify the system administrators, and you want to include the stacktrace of the Exception to help trace the problem. The only option you have is to print the stacktrace into a temporary file, call the method, and then delete the File. That's because an attachment is "content", not a File. Also the configuration of an application is "content", not a File. An XML file and your application icons as well are not just Files; they are data and resources. And data, resources, and contents may not just come from Files, but from remote systems (URLs), from databases, from a jar in the classpath, or from other systems you may not even have imagined yet. The example above would have a much better interface if it was defined like this:

public void sendMail(String email, String message, InputStream attachment);

Now... if you want to attach a stacktrace you can do something like:

try {
    ...
} catch (Exception ex) {
    sendMail(
        "foo@bar.com",
        "something bad happened!",
        new ThrowableInputStream(ex));
}

How hard is it to transform an exception into an InputStream? Here it is:

public class ThrowableInputStream extends InputStream {
    private byte[] bytes;
    private int i = 0;

    public ThrowableInputStream(Throwable throwable) {
        this.bytes = toByteArray(throwable);
    }

    private byte[] toByteArray(Throwable throwable) {
        ByteArrayOutputStream output = new ByteArrayOutputStream();
        PrintStream printer = new PrintStream(output);
        try {
            throwable.printStackTrace(printer);
            printer.flush();
            return output.toByteArray();
        } finally {
            printer.close();
        }
    }

    @Override
    public int read() throws IOException {
        if (i < bytes.length)
            return bytes[i++];
        else
            return -1;
    }
}

Is the above easier or harder than printing the exception to a temporary file? And is it less or more efficient? If you prefer the temporary file... at least remember to remove the file. And don't forget to check whether the filesystem is writable... and to verify that there is enough space left on the disk... etc. etc. etc...

You want to attach some XML content that you have received in String format? Again, instead of printing the String to a temporary file, etc. etc., you can write a class that converts Strings to an InputStream:

public class StringInputStream extends InputStream {
    private String string;
    private int i;

    public StringInputStream(String string) {
        this.string = string;
    }

    @Override
    public int read() throws IOException {
        if (i < string.length())
            return string.charAt(i++);
        else
            return -1;
    }
}

Just 16 lines of code.

The incredible power of URLs

Also the configuration of your application is part of the interface with the users... and here too Files have their weaknesses. But let's go much further with a complex use case: you need to parse some XML data deployed inside a zip file on a remote web server. You have two options:

- You may write the URL of the file on the webserver into your configuration.
  Your program will get the URL from the configuration, access the webserver with an HTTP client, read the zipped file, save it somewhere, unpack it, then parse the XML and manage the data. And finally clean up the mess.
- Or, you may specify in your configuration a URL like "jar:", and pass that URL to the XML parser. Done.

Now... if you chose the second option and your application requirements change so that it needs to access the same XML file, zipped or not, from a web server, from an FTP server, from the local filesystem, etc., you just need to change the URL in the configuration. And, if you need to access the resource over a protocol that is not supported natively by URL, you can implement a URLStreamHandler. Yes, URLs are very powerful and flexible in Java. And they give you an InputStream. Not a File! That's no coincidence.

How to "fix" a library with such flaws?

Ok... you are the unlucky guy - like me - who has to use bad libraries, more or less frequently. How do you fix an interface like the first sendMail method (the one with the File)? It's not uncommon that such badly designed libraries are REALLY bad. So you may think of writing your own (object-oriented) interface over it: a wrapper or something like that. Your interface will use an InputStream, of course. So you need something that converts your InputStream to a File that can be digested by the bad library.
Here is an example:

public class FileDump {
    private static final Random random = new Random();

    public static File dump(InputStream input, String fileName) {
        File path = createTempDir();
        return dump(input, path, fileName);
    }

    private static File createTempDir() {
        String tmpDirName = System.getProperty("java.io.tmpdir");
        File tmpDir = new File(tmpDirName);
        File newTempDir = new File(tmpDir, String.valueOf(random.nextInt()));
        newTempDir.mkdirs();
        return newTempDir;
    }

    private static File dump(InputStream input, File path, String fileName) {
        File file = new File(path, fileName);
        try {
            OutputStream output = new FileOutputStream(file);
            try {
                int data;
                while ((data = input.read()) != -1)
                    output.write(data);
            } finally {
                output.close();
            }
            return file;
        } catch (IOException ex) {
            throw new RuntimeException(ex);
        }
    }
}

The above class is an example. You can improve it by checking that there is space on disk, that the target directory is writable, etc. Then remember to remove the file after the bad library has used the file's content.

Conclusions

- Next time you're defining the interface of a library and, for any reason, you find yourself importing java.io.File... please think about using (also) an InputStream[2].
- If you are tempted to write in your configuration a "file path" of a resource that you need to read, consider using a URL with the file:/path/to/file.ext protocol. Keep in mind, by the way, that unfortunately you cannot write to a Java URL, regardless of the protocol, but you can still obtain the file path from a URL using the "file" URI scheme.

Notes

[1] The above example is not a good interface for a mailer object. It's just for the purpose of explaining why specifying a File as an attachment is not a good idea.

[2] The above discussion is valid not only for InputStream and OutputStream but also for Reader and Writer. You should use Reader/Writer to deal with textual data, and InputStream/OutputStream to deal with binary data.
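The "incredible power of URLs" point above can be shown concretely. The sketch below is not from the article; it just demonstrates that a consumer written against URL.openStream() never cares whether the configured URL uses the file:, http:, or jar: scheme (the demo uses a file: URL because it needs no network):

```java
import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Path;

// Minimal sketch of the URL-based approach: the consumer only sees an
// InputStream, so swapping the configured URL scheme requires no code change.
public class UrlContentDemo {

    // Reads all bytes from whatever the URL points at, as UTF-8 text.
    static String readContent(URL url) throws Exception {
        try (InputStream in = url.openStream()) {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buffer = new byte[4096];
            int n;
            while ((n = in.read(buffer)) != -1) {
                out.write(buffer, 0, n);
            }
            return out.toString("UTF-8");
        }
    }

    public static void main(String[] args) throws Exception {
        // Demo with a file: URL; an http: or jar: URL would work the same.
        Path tmp = Files.createTempFile("demo", ".txt");
        Files.write(tmp, "hello from a URL".getBytes("UTF-8"));
        System.out.println(readContent(tmp.toUri().toURL()));
        Files.delete(tmp);
    }
}
```

This is exactly the inversion the article argues for: configuration stores a URL ("content location"), and the code deals only in streams.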
From (Note: Opinions expressed in this article and its replies are the opinions of their respective authors and not those of DZone, Inc.)

Fab Mars replied on Sat, 2010/09/11 - 7:17am

RIchard replied on Sat, 2010/09/11 - 11:58am

Alexander Radzin replied on Sat, 2010/09/11 - 1:18pm

Absolutely agree with your approach. As a guy that deals with bad interfaces too, I can say that I thought about another trick that can help to work around File. File does not implement IO operations itself. That is done in FileInputStream and FileOutputStream, which do almost nothing themselves either, using FileDescriptor internally. If you manage to change the file descriptor of the FileInputStream that wraps your file, you can make the rest of the application work with a "file" that is actually not a file at all. This can be implemented using reflection, instead of creating a temp file, which can be problematic in some cases due to security restrictions. But I have never tried to implement this...

Luigi Viggiano replied on Sat, 2010/09/11 - 2:37pm

«hub and spoke API». Richard: where can I read more about this approach? An example somewhere? Personally I don't dislike Streams; I find them quite flexible and handy. Luigi.

Roger Voss replied on Sun, 2010/09/12 - 2:45pm

In the example bad interface discussed here, it is likely that the library will internally use the File object to create a FileInputStream. There may even be a private or protected helper method in the library that accepts either FileInputStream or InputStream and then proceeds with the desired I/O operation. At any rate, if we can look at the source code (or perhaps a dump of the class information from the .class file), we can write an AspectJ aspect that provides a new method in the interface accepting InputStream.
We either call the helper function (which this aspect will be able to do because, after byte code weaving into the target class, it's an actual method of the class and can access private and protected class members), or we lift the source code of the existing File-based method and re-implement it in an aspect-based new method so that it operates with just an InputStream as an input argument. Once we have a core method that accepts an InputStream, we can then use aspects to weave in other methods taking URLs and the like, which eventually call the InputStream-based method. We wind up with an interface to the library that is ideally designed to the way we prefer. By using compile-time weaving of the AspectJ aspects, we can use just the distribution .jar file of the library as input at build time. The AspectJ build-time output will be a new version of the .jar that sports the refactored interface that we consider to be ideal for the library. This then becomes the .jar file of the library that our overall project proceeds to make use of. While only relying on the .jar of a third-party library (thus not having to mess with being able to rebuild said library - which can sometimes be a complex endeavor), and perhaps the ability to inspect its source code, we can utilize AspectJ and its compile-time weaving option as a technique to address bugs and/or refactor interfaces. We can always send our fixes and enhancements to the library author(s) and hope they eventually incorporate them, but in the meantime, AspectJ enables us to immediately proceed with our own project work.

Jason Wang replied on Sun, 2010/09/12 - 6:06pm

Matthew Sandoz replied on Sun, 2010/09/12 - 9:01pm

Luigi Viggiano replied on Mon, 2010/09/13 - 3:35am in response to: Jason Wang

True (see note [2]). But, for instance, take SAXParser's parse method: it doesn't accept a Reader. And from a Reader you can't get an InputStream (while the opposite is true: from an InputStream you can get a Reader).
So if your XML is stored in a String, you can use my StringInputStream to pass it to the SAXParser, but not a StringReader. For the purpose of using a String as a binary attachment, I believe that the above example could fit. Of course, to deal with textual data, Reader/Writer are the correct classes to use.

Luigi Viggiano replied on Mon, 2010/09/13 - 3:21am in response to: Matthew Sandoz

Matthew > another point against using files is that they are final and thus harder to mock with test code.

Have a look at Mockito. In the context I explained, I had to write some tests mocking File objects: it worked. BTW, I just checked and java.io.File is not final.

Andrea Polci replied on Mon, 2010/09/13 - 11:05am in response to: Luigi Viggiano

Nice post, I just want to add something about the StringInputStream part. The reason why there is a StringReader but not a StringInputStream in the Java library is that while Strings and Readers work with chars (Unicode characters), Streams work with bytes. Your code for StringInputStream is dependent on the default charset and on the content of the String. It will usually work if the String contains only plain ASCII characters, but you'll get in trouble otherwise. A better way to get an InputStream from a String is through a ByteArrayInputStream: This is less efficient because String.getBytes() has to create a new byte[] and to encode all the characters in the String according to the default charset. Alternatively it is possible to provide an implementation of StringInputStream that manages the conversion between chars and bytes, but it will be more complicated because a single char can result in multiple bytes (and the number of bytes can be different for every char, depending on the charset).

Andrea Polci replied on Mon, 2010/09/13 - 11:41am in response to: RIchard

The problem with File is not the design of the library but its usage (or overuse). The File is meant to represent ... a file (you guessed?).
Not its content but exactly what it is (an entry in a directory in the filesystem). Unfortunately many use a File when what they really need is something to access the content.

King Sam replied on Fri, 2012/02/24 - 9:31am
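Andrea Polci's comment above refers to a ByteArrayInputStream snippet that did not survive the page formatting. Presumably it looked something like the sketch below (the class and method names here are invented for illustration); naming the charset explicitly also avoids the platform-default pitfall he describes:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

// Instead of a hand-rolled StringInputStream that truncates chars to single
// bytes, encode the String explicitly and wrap the resulting byte[].
public class StringToStream {

    static InputStream toStream(String s) {
        // Explicit charset: no dependency on the platform default.
        return new ByteArrayInputStream(s.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) throws IOException {
        // "h\u00e9llo" is 5 chars, but '\u00e9' (é) encodes to 2 bytes
        // in UTF-8, so the stream yields 6 bytes.
        InputStream in = toStream("h\u00e9llo");
        int count = 0;
        while (in.read() != -1) count++;
        System.out.println(count); // 6
    }
}
```

Contrast this with the article's StringInputStream, whose read() returns string.charAt(i++) and therefore silently mangles any character above 0x7F.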
http://java.dzone.com/articles/using-files-your-interfaces-0
Subject: [ggl] plan
From: Mateusz Loskot (mateusz)
Date: 2009-04-22 04:27:56

Barend Gehrels wrote:
> Hi,
>
> To summarize, the plan is on April 25:
>
> 1) reflect the folder structure to the one in the boost sandbox, so:
> - boost/ggl with headers
> - libs/ggl with tests, examples, documentation

+1

> 2) change "geometry" to "ggl" in #include statements, so e.g.
> #include <ggl/core/ring_type.hpp>

+1

> 3) change "geometry" to "ggl" in namespaces

+1

> 4) not indent namespaces

+1 I've already fixed it in io/wkt (r591)

> 5) replace tabs with 4 spaces

+1

> 6) in the test folder create the structure as in the header files, so
> .../algorithms, ../core, ../strategies, etc

+1

> 7) rename fromwkt.hpp to read_wkt.hpp and aswkt.hpp to write_wkt.hpp,
> and then also streamwkt.hpp to stream_wkt.hpp

+1

> 8) move the stuff in multi which is not yet structured to the right
> subfolders

+1

> All these cosmetic changes will be atomic and I will do most of them
> "automatic", I will update the MS project files as well.

OK, thanks! I can update Jamfiles.

By the way, I've taken the liberty to apply one more cosmetic change: I replaced the TOK alias with the tokenizer name in io/wkt:

typedef boost::tokenizer<boost::char_separator<char> > TOK;

now reads:

typedef boost::tokenizer<boost::char_separator<char> > tokenizer;

If you agree, I'd suggest using upper-case for template parameters only; otherwise it's hard to decipher what is what when reading through code, and it requires going back up to the function/type prototype to find out.

Best regards,
--
Mateusz Loskot,

Geometry list run by mateusz at loskot.net
https://lists.boost.org/geometry/2009/04/0079.php
Pod::Template - Building pod documentation from templates.

### As a module ###

use Pod::Template;

my $parser = new Pod::Template;
$parser->parse( template => 'documentation.ptmpl' );
print $parser->as_string;

### As a script ###

$ podtmpl -I dir1 -I dir2 documentation.ptmpl

### A simple module prepared to use Pod::Template ###

package My::Module;

=Template print_me

=head2 print_me( $string )

Prints out its argument.

=cut

sub print_me { print shift; return 1 }

### A simple pod file named Extra/Additional.pod ###

=pod

=Template return_vals

This subroutine returns 1 for success and undef for failure.

=cut

### A simple Pod::Template template ###

=Include My::Module
=Include Extra/Additional.pod as Extra

=pod

=head1 SYNOPSIS

    use My::Module

    My::Module::print_me('some text');

=head2 Functions

=Insert My::Module->print_me

=Insert Extra->return_vals

=cut

Writing documentation on a project maintained by several people which spans more than one module is a tricky matter. There are many things to consider:

- Should pod be inline (above every function), at the bottom of the module, or in a distinct file? The first is easier for the developers, but the latter two are better for the pod maintainers.
- What order should the documentation be in? Does it belong in the order in which the functions are written, or ordered by another principle, such as frequency of use or function type? Again, the first option is better for the developers, while the second two are better for the user.
- How should a function in another file be mentioned? Should the documentation simply say 'see Other::Module', or should it include the relevant section? Duplication means that the documentation is more likely to be outdated, but it's bad for a user to have to read numerous documents to simply find out what an inherited method does.
- What should be done with standard headers and footers? Should they be pasted in to every file, or can the main file be assumed to cover the entire project?
Pod::Template offers a solution to these problems: documentation is built up from templates. Assume that you have a module and a template as outlined in the SYNOPSIS. Running this template through Pod::Template will result in this documentation:

=pod

=head1 SYNOPSIS

    use My::Module

    My::Module::print_me('some text');

=head2 Functions

=head2 print_me( $string )

Prints out its argument.

This subroutine returns 1 for success and undef for failure.

=cut

Use =Include to specify which sources will be used:

    =Include Some::Module

With the =Include directive, it is possible to specify an alternate name to use with =Insert statements:

    =Include FileName as KnownName

If a file extension is not specified, =Include will look first for a .pm file, and then for a file without an extension. You may also specify the path (in which case the complete file name must be provided) to handle situations where there could be namespace collisions:

    =Include Some::Module::File as SMFile
    =Include Another/Module/File.pod as AMFile

The =Insert function works by including text from the named =Template directive until the first =cut or the next =Template directive. First specify the source, followed by ->, then the =Template directive name:

    =Insert IncludedFile->marked_text

See the samples directory in the distribution for further examples on how to use Pod::Template.

new

Create a new instance of Pod::Template. Optionally, you can provide the lib argument to change the library path that Pod::Template looks for files in. This defaults to your @INC.

parse

Takes a template file and parses it, replacing all =Insert directives with the requested pod where possible, and removing all =Include directives. Returns true on success and false on failure.

as_string

Returns the result of the parsed template as a string, ready to be printed.

If this variable is set to true, warnings will be issued when conflicting directives or possible mistakes are encountered. By default this variable is true.
Set this variable to true to issue debug information when Pod::Template is parsing your template file. This is particularly useful if Pod::Template is generating output you are not expecting. The default value is false. See the samples directory in the distribution for examples on how to use Pod::Template. If this templating system is not extensive enough to suit your needs, you might consider using Mark Overmeer's OOD.
http://search.cpan.org/~kane/Pod-Template-0.02/lib/Pod/Template.pm
#include <OnixS/CME/MDH/FIX/Field.h>

Represents a field in a FIX message. Exposes operations to access the value and to make compatible conversions.

Definition at line 32 of file Field.h.

Casts the stored value to the requested type. The value may be converted into a value of the requested type if a conversion is possible. Otherwise, an exception is thrown. For example, if the field value represents a reference to a string (e.g. an instance of the StrRef class), then it can be successfully converted into a number if the referenced string represents a number.

Definition at line 102 of file Field.h.

Tries to cast the stored value into a value of the MaturityMonthYear type. The returned value indicates whether the cast succeeded.

Definition at line 283 of file Field.h.
https://ref.onixs.biz/cpp-cme-mdp3-handler-guide/classOnixS_1_1CME_1_1MDH_1_1FIX_1_1Field.html
I’ve been writing these posts about my beginner’s experiments with the Kinect for Windows V2 SDK;

- Kinect for Windows V2 SDK - Hello (Skeletal) World for the JavaScript Windows 8.1 App Developer
- Kinect for Windows V2 SDK - Hello (Skeletal) World for the 3D JavaScript Windows 8.1 App Developer
- Kinect for Windows SDK V2 - Mixing with some Reactive Extensions

with reference to the official materials up on Channel 9; Programming-Kinect-for-Windows-v2

I realised that I hadn’t done anything with;

- The Infra Red data source that comes from the sensor – this one probably speaks for itself and is documented here.
- The Body Index data source that comes from the sensor – for the (up to) 6 bodies that a sensor is tracking, this source gives you frames that contain a 2D grid where each co-ordinate gives you a simple 0-5 integer representing which body (if any) the sensor associates with that co-ordinate.

I also realised that I hadn’t tried to do anything where I link together multiple data sources to produce some kind of combined view. I figured that what I could do is take the IR data and display it but correlate it with the body index data to remove any values that didn’t relate to a tracked body (this is far from an original idea – there are samples like this everywhere, although they sometimes use the colour video frames rather than the infra-red frames). I stuck with using a bit of the reactive extensions as per my previous post and started off with a little WPF UI to display 2 images;

Simple enough stuff – a wallpaper image underneath another image that I have named myImage.
From there, I wrote some code to run when the UI has finished loading;

void OnLoaded(object sender, RoutedEventArgs e)
{
  this.OpenSensor();
  this.CaptureFrameDimensions();
  this.CreateBitmap();
  this.CreateObservable();

  this.obsFrameData
    .SubscribeOn(TaskPoolScheduler.Default)
    .ObserveOn(SynchronizationContext.Current)
    .Subscribe(
      fd =>
      {
        this.CopyFrameDataToBitmap(fd);
      }
    );
}

I’m hoping that this is a fairly “logical” structure. First off, I open up the sensor which I keep around in a member variable;

void OpenSensor()
{
  // get the sensor
  this.sensor = KinectSensor.GetDefault();
  sensor.Open();
}

and then I try and figure out what the dimensions are on the frames that I’m planning to deal with – the body index frames and the infra-red frames – and I keep a few things around in member variables again;

void CaptureFrameDimensions()
{
  this.irFrameDesc = this.sensor.InfraredFrameSource.FrameDescription;
  this.biFrameDesc = this.sensor.BodyIndexFrameSource.FrameDescription;

  this.frameRect = new Int32Rect(
    0,
    0,
    this.irFrameDesc.Width,
    this.irFrameDesc.Height);
}

and then I create a WriteableBitmap which can be the source for the Image named myImage in my UI;

void CreateBitmap()
{
  this.bitmap = new WriteableBitmap(
    this.irFrameDesc.Width,
    this.irFrameDesc.Height,
    96,
    96,
    PixelFormats.Bgra32,
    null);

  this.myImage.Source = this.bitmap;
}

and then finally I attempt to create an observable sequence of frames of data which contain both the infra-red frame and the body index frame.
To do that, I made a simple class to hold both arrays of data;

class FrameData
{
  public ushort[] IrBits { get; set; }
  public byte[] BiBits { get; set; }

  public bool IsValid
  {
    get
    {
      return (this.IrBits != null && this.BiBits != null);
    }
  }
}

and then I attempted to make an observable sequence of these;

void CreateObservable()
{
  this.indexFrameReader = sensor.OpenMultiSourceFrameReader(
    FrameSourceTypes.Infrared | FrameSourceTypes.BodyIndex);

  var events = Observable.FromEventPattern<MultiSourceFrameArrivedEventArgs>(
    handler => this.indexFrameReader.MultiSourceFrameArrived += handler,
    handler => this.indexFrameReader.MultiSourceFrameArrived -= handler);

  // lots of allocations here, going for a simple approach and hoping that
  // the GC digs me out of the hole I'm making for myself 🙂
  this.obsFrameData = events
    .Select(
      ev => ev.EventArgs.FrameReference.AcquireFrame())
    .Where(
      frame => frame != null)
    .Select(
      frame =>
      {
        ushort[] irBits = null;
        byte[] biBits = null;

        using (InfraredFrame ir = frame.InfraredFrameReference.AcquireFrame())
        {
          using (BodyIndexFrame bi = frame.BodyIndexFrameReference.AcquireFrame())
          {
            irBits = ir == null ? null :
              new ushort[this.irFrameDesc.LengthInPixels];

            biBits = ((bi == null) || (irBits == null)) ? null :
              new byte[this.biFrameDesc.LengthInPixels];

            if ((irBits != null) && (biBits != null))
            {
              ir.CopyFrameDataToArray(irBits);
              bi.CopyFrameDataToArray(biBits);
            }
          }
        }
        return (
          new FrameData
          {
            IrBits = irBits,
            BiBits = biBits
          }
        );
      }
    )
    .Where(
      fd => fd.IsValid);
}

That’s quite a long bit of code. As in my previous post, I’ve taken the decision to allocate arrays for each frame that I get off the sensor and “live with the consequences” which (so far) has worked out fine running on my meaty i7 laptop with lots of RAM. I’m actually hoping that the GC mostly deals with it for me given how short the lifetime of these arrays is going to be.
So, effectively, this is just trying to use a MultiSourceFrameReader to bring back frames from both the Infrared and BodyIndex data sources at the same time. Where both of those frames can be acquired, this code produces a new instance of my FrameData class with the members IrBits and BiBits containing copies of the data that was present in those frames. It looks a bit “wordy” but that’s what the intention is and most of the code is really there just to make sure that the code is acquiring the 2 frames that I’m asking the sensor for – so it’s a couple of AcquireFrame() calls plus some null reference checks.

Once that observable sequence is set up, it gets stored into a member variable and then I consume it from that OnLoaded() method (in the first code sample above) and that, essentially, is routing the captured frame data through to a method called CopyFrameDataToBitmap which looks like;

void CopyFrameDataToBitmap(FrameData frameData)
{
  this.bitmap.Lock();

  unsafe
  {
    var pBackBuffer = this.bitmap.BackBuffer;

    for (int i = 0; i < frameData.IrBits.Length; i++)
    {
      // pixels not related to a body disappear...
      UInt32 colourValue = 0x00000000;

      int bodyIndex = frameData.BiBits[i];

      if (bodyIndex < BODY_COUNT)
      {
        // throwing away the lower 8 bits to give a 0..FF 'magnitude'
        UInt32 irTopByte = (UInt32)(frameData.IrBits[i] >> 8);

        // copy that value into red, green, blue slots with FF alpha
        colourValue = 0xFF000000 + irTopByte;
        colourValue |= irTopByte << 8;
        colourValue |= irTopByte << 16;

        // apply a mask to pick up the colour for the particular body.
        colourValue &= colourMasks[bodyIndex];
      }

      UInt32* pBufferValue = (UInt32*)(pBackBuffer + i * 4);
      *pBufferValue = colourValue;
    }
  }

  this.bitmap.AddDirtyRect(this.frameRect);
  this.bitmap.Unlock();
}

this is making reference to a little static array of colour-based bitmasks;

static UInt32[] colourMasks =
{
  0xFFFF0000, // red
  0xFF00FF00, // green
  0xFF0000FF, // blue
  0xFFFFFF00, // yellow
  0xFF00FFFF, // cyan
  0xFFFF00FF  // purple
};

const int BODY_COUNT = 6;

and, essentially, this code is building up the 2D image from the captured frame data on a per-pixel basis by;

- checking the value from the body index frame to see whether the “pixel” should be included in the image – i.e. does it correspond to a body being tracked by the sensor?
- taking the high byte of the Infra Red data and using it as a value 0..255 and then turning that value into a shade of a colour such that each body gets a different colour and such that the ‘depth’ of the colour represents the value from the IR sensor.

and that’s pretty much it. What I really like with respect to this code is that the SDK makes it easy to acquire data from multiple sources using the same pattern as acquiring data from one source – i.e. via the MultiSourceFrameReader – and I don’t have to manually do the work myself. The sort of output that I get from this code looks something like;

when I’m a little more distant from the sensor and perhaps a bit more like;

for a case where I’m nearer to the sensor – and it looks like in this particular case the infrared data is being drawn in yellow. Again, I’m impressed with the SDK in terms of how it makes this kind of capture relatively easy – if you want the code from this post then it’s here for download.

For me, I’ve taken brief looks at getting hold of;

- video
- depth
- infra-red
- body index
- skeletal data

from the sensor.
I also want to dig into some other bits and pieces and my next step would be to look into some of the controls that the Kinect SDK comes with and how they can be used to “Kinectify” a user interface…
https://mtaulty.com/2014/08/26/m_15409/
The StyledButton has a gradient, radius and border and an optional flatStyle, ready to be published in games & apps.

The StyledButton is a skinned Qt Quick Controls Button that has the same style across platforms: a white-to-grey gradient, black textColor, 4px radius and a dark-grey border color. You can also set the flatStyle property to true to get a button without a border color and no gradient. You can use this button in published games as it looks more polished than the SimpleButton. It changes its background color when the button is hovered (this only works on desktop with a mouse input) and when the button is pressed.

The default size values for StyledButton are the size of the contained text. So the longer the text, the bigger the StyledButton is. You can change the default size and behavior of the StyledButton by creating a new qml file that has the StyledButton as root element and then change your default button style. This custom component can then be used in your game during development, so you can quickly change all your button appearances afterwards.

Alternative button components are ButtonVPlay and SimpleButton. See this table when to use which button type: All of these button types can be customized by setting a style.

import VPlay 2.0

GameWindow {
  Scene {
    StyledButton {
      text: "Toggle Physics"
      onClicked {
        // .. enter code for toggling physics settings here ..
      }
    }
  }
}

The StyledButton provides several properties that make most popular styling requirements easier without setting an own ButtonStyle.

Example for Simple Button Styling:

import VPlay 2.0

StyledButton {
  // these are the default values, change the ones you'd like to change
  color: "#ccc"
  textColor: "black"
  gradientTopColorLighterFactor: 1.15
  radius: 4
  borderWidth: activeFocus ?
2 : 1 borderColor: "#888" } For more advanced styling, assign a ButtonStyle to the style property. Example for Advanced Button Styling: import VPlay 2.0 import QtQuick 2.0 import QtQuick.Controls.Styles 1.0 StyledButton { id: styledButton style: ButtonStyle { background: Rectangle { implicitWidth: 100 implicitHeight: 25 border.width: control.activeFocus ? 2 : 1 border.color: "#888" radius: 4 gradient: Gradient { GradientStop { position: 0 ; color: control.pressed ? "#ccc" : "#eee" } GradientStop { position: 1 ; color: control.pressed ? "#aaa" : "#ccc" } } } label: Text { anchors.centerIn: parent font.pixelSize: styledButton.pixelSize color: styledButton.textColor text: styledButton.text } } } The border color of the background rectangle. The default value is a grey color with code "#888". The border color of the background rectangle. The default value is 2 if the button has focus, otherwise 1. Set this property to enable a flat style, which has these changes: The default value is false, so there is no flat style used. Especially for mobile UIs, flat styles are sometimes preferable. The color at the bottom of the color gradient. The default value is color. See also gradientTopColor. The color at the top of the color gradient. The default value is Qt.lighter(color, gradientTopColorLighterFactor). See also gradientTopColorLighterFactor and gradientBottomColor. The gradientTopColor is made ligther by this factor for the top of the used color gradient. The default value is 1.15. To disable the gradient, set this factor to 1. See also gradientTopColor and color. The background rectangle radius. The default value is 4. Voted #1 for:
https://v-play.net/doc/vplay-styledbutton/
rofl .... a little bird whispered in my ear that ruby's terminal is called 'irb'

looks like ruby

i tried lua and it didn't work. doesn't ruby work in a terminal?

knowing Mr. Green, it's probably lua, but could theoretically be ruby... Or he made it up. :-D
Dusty

what language is that, green?

=> " xersex2 is very cool ".tr(" ", "")
=> "xersex2isverycool"

How embarrassing. trim() is java syntax. Under python I was thinking of strip. It works as so:

>>> g.strip()
'arch linux'

i haven't seen that trim thing dusty is talking about, but replace() fixes stuff like this:

>>> g = g.replace(" ", "")
>>> print g
archlinux

Yeah, I'm a troll. :-D
Dusty

better to run a python instance and run string.trim()

But trim won't remove it from inside the string... you want replace(' ', '');

#include <boost/algorithm/string.hpp>

std::string s = " i hate whitespace ";
boost::trim(s);

s = "~~~~~some other crap~~~~~";
boost::trim_if(s, boost::is_any_of("~"));

Yeah, that's right. I was going to learn c and python at the same time, but then i got so excited about c... well... I can see what you're saying about the python syntax... it's very easy to read.

I like Java.
Dusty

I haven't seen you say something like that in a very long time.

Got bogged down in C! :-D

I don't understand when people who can program in one language say "I was going to learn Python". Python doesn't have to be learned. If you understand structured and/or object oriented programming, you know Python already; it's just the natural syntax you would use.

I like Java.
Dusty

Actually I was going to try to learn some python... I don't know what happened...

No, you'll be an efficient hacker...
Dusty
https://bbs.archlinux.org/extern.php?action=feed&tid=15052&type=atom
Your first program in C++
From cppreference.com

The following is one of the most basic programs that is possible to write in C++. This should give you a first feel of how you can get a computer to do things for you.

#include <iostream>

int main()
{
    std::cout << "Hello World!\n";
    return 0;
}

Output:

Hello World!

Let's analyze the code line by line.

#include <iostream>
- #include is a preprocessor directive that allows code to include functionality from somewhere else. In this case, the standard iostream library is included. It contains the declaration of std::cout, which we use later; without this line, the compiler would not recognize the std::cout that appears later, and we would get an error.

int main()
- This is the main function. The main function is a function that defines the user code which is run when the program is launched. Every program must have a function called main, otherwise the computer would not know where to start the program.
- All function definitions have four parts:
- A return type (in this case int), which specifies what value is returned to the code that calls the function.
- A name (in this case main), which identifies the function.
- A parameter list (in this case empty), which specifies what data can be passed to the function by the code that calls the function. The parameter list is always enclosed in parentheses.
- A body, enclosed in curly braces, which contains the statements that are executed when the function is called.

std::cout << "Hello World!\n";
- This is the line where all the action of printing Hello World! to the screen is described. The line contains a single statement, which ends at ;. Let's examine what the statement does.
- std::cout is the name of a special object that represents console output.
- << is an operator (like the + or / operators), which instructs the compiler to generate code to print to std::cout (the console) whatever comes after the << operator. In this case this is the text "Hello World!\n". The \n part is a special character that identifies a newline. Such characters are called escape sequences.
- The std:: prefix of std::cout indicates that the object comes from the standard library. It is possible to save some typing and use std::cout without std::, as cout, but only if using namespace std; is present somewhere above. We advise against this practice, because the standard library contains many simple names, such as std::count, std::list, etc. In more complex programs, some of your own variables might be named count or list, which will lead to ambiguities and hard to debug bugs in your program. It's better to always use std::. Omitting it doesn't save a lot of time, and as time goes on, you'll learn to type std:: very fast :)

return 0;
- This is the return statement. At this point, the main function is terminated. 0 is the return value. In the case of main, the return value indicates to the operating system whether the program has finished successfully or not. Here we return 0, which indicates successful execution.
https://en.cppreference.com/book/intro/hello_world
I'm trying to store the contents of a file containing names of herbs and spices (items) in a 2D array called PantryContents. Then I'm trying to add an item to that array. It seems like only one item is being stored at a time: the output is correct, but when I check the content of PantryContents, only one item shows up (like it's not 2D). This will be split into functions eventually (that's why the format is a little funky); I just needed to make it work all together first. Any ideas?

This is what's in the file "Pantry.txt":

-ground cumin
-cayenne pepper
-salt
-pepper
-dill
-ground cinnamon
-garlic powder

#include <iostream>
#include <fstream>
#include <cstring> // for strlen/strcpy

using std::cin;
using std::cout;
using std::ifstream;

int main()
{
    const int size = 100;
    ifstream data_file("Pantry.txt");
    char items[size];
    char ** PantryContents = nullptr;
    int ** frequencyArray = nullptr;
    int num_items = 0;

    if (data_file.is_open())
    {
        while (!data_file.eof())
        {
            data_file.getline(items, size); // reads file
            num_items++;
            //cout << num_items << items << '\n'; // displays contents

            PantryContents = new char * [num_items + 1]; // increase array by one?
            for (int i = 0; i < num_items; i++)
            {
                PantryContents[i] = &items[i]; // stick items in PantryContents array
            }
            PantryContents[num_items] = new char[strlen(items) + 1]; // new spot has enough space for the new items
            strcpy(PantryContents[num_items], items); // copy the items into the PantryContents
            num_items++; // increase the number
            cout << *PantryContents << '\n';
        }
        data_file.close();
    }

    char ** temp = 0;
    char addition[255];
    char confirm = 'Y';
    int num = 0;

    while (confirm == 'Y' || confirm == 'y')
    {
        cout << "Please enter item: ";
        cin >> addition;

        PantryContents = new char * [num_items + 1];
        for (int i = 0; i < num; i++)
        {
            PantryContents[i] = temp[i];
        }
        PantryContents[num_items] = new char[strlen(addition) + 1];
        strcpy(PantryContents[num_items], addition);
        num_items++;

        delete[] temp;
        temp = PantryContents;

        cout << "\nWould you like to enter another item? Y/N :";
        cin >> confirm;

        for (int i = 0; i < num_items; i++)
        {
            cout << *PantryContents << '\n';
        }
    }

    for (int i = 0; i < num_items; i++)
    {
        delete[] PantryContents[i];
    }
    delete[] PantryContents;

    return 0;
}
https://www.daniweb.com/programming/software-development/threads/477650/storing-contents-of-a-file-in-a-2d-array