There are some great Internet of Things hardware platforms, such as the Raspberry Pi, ESP32-based Arduino modules, Particle and BeagleBone, to name just a few. Each of these systems has its strengths and weaknesses. Some are strong on the hardware side, like the Arduino modules, and others excel on the programming side, like the Raspberry Pi. The Yún is somewhat unique in that it's a module with two processors: one that supports standard Arduino programming, and a second that runs Linux with the OpenWrt wireless stack. The Yún ($59) has an Arduino Uno form factor, and there are clones like the LinkIt Smart 7688 Duo ($18) in an Arduino Nano form factor. In this blog I wanted to document some of the key features and functions that I worked through, namely:

- moving files – scp and ftp
- Python bridging to Arduino
- uhttpd web server – with Python CGI
- Yún REST API
- MQTT
- Yún mailbox

Yún Overview

The Yún, now in revision 2, seems to have been somewhat overlooked because of all the low-cost ESP8266 and ESP32 based Arduino modules starting around $2. Some of the interesting features of the Yún include:

- Arduino code can launch and get feedback from Linux apps
- A bridge library allows Linux programs and the Arduino to share data
- Arduino libraries for: web client, web server, mailbox to Linux
- A lightweight web server, uhttpd, that can run Python, PHP and Lua CGI web programs
- A read/write REST API web service

Getting Started

The Arduino side of the Yún is available "out of the box" like any other Arduino module once you connect a USB cable to it. The Linux side, however, requires some configuration; please see one of the guides on the Linux setup. Once your module is connected to your network, the Yún web server can be used to configure features and add software components. Another option for loading software is to use the opkg package manager from an SSH connection.
So, for example, to install nano, the simple text editor, enter:

opkg install nano

Moving Files and Working in OpenWrt Linux

The OpenWrt Linux image isn't loaded with an X Windows environment, which means you cannot run IDLE or Leafpad etc. to do easy editing of program files. Nano is a good command line text editor, but it's not the same as a windowed editor. To move files between a PC and OpenWrt you have some options. The two that I like are:

- scp – secure copy; this is built into both OpenWrt and Microsoft Windows 10
- ftp – file transfer protocol; needs to be installed in OpenWrt

There are a few FTP servers that could be installed in OpenWrt. vsftpd is a lightweight option and it can be installed by:

opkg update
opkg install vsftpd

Once vsftpd is installed it needs to be enabled; this can be done from the command line or via the web interface.

Yún Bridge

The Yún Arduino bridge library allows variables to be passed between the Arduino code and a Python program. Below is an example that makes two random variables available, and creates a "bridge1" variable that can be written from the Python side.

// Simple Yun Bridge Example
//
#include <Bridge.h>

// create a bridge variable to get remote data
char bridge_Value[10];

void setup() {
  Bridge.begin(); // this launches /usr/bin/run-bridge on Linino
}

void loop() {
  // create 2 random bridge values, that are sourced from the Arduino
  Bridge.put("random1", String(random(1, 100)));
  Bridge.put("random2", String(random(1, 100)));
  // Call the bridge value "bridge1". This is the name used on the Python side
  Bridge.get("bridge1", bridge_Value, 6);
  delay(1000);
}

An example to read/write in Python:

#!/usr/bin/python
import sys
sys.path.insert(0, '/usr/lib/python2.7/bridge/')
from bridgeclient import BridgeClient as bridgeclient

value = bridgeclient()
message = value.get("random1")  # get a value from Arduino
print "Random1: ", message
value.put("bridge1", "1111")    # set a value to Arduino

Yún uhttpd Web Server

The uhttpd web server is used for Yún setup and configuration. This web server can also be used for custom static pages and user web apps. To view/modify the web server settings:

nano /etc/config/uhttpd

Within this config file, you can enable custom web applications by defining an interpreter:

# List of extension->interpreter mappings.
# Files with an associated interpreter can
# be called outside of the CGI prefix and do
# not need to be executable.
# list interpreter ".php=/usr/bin/php-cgi"
# list interpreter ".cgi=/usr/bin/perl"
list interpreter ".py=/usr/bin/python"

# Lua url prefix and handler script.
# Lua support is disabled if no prefix given.
# option lua_prefix /luci
# option lua_handler /usr/lib/lua/luci/sgi/uhttpd.lua

The default directory for user programs is: /www/cgi-bin

Python CGI – Get Values

To read the Arduino bridge values in a Python CGI program, add a file to the /www/cgi-bin directory.
For my example I called the file p1.py:

#!/usr/bin/python
import sys
import cgi
sys.path.insert(0, '/usr/lib/python2.7/bridge/')
from bridgeclient import BridgeClient as bridgeclient

value = bridgeclient()

print "Content-type:text/html\r\n\r\n"
print '<html>'
print '<head>'
print '<title>Python CGI Bridge</title>'
print '</head>'
print '<body>'
print '<h2>Python CGI Bridge</h2>'
print 'Random Value1: ' + value.get("random1")
print 'Random Value2: ' + value.get("random2")
print '</body>'
print '</html>'

To run the Python script from the web page you will need to change the file rights to executable:

chmod +x p1.py

You can debug and see the output from the command line:

root@yun1:/www/cgi-bin# ./p1.py
Content-type:text/html

<html>
<head>
<title>Python CGI Bridge</title>
</head>
<body>
<h2>Python CGI Bridge</h2>
Random Value1: 13
Random Value2: 24
</body>
</html>

If the output looks good, try the app from the web page.

Python CGI – Put Values

There are a number of methods that can be used to send user input from a web page. A simple approach is to use a form. The form data can be read from the cgi.FieldStorage object, using a form.getvalue() call.

#!/usr/bin/python
# Import modules for CGI handling
import cgi
import sys
sys.path.insert(0, '/usr/lib/python2.7/bridge/')
from bridgeclient import BridgeClient as bridgeclient

value = bridgeclient()

# Create instance of FieldStorage
form = cgi.FieldStorage()

# Get data from fields
bridge1 = form.getvalue('bridge1')
if bridge1:  # skip when the form field is empty or missing
    value.put("bridge1", bridge1)

print "Content-type:text/html\r\n\r\n"
print """
<html>
<head>
<title>Python CGI - Put Bridge Value</title>
</head>
<body>
<h2>Python CGI Form - Put Bridge Value</h2>
<form action = "/cgi-bin/p2.py" method = "post">
Enter BRIDGE1 value: <input type = "text" name = "bridge1"><br />
<input type = "submit" value = "Submit" />
</form>
</body>
</html>"""

The web page will call itself when the submit button is pressed.
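As a side note, the form submission can also be exercised from a script rather than a browser. A minimal sketch (run from a PC with Python 3; the Yún's address here is an assumption, and the actual request line is left commented out so the body can be inspected first):

```python
from urllib.parse import urlencode
# from urllib.request import urlopen  # uncomment to actually send the POST

# build the same form-encoded body the HTML form sends to p2.py
body = urlencode({"bridge1": "1111"})
print(body)  # bridge1=1111

# hypothetical Yun address; adjust to your network
# urlopen("http://192.168.0.50/cgi-bin/p2.py", data=body.encode())
```

This is handy for scripted testing of the CGI program without clicking through the form.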
Yún REST API

The Yún REST API is a web service that allows remote users and web browsers to view and set bridge values. I found the REST API to be a good tool for testing my web CGI and Python applications.

To view all bridge variables enter:

To get a specific bridge value enter:

To put a bridge value enter:

IoT Connections – MQTT

For Internet of Things (IoT) projects you need to pass data between the Arduino and some server. The communications and the server would be something like MQTT or Redis. The Yún's Arduino side does not have direct access to the Wi-Fi or Ethernet port, so the standard Arduino libraries for MQTT or Redis etc. will not work. An alternative approach is to load the protocol's command line client on the Linux side; the Arduino can then shell out to the Linux tool. For example, to load the Mosquitto MQTT command line tools:

opkg update
opkg install mosquitto-client

To test MQTT publishing to a topic (mytag1) with a message of 249 on a remote client (192.168.0.116):

mosquitto_pub -h 192.168.0.116 -u username -P password -t mytag1 -m 249

To remotely subscribe:

mosquitto_sub -h 192.168.0.116 -u username -P password -t mytag1

An example of sending out four MQTT messages in Arduino:

/* Shell out to pass values to a MQTT command line pub */
#include <Process.h>

void setup() {
  Bridge.begin();     // Initialize the Bridge
  Serial.begin(9600); // Initialize the Serial
  // Wait until a Serial Monitor is connected.
  while (!SerialUSB);
}

void loop() {
  Process p;
  String thecmd;
  String strval = String(random(0, 100));
  // create a string with host, username and password, -t is for the topic
  String theparams = "-h 192.168.0.116 -u pete -P pete -t ";
  int numtopics = 4;
  String topics[4] = {"tag1", "tag2", "tag3", "tag4"};
  for (int i = 0; i < numtopics; i++) {
    strval = String((i * 100) + random(1, 99)); // create a random value - 0,100+,200+,300+
    thecmd = "mosquitto_pub " + theparams + topics[i] + " -m " + strval;
    Serial.println(thecmd);
    p.runShellCommand(thecmd);
    // do nothing until the process finishes, so you get the whole output:
    while (p.running());
  }
  delay(5000); // wait 5 seconds before you do it again
}

It is also possible to create some bridge variables and pass them to a Python program that could do the MQTT communications.

Yún Mailbox

At present the mailbox only works in one direction: into the Arduino. Both the REST web interface and Python have write functionality but no read capability (this has been identified on the forums, so a future revision may add it).

To write using REST:

To write using Python:

import sys
sys.path.insert(0, '/usr/lib/python2.7/bridge')
from bridgeclient import BridgeClient

client = BridgeClient()
client.mailbox("my_message")

The Arduino code to read the message:

// Mailbox Read Example
//
#include <Mailbox.h>

void setup() {
  // Initialize Bridge and Mailbox
  Bridge.begin();
  Mailbox.begin();
  // Initialize Serial
  SerialUSB.begin(9600);
  // Wait until a Serial Monitor is connected.
  while (!SerialUSB);
  SerialUSB.println("Mailbox Read Message\n");
}

void loop() {
  String message;
  // if there is a message in the Mailbox
  if (Mailbox.messageAvailable()) {
    // read all the messages present in the queue
    while (Mailbox.messageAvailable()) {
      Mailbox.readMessage(message);
      SerialUSB.println(message);
    }
    SerialUSB.println("Waiting 10 seconds before checking the Mailbox again");
  }
  delay(10000); // wait 10 seconds
}

I'm not totally sure when I'd use a mailbox. The mailbox is a generic message and it's not queued (only one message), so I think that using the standard bridging of values with get/put is more useful.

Final Comments

The Yún is missing a lot of the networking libraries that are available for the ESP8266 and ESP32 families of Arduino modules. In some cases, like MQTT, there was a good Linux command line tool that could be used, but in cases where you want to connect to, say, a Redis server, you might have some challenges. I really liked the Yún's built-in uhttpd web server; it is far superior to the ESP8266 Arduino web server library calls. I found that putting logic into a combination of Python and Arduino could be a little confusing, but for projects with a lot of text or JSON data, using Python would be a real plus. Also, for projects where multitasking is required, using Linux processes would be ideal.
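Following up on that last point, a Linux-side Python script can mirror the Arduino shell-out by handing values to mosquitto_pub. A minimal sketch (host, credentials and topic are the placeholders from the examples above; the subprocess call is commented out so the assembled command can be checked first):

```python
import subprocess  # needed once the call below is uncommented

def pub_command(host, user, password, topic, value):
    # assemble the same command line the Arduino sketch shells out to
    return ["mosquitto_pub", "-h", host, "-u", user, "-P", password,
            "-t", topic, "-m", str(value)]

cmd = pub_command("192.168.0.116", "username", "password", "mytag1", 249)
print(" ".join(cmd))
# subprocess.call(cmd)  # run on the Yun with mosquitto-client installed
```

Building the argument list rather than one big string avoids any shell quoting issues when a payload contains spaces.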
https://funprojects.blog/2019/11/22/arduino-yun-for-iot/
Exporters of Shells | Importers of Shells | Processors of Shells | Wholesale Suppliers of Shells | Seafood Agents for Shells

Murex shell, top shell, mussel shell, conch shell, cowry shells, melon shell, ark shell, sea star, mother of pearl (MOP) shells, oyster shells etc. See also: Abalone, Ark Shell, Clam, Conch, Mussel, Oyster, Paua, Top Shell

SEA-EX MEMBERS WHO DEAL IN THIS PRODUCT ARE LISTED HERE

Sea Food Angels BAHAMAS - Producers, exporters and wholesalers of fresh sea urchins, lobsters, groupers, snappers, conch or sea shells
Kulukuriotis GREECE - Processors, exporters and wholesalers of mussel, mussel in sea bream, variety shell meat, shiny shell, cockle
MR Sea Food Centre SRI LANKA - Exporters and wholesalers of sea cucumbers, sea shell, prawn, crab & dried fish
PT Tobiko Utama INDONESIA - Processors and exporters of flying fish roe (tobikko), seaweed, fresh fish, seashells, sea cucumber & other marine products
Baabuod General Tradings YEMEN - Seafood processors, exporters and commission agents. MOP shells & meat, grouper of all types (mostly black grouper), croaker, barracuda, YF tuna, longtail tuna, skipjack, trevally, sea bream, Indian mackerel, bonito, queenfish etc.
De Oro Resources Inc. PHILIPPINES - Processing plants in Hagonoy, Bulacan and Mactan, Cebu, producing black tiger prawn (Penaeus monodon), akegi (nylon shell), bay scallop, squid, cuttlefish and various fishes
Aqua Fresh Seafood PAKISTAN - Processors, exporters and wholesale suppliers. One of the experienced seafood export companies in Pakistan, dealing in all kinds of fish, shrimps, crabs and shellfish of Pakistan origin
C-Quest Shells UNITED KINGDOM - Specialised in the food industry and catering market. C-Quest supplies cleaned and graded crab shells to the English and French markets for use as filled crab shells
Nisha International Pte Ltd SINGAPORE - A reputed company involved in the import and export of all types of frozen and dried shark fins, sea cucumbers, fish maws, abalone and operculum
http://www.trade-seafood.com/directory/seafood/shells-wholesalers.htm
The Message Passing Interface (MPI) standard defines a message passing library, which serves as the basis for many high-performance computing applications today. It provides portable, scalable functions for data exchange in parallel computations on various parallel computing architectures. Originally, application programming interfaces had been defined for C and Fortran as well as for C++. In the MPI-3 update, however, the C++ bindings were removed from the MPI standard. During its various revisions the MPI standard became quite complex, and dropping one of the three language bindings may have helped to keep the standard maintainable as a whole. Furthermore, the C++ bindings were not very well designed. Although object oriented techniques had been applied, the MPI C++ bindings did not come close to a well designed C++ library by today's standards. What happened to the C++ bindings is explained in more detail in a blog post. Alternative C++ bindings to the MPI standard are provided by Boost MPI and by OOMPI, which was an early attempt to bring MPI 1 functionality to C++ in an object oriented way. Boost MPI uses rather modern C++ programming techniques to provide a very nice interface to the MPI standard's core functionality. With Boost MPI, programs become more type safe (when sending data of a particular C++ type, the corresponding MPI data type is deduced by the compiler), and sending data given by user defined structures or classes becomes much easier than with the MPI standard's C or C++ bindings. Although Boost MPI is a huge improvement over the deprecated C++ bindings of the MPI standard, it also has its limitations:

- It is no longer actively maintained.
- Sending data of complex classes and structures is based on Boost serialization, which may cause performance reductions and does not work in heterogeneous environments (different endians etc.).
- Boost MPI provides no equivalent to derived MPI data types (strided vectors, sub matrices, etc.).
- Although Boost MPI supports the more general graph communicators, there are no functions for Cartesian communicators.
- Boost MPI is based on C++03; it does not benefit from new C++11 features.

Because C++ was dropped from the MPI standard, and because Boost MPI does not fulfill all my needs for a flexible, easy-to-use C++ message passing library, I started to write my own message passing library on top of MPI, just called MPL (Message Passing Library); see my GitHub account. Note that MPL will neither bring all functions of the C language API to C++ nor provide a direct mapping of the C API to some C++ functions and classes. Its focus is on the MPI core functions, ease of use, type safety, and elegance. It uses C++11 features wherever reasonable, e.g., lambda functions as custom reduction operations. MPL relies heavily on templates and template metaprogramming, and it comes just as a bunch of header files. Documentation is still missing and is only available in the form of the source code and a few sample programs. If you are familiar with MPI, however, the transition to MPL will not be difficult. Let us start with a hello-world type program:

#include <cstdlib>
#include <iostream>
#include <mpl/mpl.hpp>

int main() {
  const mpl::communicator &comm_world(mpl::environment::comm_world());
  std::cout << "Hello world! I am running on \""
            << mpl::environment::processor_name()
            << "\". My rank is "
            << comm_world.rank()
            << " out of "
            << comm_world.size() << " processes.\n";
  return EXIT_SUCCESS;
}

Similar to MPI_COMM_WORLD in MPI, MPL has a global communicator that contains all processes which belong to a parallel computation. Each process has a rank (the number of the process within a communicator), and each communicator has a size (the total number of processes). The program shown above just prints, for each process, its rank, the size of the world communicator and the name of the computer where the process runs.
Note that with MPL it is not required to initialize or to finalize the message passing library (MPI_Init and MPI_Finalize are called implicitly by some compiler magic). Let us look at a less trivial example and see how messages are sent and received. A very elementary example using the C language bindings of MPI and C++11 may look like this:

#include <cstdlib>
#include <complex>
#include <iostream>
#include <mpi.h>

int main(int argc, char *argv[]) {
  MPI_Init(&argc, &argv);
  int c_size, c_rank;
  MPI_Comm_rank(MPI_COMM_WORLD, &c_rank);
  MPI_Comm_size(MPI_COMM_WORLD, &c_size);
  if (c_size<2) {
    MPI_Finalize();
    return EXIT_FAILURE;
  }
  // send and receive a single floating point number
  if (c_rank==0) {
    double pi=3.14;
    MPI_Send(&pi,            // pointer to memory
             1,              // number of data items
             MPI_DOUBLE,     // data type
             1,              // destination
             0,              // tag
             MPI_COMM_WORLD  // communicator
             );
    std::cout << "sent: " << pi << '\n';
  } else if (c_rank==1) {
    double pi=0;
    MPI_Recv(&pi,               // pointer to memory
             1,                 // number of data items
             MPI_DOUBLE,        // data type
             0,                 // source
             0,                 // tag
             MPI_COMM_WORLD,    // communicator
             MPI_STATUS_IGNORE  // ignore the receive status
             );
    std::cout << "got : " << pi << '\n';
  }
  MPI_Finalize();
  return EXIT_SUCCESS;
}

Here the standard MPI functions MPI_Send and MPI_Recv are employed. The function signature requires a lot of parameters: a pointer to a buffer, the number of items to be sent or received, the data type, a source or destination, a tag, and finally the communicator. With MPL this simplifies a lot. MPL assumes that only one data item is sent or received at a time, thus the number of data items does not need to be specified. Furthermore, the underlying MPI data type will be deduced automatically at compile time by the compiler. This eliminates a typical error of MPI programs, e.g., passing a pointer to an integer while specifying MPI_DOUBLE as the data type.
The tag, which may be used to distinguish between different kinds of messages, becomes in MPL an argument with a default value, so it is optional. Thus, in MPL only the communicator, a reference to the data and a source or destination have to be given to the send and receive functions. The MPL equivalent to the MPI program shown above may look like this:

#include <cstdlib>
#include <complex>
#include <iostream>
#include <mpl/mpl.hpp>

int main() {
  const mpl::communicator &comm_world=mpl::environment::comm_world();
  if (comm_world.size()<2)
    return EXIT_FAILURE;
  // send and receive a single floating point number
  if (comm_world.rank()==0) {
    double pi=3.14;
    comm_world.send(pi, 1);  // send to rank 1
    std::cout << "sent: " << pi << '\n';
  } else if (comm_world.rank()==1) {
    double pi=0;
    comm_world.recv(pi, 0);  // receive from rank 0
    std::cout << "got : " << pi << '\n';
  }
  return EXIT_SUCCESS;
}

Of course, sending and receiving single data items will not be sufficient for a message passing library. This is why MPL introduces the concept of data layouts. Data layouts specify the memory layout of a set of data to be sent or received (similar to derived data types in MPI). The layout may be contiguous, a strided vector etc. The data layout is provided as an additional parameter to the sending or receiving functions and, in contrast to the case of single data items, data is passed via a pointer.
The following example may give an idea of how data layouts are used with MPL:

#include <cstdlib>
#include <complex>
#include <iostream>
#include <vector>
#include <mpl/mpl.hpp>

int main() {
  const mpl::communicator &comm_world=mpl::environment::comm_world();
  if (comm_world.size()<2)
    return EXIT_FAILURE;
  std::vector<double> v(8);
  mpl::contiguous_layout<double> v_layout(v.size());
  // send and receive a vector of floating point numbers
  if (comm_world.rank()==0) {
    double init=0;
    for (double &x : v) {
      x=init;
      ++init;
    }
    comm_world.send(v.data(), v_layout, 1);  // send to rank 1
    std::cout << "sent: ";
    for (double &x : v)
      std::cout << x << ' ';
    std::cout << '\n';
  } else if (comm_world.rank()==1) {
    comm_world.recv(v.data(), v_layout, 0);  // receive from rank 0
    std::cout << "got : ";
    for (double &x : v)
      std::cout << x << ' ';
    std::cout << '\n';
  }
  return EXIT_SUCCESS;
}

Addendum: Besides MPL, Boost MPI and OOMPI there is MPP, a further library that attempts to bring MPI to modern C++.

One thought on "MPL – A message passing library"

Hi, I found your library on GitHub. I really like the idea of distributed_grid. I wonder if this library supports CUDA. Additionally, it seems the data is owned by the distributed_grid. Is it possible to make it "mdspan" like? That way it would be better to manually manage memory, especially between device and host memories.
https://www.numbercrunch.de/blog/2015/08/mpl-a-message-passing-library/
My system is composed of an L298 (H-bridge to generate the square wave signal for the J1772) and an ADS1115 (an ADC to read the high level of the square wave signal). My problem starts when I execute the code: I don't get a frequency of 1 kHz for the square wave signal. I have verified that the part of the code that is constantly running takes much longer to execute than I need, especially the conditional part (about 0.05 s, when the entire loop should be done in 0.001 s). Am I looking for something impossible with my project? Do you have any idea of what I'm doing wrong? Is it a lack of code optimization? Could I do it better in another way? Here I leave the code, explained as best as possible. Thanks for your time.

Code: Select all

import RPi.GPIO as GPIO
import board
import busio
import adafruit_ads1x15.ads1115 as ADS
from adafruit_ads1x15.analog_in import AnalogIn
from time import sleep

# GPIO
GPIO.setwarnings(False)
GPIO.setmode(GPIO.BCM)

# LED/RELE
LED = 22
GPIO.setup(LED, GPIO.OUT)
GPIO.output(LED, False)

# Permiso APP
APP = 23
GPIO.setup(APP, GPIO.OUT)
GPIO.output(APP, False)

# Motor A
INA_1 = 17
INA_2 = 18
GPIO.setup(INA_1, GPIO.OUT)
GPIO.setup(INA_2, GPIO.OUT)
GPIO.output(INA_1, False)
GPIO.output(INA_2, False)

# Create the I2C bus
i2c = busio.I2C(board.SCL, board.SDA)

# Create the ADC object using the I2C bus
ads = ADS.ADS1115(i2c)

# Create single-ended input on channel 0
chan = AnalogIn(ads, ADS.P0)

# Here is my problem
try:
    while True:
        GPIO.output(INA_1, True)  # put up the square wave signal with the H-bridge
        if float(chan.voltage) > 3.58:  # my ADC doesn't read +4V, so I've scaled the values
            GPIO.output(LED, False)
            print(">>>>> STATE_A - Not Connected <<<<<")
        elif 2.56 < float(chan.voltage) <= 3.58:
            GPIO.output(LED, False)
            print(">>>>> STATE_B - EV Connected (Ready) <<<<<")
        elif (1.54 < float(chan.voltage) <= 2.56) and GPIO.input(APP):
            GPIO.output(LED, True)
            print(">>>>> STATE_C - EV Charge <<<<<")
        elif (0.51 < float(chan.voltage) <= 1.54) and GPIO.input(APP):
            GPIO.output(LED, True)
            print(">>>>> STATE_D - EV Charge (Vent. Req.) <<<<<")
        else:
            GPIO.output(LED, False)
            print(">>>>> STATE_E - ERROR <<<<<")
        # this sleep keeps the signal high 25% of the period, to control
        # the amperage through the duty cycle
        sleep(0.0002)
        GPIO.output(INA_1, False)
        GPIO.output(INA_2, True)  # put the square wave signal at -12V using the H-bridge
        sleep(0.0008)             # keep the signal at -12V for the rest of the 1 kHz period
        GPIO.output(INA_2, False)
        GPIO.output(INA_1, True)

# CTRL+C to stop
except KeyboardInterrupt:
    GPIO.output(INA_1, False)
    GPIO.output(INA_2, False)
    GPIO.output(LED, False)
    GPIO.output(APP, False)
    GPIO.cleanup()
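To see where the loop time actually goes, each piece can be timed in isolation with time.perf_counter. A rough sketch of that measurement (read_adc here is only a stand-in for chan.voltage; a real ADS1115 read over I2C through the CircuitPython stack can take several milliseconds per sample, which alone would exceed the 1 ms budget):

```python
import time

def read_adc():
    # stand-in for chan.voltage; simulates a slow I2C transaction
    time.sleep(0.005)
    return 2.9

N = 10
start = time.perf_counter()
for _ in range(N):
    read_adc()
per_read = (time.perf_counter() - start) / N
print("average read time: %.4f s" % per_read)
```

Timing the ADC read, the prints, and the GPIO calls separately shows which one dominates.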
https://www.raspberrypi.org/forums/viewtopic.php?f=32&t=242651&p=1480282
I will try to be as thorough as possible in describing the problem here, but the task at hand is a peculiar one so I may miss something. Let me know if it is unclear. So, I am working on a CFD code. I am still new to programming, so the part I have been given to work on is relatively easy: given a field of points, I am supposed to find out which points correspond to the object in the center of the field. For clarity, here's what that looks like in an x-y plot: Example. Now, the program I am working with spits out these points as a series of triangles. The points in the field are numbered from 1 to N, and each triangle is presented as a list of three of these points. I first divide these triangles into three edges apiece. To find out where the object in the center lies, in this case a wing, I need to find the edges that don't repeat, as the only place where triangles aren't touching each other is the surface of the wing. The first code I wrote to do this worked just fine, but it was naive and slow: it compared every single edge to every other edge to find matches. That's fine on a small scale, but ultimately I'll be working with data that has hundreds of thousands of points. So, I've been working on an algorithm to make things quicker.

Code:

#include <iostream>
#include <fstream>
#include <string>
#include <math.h>
#include <stdlib.h>
#include <vector>

using namespace std;

int main()
{
    // I've gotten rid of the code that reads the file and converts that data
    // into edges, to make this easier to read. I know that part works fine.

    // E = Number of triangles
    // N = Number of points
    vector<int> Newalg(9*E, 0); // 3 edges per triangle * 3 spaces per edge (two for endpoints, one to mark duplicates)
    vector<int> Newcheck(N, 0);
    int temp1 = 0;
    int temp2 = 0;

    // write all edges to Newalg:
    for (int i = 0; i < 3*E; i++)
    {
        Newalg[3*i] = Edges[2*i];
        Newalg[3*i+1] = Edges[2*i+1];
        temp1 = Edges[2*i] - 1;
        temp2 = Edges[2*i+1] - 1;
        Newcheck[temp1] += 1;
        Newcheck[temp2] += 1;
    }

    // find max instances of any one point
    int inst = 0;
    for (int i = 0; i < N; i++)
    {
        if (Newcheck[i] > inst)
            inst = Newcheck[i];
    }

    // this allows one to compare edges only with those edges in which the same point appears
    vector<int> Instances(inst*N, 0);
    temp1 = 0;
    temp2 = 0;
    for (int i = 0; i < N; i++)
        Newcheck[i] = 0; // resets vector for use in next loop

    int temp3 = 0; // placeholder value for stacking vectors
    for (int i = 0; i < 3*E; i++)
    {
        temp1 = Newalg[3*i];
        temp2 = Newalg[3*i+1];
        // check against all other cases found so far
        for (int j = 0; j < inst; j++)
        {
            if (Instances[j] == 0)
                continue;
            temp3 = Instances[inst*(temp1-1)+j] - 1;
            if ((Newalg[3*temp3] == Newalg[3*i]) && (Newalg[3*temp3+1] == Newalg[3*i+1]))
            {
                Newalg[3*i+2] = 1;
                Newalg[3*temp3+2] = 1;
            }
            temp3 = Instances[inst*(temp2-1)+j] - 1;
            if ((Newalg[3*temp3] == Newalg[3*i]) && (Newalg[3*temp3+1] == Newalg[3*i+1]))
            {
                Newalg[3*i+2] = 1;
                Newalg[3*temp3+2] = 1;
            }
        }
        // first endpoint write
        Newcheck[temp1-1] += 1; // -1 relates to the fact that the vector starts at 0
        temp3 = Newcheck[temp1-1];
        Instances[inst*(temp1-1)+(temp3-1)] = i+1;
        // second endpoint write
        Newcheck[temp2-1] += 1;
        temp3 = Newcheck[temp2-1];
        Instances[inst*(temp2-1)+(temp3-1)] = i+1;
    }

    // eliminate all rows with "1" in the third column
    temp3 = 0;
    for (int i = 0; i < 3*E; i++)
    {
        if (Newalg[3*i+2] > 0)
        {
            Newalg[3*i] = 0;
            Newalg[3*i+1] = 0;
            Newalg[3*i+2] = 0;
            temp3++;
        }
    }
    cout << "Eliminates " << temp3 << " edges" << endl;
}

I know from the naive original program that I should get 80 edges on the wing. This code produces 20658.
I've spent probably 50 hours trying to figure out where the problem is, and haven't had any luck. If you can spot my error, I will be much obliged, as this is starting to gnaw on my sanity.
http://cboard.cprogramming.com/cplusplus-programming/140150-trouble-algorithm-find-edges.html
Refactor api v2 schemas

In the glance v2 api, schemas are used to communicate the expected format and attributes of image objects, access record objects, and image tags. These schemas are specific to v2. They are not used by v1 and would likely be changed in any future version. However, they presently live in a project-global namespace, and they are served up by a single monolithic API object which is therefore required to know about all schemas. This blueprint proposes to refactor the schemas into individual objects and move them under the glance/api/v2 library umbrella. To make room, other libraries in glance.api.v2 may need to be moved around as well.

Move the particulars of v2 schemas under v2

Addressed by: Add image access records schema for image resources

Dependency tree

* Blueprints in grey have been implemented.
https://blueprints.launchpad.net/glance/+spec/api-v2-refactor-schemas
! Evan Caballero Ranch Hand 60 2 Threads 0 Cows since Dec 10, 2009 Merit Badge info Cows and Likes Cows Total received 0 In last 30 days 0 Total given 0 Likes Total received 1 Received in last 30 days 0 Total given 0 Given in last 30 days 0 Forums and Threads Scavenger Hunt Ranch Hand Scavenger Hunt Number Posts (60/100) Number Threads Started (2 (60/10) Number Threads Started (2/10) Number Likes Received (1/3) Number Likes Granted (0/3) Set bumper stickers in profile (0/1) Set signature in profile Set a watch on a thread Save thread as a bookmark Create a post with an image (0/1) Recent posts by Evan Caballero My first app on android market - aTrip Hi there, I'm proud to present you my first application : aTrip ! ! show more 10 years ago Blatant Advertising MVN build error : could not copy webapp classes Try to "uncheck" the "Resolve Workspace Artifacts" checkbox in the maven run configuration under eclipse (assuming that you use the maven eclipse plugin) show more 10 years ago Other Build Tools how to focus input field in jsp without using javascript there is no alternative. you have to you javascript show more 10 years ago HTML Pages with CSS and JavaScript Managing Properties files If you don't want to use env variables, symbolic links under unix can do the trick. show more 10 years ago Other Build Tools Managing Properties files you can use an environment variable for that. Imagine that your property files are in the /var/myapp/cfg directory on your system. Instead of harcoding this path in your app, declare an env variable like : MYAPP_CFG_PATH=/var/myapp/cfg To retrieve this value, use System.getEnv("MYAPP_CFG_PATH"); Then, if you have different configuration files for different environments (dev, prod etc ...) 
you can use a trick like this one: declare an env variable for each environment:

MYAPP_PROD_CFG_PATH=/var/myapp/prod/cfg
MYAPP_DEV_CFG_PATH=/var/myapp/dev/cfg

and, before you deploy your application, just switch from one to the other like this:

export MYAPP_CFG_PATH=$MYAPP_PROD_CFG_PATH
or
export MYAPP_CFG_PATH=$MYAPP_DEV_CFG_PATH

This technique is great because it also works on Windows. Instead of /var/myapp/prod, you can put D:\var\myapp\prod in the env variable. Remember, on Windows, env variable values are accessed as %MYAPP_PROD_CFG_PATH% instead of $MYAPP_PROD_CFG_PATH on Unix; but from the Java side it is read the same way: System.getenv("MYAPP_CFG_PATH").

NB: this also works for the JAVA_HOME variable. If you use your server to deploy apps with both JDK 1.5 and JDK 1.6, declare env variables like these:

JDK5=/opt/java/jdk5u17
JDK6=/opt/java/jdk6u22
JAVA_HOME=$JDK6 or JAVA_HOME=$JDK5

With all of this, the only thing hardcoded in your application is the name of the env variable. I hope this will help you ;) (10 years ago, in Other Build Tools)

Why we need constructor in Abstract Class ??
An abstract class can contain attributes that will be inherited by all its subclasses, for example. So declaring a constructor enables you to initialise all these attributes in the superclass, and not in each constructor of your subclasses.
public abstract class MyAbstractClass {
    protected int att1;
    protected String att2;

    public MyAbstractClass(int a1, String a2) {
        this.att1 = a1;
        this.att2 = a2;
    }
}

// sub class
public class MyConcreteClass extends MyAbstractClass {
    private Object att3;

    public MyConcreteClass(int a1, String a2, Object a3) {
        // call the MyAbstractClass constructor
        super(a1, a2);
        this.att3 = a3;
    }
}

(10 years ago, in Java in General)

doubt in List.indexOf()
In fact, when you do Arrays.asList(i), it returns a list of int[] (a List<int[]>) with one element, like [{1, 2, 3, 4, 5, 6, 7, 8, 9}]. So there is no element at index 4; there is only an element at index 0, which is your original int[]. The asList method considers that you want to build a list from one int[], which IS an object. It is as if you were doing Arrays.asList(i1, i2, i3, i4), where i1, i2, i3 and i4 are int[] objects. (10 years ago, in Java in General)

help needed in copying an Array
I made a few changes to your code, but I'm not sure this is what you want:

import java.util.ArrayList;
import java.util.Arrays;

public class Test {
    public static void main(String[] args) {
        Test test = new Test();
        Object[] obj = test.getData();
        System.out.println(Arrays.asList(obj));
    }

    public Object[] getData() {
        ArrayList list = new ArrayList();
        list.add("One");
        list.add("two");
        return list.toArray();
    }
}

(10 years ago, in Beginning Java)

what is an another way of getting the form parametrs from jsps?
And so, what is the final answer to the original question? (10 years ago, in Servlets)

doubt in List.indexOf()
Here it is:

public static void main(String[] args) {
    Integer[] i = { 1, 2, 3, 4, 5, 6, 7, 8, 9 };
    List list = Arrays.asList(i);
    int length = list.indexOf(new Integer(4));
    int length2 = list.indexOf(4);
    System.out.println(length);
    System.out.println(length2);
}

I think that when the JVM autoboxes int to Integer, the Integer.equals() method is ignored or something like that.
(10 years ago, in Java in General)

Collection class to be used when behavior needed as LIFO
To use a LinkedList as a LIFO queue, call the addFirst and removeFirst methods only ;) (10 years ago, in Beginning Java)

Collection class to be used when behavior needed as LIFO
The LinkedList implementation of Collection has the methods addFirst, addLast, removeFirst and removeLast. I think you can use it to implement a LIFO queue. (10 years ago, in Beginning Java)

Freemarker loading templates from resources
If you try to load a file that is actually on the classpath of your application, for example in the classes directory, you can use Thread.currentThread().getContextClassLoader().getResourceAsStream("path/to/your/file/in/classpath"). Or, if you don't want a headache with paths under different OSes, you can get the absolute, platform-dependent path this way: new File("/some/path").getAbsolutePath(). A third trick: the static attribute File.separator varies by platform; it is initialized at the start of the JVM as "/" under Unix and as "\" under Windows. Hope this helps ;) (10 years ago, in I/O and Streams)

Exception error in a program
Perhaps there is a mistake here: fis = new FileInputStream(args[0]); If you do not pass any arguments to your program, the args array will be of length 0, so there won't be any element at index 0. (10 years ago, in Java in General)

Set a property file to the classpath programmatically
Perhaps you could use a -Djava.something parameter while launching your application. (10 years ago, in Java in General)
https://www.coderanch.com/u/219240/Evan-Caballero
- Author: jezdez
- Posted: January 31, 2008
- Language: Python
- Version: .96
- Tags: settings, import, reusable apps, applications
- Score: 3 (after 5 ratings)

Use this snippet at the end of your main settings.py file to automagically import the settings defined in each app of INSTALLED_APPS that begins with APPS_BASE_NAME. Set APPS_BASE_NAME to the base name of your Django project (e.g. the parent directory) and put settings.py files in every app directory (next to the models.py file) you want to have local settings in.

# works in the Django shell
>>> from django.conf import settings
>>> settings.TEST_SETTING_FROM_APP
"this is great for reusable apps"

Please keep in mind that the imported settings will overwrite preceding settings of the same name, e.g. when you use the same setting name in different applications. Props to bartTC for the idea. An app that wants an overridable default can read it like this:

from django.conf import settings
TEST_SETTING = getattr(settings, "TEST_SETTING", "default value")
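The mechanism the snippet description implies can be sketched roughly like this (a simplified stand-in, not the snippet's actual code; names such as `import_app_settings` and the registered fake module are assumptions for illustration):

```python
import importlib
import sys
import types

APPS_BASE_NAME = 'myproject'          # assumed project package prefix
INSTALLED_APPS = ['myproject.blog']   # assumed app list

def import_app_settings(namespace):
    """Copy upper-case names from each matching app's settings module into
    `namespace`; later apps overwrite earlier ones, as the note above warns."""
    for app in INSTALLED_APPS:
        if not app.startswith(APPS_BASE_NAME):
            continue
        try:
            mod = importlib.import_module(app + '.settings')
        except ImportError:
            continue
        for name in dir(mod):
            if name.isupper():
                namespace[name] = getattr(mod, name)

# Stand-in for a real app's settings.py, registered just for demonstration:
fake = types.ModuleType('myproject.blog.settings')
fake.TEST_SETTING_FROM_APP = "this is great for reusable apps"
sys.modules['myproject.blog.settings'] = fake

settings_ns = {}
import_app_settings(settings_ns)
```

In the real snippet the target namespace is the settings module's own `globals()`, which is why the imported values become visible through `django.conf.settings`.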
https://djangosnippets.org/snippets/573/
What to Expect from Yii 2.0
By Arno Slatius.

A tiny bit of history

The first version of Yii became popular quite fast after it was released in 2008. Its founder, Qiang Xue, previously worked on the Prado framework and used experience and feedback from that to build Yii. Yii borrows many ideas from other frameworks, languages and libraries: Prado, Ruby, jQuery, Symfony and Joomla are all acknowledged as sources of inspiration. The first commits for Yii 2.0 date back to 2011, but development picked up last year. The team did a rewrite with the aim of making it the state-of-the-art new-generation PHP framework. It adopts the latest technologies and features, such as Composer, PSR, namespaces, traits, and more. Something worth mentioning: according to the download page, Yii version 1.1 support will end on December 31, 2015, so we do get some time to start thinking about making the transition.

Installation

Yii is now installable from Composer. We'll go through this installation method soon. Currently, there are two application examples available. There is a basic example containing a few pages, a contact page and a login page. The advanced example adds a separate front and backend, database interaction, signup and password recovery.

Getting started

I'll start with the basic example. If you've looked at Yii before, you'll recognize the same basic webapp that Yii 1.1 came with. Install the basic example with Composer using the following command:

composer.phar create-project --prefer-dist --stability=dev yiisoft/yii2-app-basic

You can then check whether your server meets the requirements by opening the requirements check page. The actual application will then run from the web directory. This is the first important thing to notice: the idea is that you set the document root of your application to /path/to/application/web, much like with Symfony. The directory layout changed a bit from version 1.1. If you look closely, the change makes sense and will improve the security of your application.
Previously, all the application components (models, views, controllers, framework and vendor libraries) would live under the document root in the protected folder. That way security depended on .htaccess files being respected, which meant your application was 100% insecure by default on Nginx. Moving all the application components away from the document root prevents the web server from serving your application components to a user. You might find yourself looking for the actual framework sources. The framework is a component that was installed using Composer, so it resides under the vendor\yiisoft\yii directory. Here you'll find a lot more, but for now, we'll just leave it at that. For now, let's change the local web server configuration and set the document root to /path/to/application/web. I added a virtualhost, but do as you see fit for your own situation. The default configuration is set to hide the script file in the URL. If you're using Apache, you'll need to add an .htaccess file to the web directory to instruct Apache to do rewriting; it's not there by default:

RewriteEngine on
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . index.php

A look at the basic Yii application

Now that we have the basic application running, some congratulations are in order... Thanks! No rocket science so far. You'll start with a home page, a static about page, a contact page and a login page. The contact page and login form have the same functionality available as before: captcha code, form validation and two users available for logging in. Logging in does the same as before: close to nothing. Still, it is a good start. The design of the basic application changed dramatically. Previously you'd get an application built on the Blueprint CSS framework, whereas now we start off with Twitter Bootstrap. Improvement? It probably is compared to Blueprint, but then again Bootstrap is a lot more than Blueprint ever tried to be.
Bootstrap will give you all sorts of application components and will speed up building an application. Some might argue, on the other hand, that all sites look the same with Bootstrap (themes only partially fix this) and that it also makes your site larger size-wise. Either way, the integration with Yii 2.0 is done with the yii2-bootstrap extension. This makes it very easy to integrate the Bootstrap components in your views. Another thing you'll notice is the debug bar at the bottom. It is installed and activated by default, just like in Symfony. It allows quick access to loads of information about your configuration, requests and application logging. It also keeps a history of requests with debug information. Yii handles errors differently than PHP normally would: it converts all errors (fatal and non-fatal) to exceptions. Those are handled by rendering insightful output pointing you towards the point where you messed up or your code generated a notice. Even parse errors, for which Yii 1.1 would fall back to the basic PHP errors, get a nicely rendered overview of your code. This is something most of us will appreciate. Gii is also present again and activated by default. Gii will help you by generating code for you to start with; another great tool to help speed up your development. It will generate models and controllers for you. The CRUD generator goes one step further and generates a complete MVC set for all the actions. Gii will also generate code better suited for internationalization (i18n) by immediately inserting the Yii::t() function where you'll need it. The basic application now also comes with a simple command line application which you can build upon. Yii 1.1 already supported this, but you'd have to get an example from the Wiki; now you'll find one in the basic application. There is also an advanced application example available.
It has a somewhat different structure but adds even more functionality to your application out of the box:

- User authorization, authentication and password restore.
- An application split into a front and backend.

Continuing with the basic version, let's take a closer look and dive into the code...

What changed?

A lot has changed. Some changes might confuse you at first, but I find most changes make sense and are easy to accept. Here are some of the changes that I found interesting, fun or puzzling. The PHP 5.4 requirement made some changes possible: the short array syntax is available, and it's also safe to use the echo short tags in views because that no longer depends on configuration settings.

<?php
$elements = array(1,2,3,4); // Yii 1.1
$elements = [1,2,3,4];      // Yii 2.0
?>

<?php echo $someVar; ?> // Yii 1.1
<?= $someVar ?>         // always safe to use in Yii 2.0

A small change, but one you'll run into fast: before, you'd use Yii::app() to access the application instance and its components. In Yii 2.0 this changed from a static function to a static variable, Yii::$app. The translate function Yii::t() is still with us. It instructs Yii to use the i18n component to translate the supplied text to the current language. You can also instruct it to substitute variables:

<?php
echo Yii::t('app', 'Hello, {username}!', [
    'username' => $username,
]);
?>

The placeholder formatting and styling has been seriously reworked, allowing for more formatting options. Some examples:

<?php
echo \Yii::t('app', '{n, number} is spelled as {n, spellout}', ['n' => 81]);
echo \Yii::t('app', 'You are {n, ordinal} in line, please hold.', ['n' => 3]);
// Will echo "You are 3rd in line, please hold."
echo \Yii::t('app', 'There {n, plural, =0{are no cats} =1{is one cat} other{are # cats}}!', [
    'n' => 14,
]);
?>

Because of this placeholder formatting, the DateTimeFormatter is gone:

<?php
// Previously, in Yii 1.1
Yii::app()->dateFormatter->formatDateTime(time(), 'medium', 'medium');

// In Yii 2.0
echo \Yii::t('app', 'The date is {0, date, short}', time());      // uses the pre-defined 'short' notation (i18n safe)
echo \Yii::t('app', 'The date is {0, date, YYYY-MM-dd}', time()); // or define your own notation
?>

This functionality is supplied by the ICU library. The Yii documentation describes the original documentation for this as "quite cryptic". I dare you to read it and try to understand it... Let's hope the Yii documentation includes a more readable version in time.

Controllers

Before, accessControl() would be a function of your controller if you wanted to use the Yii access control functionality. With Yii 2.0, access control is part of the controller's behaviors():

<?php
public function behaviors()
{
    return [
        'access' => [
            'class' => AccessControl::className(),
            'only' => ['logout', 'login', 'signup'],
            'rules' => [
                [
                    'allow' => true,
                    'actions' => ['logout'],
                    'roles' => ['@'],
                ],
                [
                    'allow' => true,
                    'actions' => ['login', 'signup'],
                    'roles' => ['?'],
                ],
            ],
        ],
    ];
}
?>

This is almost identical to the way it was in Yii 1.1. I did notice that the example code (not the framework itself!) is missing many docblocks and has a lot of @inheritdoc comments. This isn't what you'd expect from an example, but I assume that this will be fixed in time.

Models

The basic model (previously CModel) didn't change much. Scenarios now let you change the enforcement of validation rules: you can change what needs to be validated based on your current scenario (i.e. a model with different rules when used from the front or backend). The derived ActiveRecord underwent some serious changes, though. The syntax for searching with ActiveRecord became more like writing queries, because CDbCriteria is gone.
It has been replaced by ActiveQuery, making retrieving information easier:

<?php
$authors = Authors::find()
    ->where(['sitepointChannel' => $channel])
    ->orderBy('lastName')
    ->all();
?>

Relations definition also changed dramatically. Let's take, for example, a site with bloggers that post articles on which users comment. The relations definition for the authors table is described below. I'll start with how it looked in Yii 1.1:

<?php
// Define the relations
public function relations()
{
    return array(
        'posts' => array(self::HAS_MANY, 'Posts', 'authorID'),
        'comments' => array(self::HAS_MANY, 'Comments', array('ID'=>'PostID'), 'through'=>'posts'),
    );
}

// Querying an author with posts and comments
$activity = Author::model()->with(array('posts','comments'))->find('fullname = "Arno Slatius"');
$posts = $activity->posts;
$comments = $activity->comments;
?>

As you can see, you'd define all the relations of an ActiveRecord in one large array. In Yii 2.0 you define getter methods that return an ActiveQuery object for each of those relations. You'd use the keyword 'through' in a relation to define a relation via an intermediary table. You now have two options to define this: normally you'd use the via() method in a relation function, but you can also define the relation using the viaTable() method if you only need the data in the table behind the pivot table.
The same example as above, but now for Yii 2.0:

<?php
// Define relations by creating getter functions
public function getPosts()
{
    return $this->hasMany(Posts::className(), ['authorID' => 'ID']);
}

public function getComments()
{
    return $this->hasMany(Comments::className(), ['ID' => 'PostID'])
        ->via('posts');
}

// If you only need comments, you can define the relation at once:
public function getComments()
{
    return $this->hasMany(Comments::className(), ['ID' => 'PostID'])
        ->viaTable(Posts::className(), ['authorID' => 'ID']);
}

// Querying an author with posts and comments
$activity = Authors::findOne(['fullname' => 'Arno Slatius']);
$posts = $activity->posts;
$comments = $activity->comments;
?>

This is a rather simple example. Defining the relations through getter functions that return ActiveQuery objects allows for much more. You can, for instance, add a specific function that queries for posts that get more than 50 comments by adding a where() call to the returned ActiveQuery. An interesting addition is the possibility to define cross-DBMS relations. You can define relations between, for instance, MySQL and MongoDB, or Redis, and use them in your application as one object.

Views

The main thing to note in views is that $this doesn't refer to the controller instance anymore. In a view, $this is an instance of the yii\web\View object. The controller is accessible through $this->context. As I said before, PHP 5.4 makes the short echo tag consistently available. This makes views, which consist of mixed PHP and HTML, more readable:

<h1><?= Html::encode($this->title) ?></h1>

The render() and renderPartial() functions changed as well. Before, they would echo the rendered output automatically and you'd have to add an extra parameter to get the rendered output as a string. Yii 2.0 always returns a string from render()-like calls, making them more consistent with the way widgets behave.

Upgrading from Yii 1.1

Should you consider upgrading your Yii 1.1 application to Yii 2.0 in time?
Bruno Škvorc recently wrote about legacy code here on SitePoint. He argues that a rewrite that can be done in 2 months should be considered, especially if the software you're using is business critical. I agree with him, and would suggest you consider it if you feel seriously about your application and want to maintain it beyond the end of life of Yii 1.1. But as always, it depends on your situation. There's a special page dedicated to upgrading Yii on the Yii website. The biggest problem, for now, is extensions. If you rely on a lot of extensions, you'll have a hard time, because it will take some time for the community to take up (re)writing the extensions for Yii 2.0. If you're a real pro, you could of course take a serious look at the extensions you're using and consider (re)writing them yourself. The migration manual has a section on running Yii 1.1 and Yii 2.0 together in one application. For large projects this is a good way to create a safe migration path: migrate your generic code to Yii 2.0 and take your time on the more complex or extension-filled parts.

Conclusion

Going over The Definitive Guide to Yii 2.0 gets me more and more enthusiastic to get started with Yii 2.0. I already had to stop myself from using it in a new project because I couldn't risk problems with pre-production code.
https://www.sitepoint.com/expect-yii-2-0/
If you have used Visual Studio for a while, you have probably run across some of the standard code snippets; you may even be using them intentionally all the time. Today we are going to look first at how to use snippets in your code (really easy to do), and then we are going to take a look at how you can write your own code snippets. It is in writing your own code snippets that you really start to see the power of the code snippet system, and hopefully after today's tutorial you can go off and streamline your own workflow by writing a couple of snippets for yourself. So first, let's take a look at adding existing snippets to code. There are a number of ways to do this in Visual Studio, the most common of which is to use regular IntelliSense. For instance, below we are about to add a for loop snippet:

[Image: IntelliSense dropdown]

[Image: Visual Studio context menu]

You can also get to a menu of just snippets by using the right-click menu and choosing the option "Insert Snippet". And, of course, if the mouse is too much work for you, there is a keyboard shortcut that will bring up this menu directly: ctrl-k + ctrl-x. Below, you can see the menu in action as we choose again to insert the for loop snippet.

[Image: IntelliSense for snippets]

You might be wondering "why would I want to use a for loop snippet?", because after all, a for loop really isn't that complex to write. Well, what it does is give you the full for loop framework with only a couple of keystrokes. This is what inserting the for loop snippet will give you:

[Image: Example snippet insert]

If you're not convinced that saved you that many keystrokes, there's another handy feature. See how the first "i" is highlighted and the second two are surrounded by dotted lines? That means that those variables are supposed to be the same, which means that if you edit the first "i" right after you insert the code snippet, the other "i"s change to the new name automatically.
That 'linkage' isn't kept around forever: as soon as you go and edit something other than the snippet, those dotted lines go away and any changes that you make later on don't automatically get propagated. But pretty cool, eh? Another handy feature of snippets is that for many of them you not only have the ability to "insert" them, you also have the ability to surround other code with them. For instance, if you have highlighted some code and right-clicked, you can pick "Surround With..." from that right-click menu (it is right below "Insert Snippet" in the right-click menu pictured above). Then only the code snippets that can do a "surround" operation will appear, and you can pick one and add it. And surround does exactly what you might expect; for instance, in the case of the for loop, it will surround the code you had highlighted with the for loop structure pictured above. The keyboard shortcut for this "Surround With" action is ctrl-k + ctrl-s. OK, enough on using code snippets; let's try to write one! For this tutorial, we are going to create a really simple snippet for performance timing. It will work as both an "insert" and a "surround" code snippet, and this is the code that it produces:

[Image: Our snippet example]

All it does is surround a block of code with a line at the start that captures the start time, and a line at the end that prints out the difference between the start time and the current time. There are two blocks that are set as editable: first, the name of the startTime variable (which will also change the startTime reference at the end of the snippet), and second, the "My function" part of the string, so that you can put in a more descriptive name of exactly what you are timing. So how do we do this? Well, all a code snippet is is an XML file, and a relatively simple one at that.
Here is the basic outline of a code snippet XML file:

<?xml version="1.0" encoding="utf-8" ?>
<CodeSnippets xmlns="">
  <CodeSnippet Format="1.0.0">
    <Header>
    </Header>
    <Snippet>
    </Snippet>
  </CodeSnippet>
</CodeSnippets>

Pretty standard initial XML declaration stuff. There is a CodeSnippets block that can contain one or more CodeSnippet blocks. Each CodeSnippet block has a Header block (for stuff like title and author) and a Snippet block (for the actual snippet code). Below is an example Header block:

<Header>
  <Title>Simple Performance Timing</Title>
  <Shortcut>Timing</Shortcut>
  <Description>
    Code snippet for surrounding a section of code with some performance timing.
  </Description>
  <Author>Switch On The Code</Author>
  <SnippetTypes>
    <SnippetType>SurroundsWith</SnippetType>
    <SnippetType>Expansion</SnippetType>
  </SnippetTypes>
</Header>

Nothing crazy here: we give the snippet a title, a shortcut (which is what shows up in all the IntelliSense menus), a description and an author. With the SnippetTypes block, you declare what type(s) this snippet supports; in this case, it can do both "SurroundsWith" and "Expansion" (i.e. insertion). So on to the actual definition of the snippet:

<Snippet>
  <Declarations>
    <Literal>
      <ID>startTime</ID>
      <ToolTip>Beginning Time Variable</ToolTip>
      <Default>startTime</Default>
      <Type>long</Type>
    </Literal>
    <Literal>
      <ID>message</ID>
      <ToolTip>Description of the timed section</ToolTip>
      <Default>My function</Default>
    </Literal>
    <Literal Editable="false">
      <ID>debug</ID>
      <Function>SimpleTypeName(global::System.Diagnostics.Debug)</Function>
    </Literal>
  </Declarations>
  <Code Language="csharp">
    <![CDATA[long $startTime$ = DateTime.Now.Ticks;
$selected$
$debug$.WriteLine("$message$: " + (DateTime.Now.Ticks - $startTime$));$end$]]>
  </Code>
</Snippet>

There are two main sections here, Declarations and Code. It is in the declarations that we can declare the special variables that appear in multiple places in the code (like startTime), or variables that just need to be edited when the snippet appears (like the "My function" string). As you can see, the first literal that we declare is startTime: we give it a name to refer to it by in the snippet (in this case "startTime"), a tooltip that will appear for the user describing what the variable is, and a default value. We can then use this literal down inside the code snippet itself, referring to it as $startTime$.
The next literal is just as simple: it is the placeholder for the "My function" string, and we call it message. You can see it down in the code, in the output string, referred to as $message$. The third literal is kind of weird. It is not editable by the user (declared by the Editable="false" attribute), and it has a Function tag. What this allows us to do is have the snippet figure out automatically whether it should output the code using the full reference System.Diagnostics.Debug or just the shorthand Debug. If you already have a using System.Diagnostics; statement in your code, then there is no need for the full declaration, and the snippet realizes this. The work is actually done by that SimpleTypeName function in the Function tag. You can read more about this and the other functions available for use in snippets in the MSDN docs. And that brings us down to the actual code inside the Code tag. Most of this is the actual code that will appear after insertion, and we already explained what $startTime$ and $message$ are. There are two other oddities here: $selected$ and $end$. They are both reserved identifiers in code snippets. $selected$ represents whatever the user had highlighted when they decided to do a "Surround With" snippet, and $end$ signifies where the caret should go after the user finishes inserting the snippet. And that is about it for defining a code snippet. There are, of course, a number of other optional attributes and tags that we did not hit on with this simple snippet, but the MSDN docs do a pretty good job of explaining them. Below is the entire timing code snippet as a single block:

<?xml version="1.0" encoding="utf-8" ?>
<CodeSnippets xmlns="">
  <CodeSnippet Format="1.0.0">
    <Header>
      <Title>Simple Performance Timing</Title>
      <Shortcut>Timing</Shortcut>
      <Description>
        Code snippet for surrounding a section of code with some performance timing.
      </Description>
      <Author>Switch On The Code</Author>
      <SnippetTypes>
        <SnippetType>SurroundsWith</SnippetType>
        <SnippetType>Expansion</SnippetType>
      </SnippetTypes>
    </Header>
    <Snippet>
      <Declarations>
        <Literal>
          <ID>startTime</ID>
          <ToolTip>Beginning Time Variable</ToolTip>
          <Default>startTime</Default>
          <Type>long</Type>
        </Literal>
        <Literal>
          <ID>message</ID>
          <ToolTip>Description of the timed section</ToolTip>
          <Default>My function</Default>
        </Literal>
        <Literal Editable="false">
          <ID>debug</ID>
          <Function>SimpleTypeName(global::System.Diagnostics.Debug)</Function>
        </Literal>
      </Declarations>
      <Code Language="csharp">
        <![CDATA[long $startTime$ = DateTime.Now.Ticks;
$selected$
$debug$.WriteLine("$message$: " + (DateTime.Now.Ticks - $startTime$));$end$]]>
      </Code>
    </Snippet>
  </CodeSnippet>
</CodeSnippets>

But wait! You might be wondering how you actually add a code snippet for Visual Studio to use after you have created it. You will want to go up to the Tools menu and choose Code Snippets Manager:

[Image: Visual Studio Tools menu]

Once you click on that, you get the Code Snippets Manager dialog:

[Image: Visual Studio Code Snippets Manager]

And from here it is pretty self-explanatory: you can import snippet files (using the aptly named "Import" button), or you can add a folder's worth (in which case, as you add and remove snippets from that folder, they will appear and disappear from the available snippets). I hope you've enjoyed this introduction to Visual Studio snippets. There are a number of resources out there for finding code snippets that other people have written, such as Got Code Snippets.

Comments

A great write-up, very helpful in getting some useful snippets working in Visual Studio. I notice you even know what you are doing when it comes to image formats: you saved your screenshots as PNG rather than JPG and turned off anti-aliasing. You, sir, know what you are doing.

This is a top quality tutorial. Thank you!

Remark: a short way of adding a snippet to your code is to type the name of the snippet and then press the TAB key twice. E.g., to insert the for loop snippet, type for + TAB + TAB in your code.

This article explains more than the MS article.
This explains every bit of code snippets. Good resource!

I wrote up some free snippets for C#; you can get them at:

Perfect article, very nice work.

Thanks for this tutorial; I was searching for $selected$ and $end$, and at last I have found them here. Is this documented anywhere by Microsoft? I didn't find it in the MSDN library.

Thanks a lot, very helpful; I'm now snippeting away!

Really simple and cool tutorial. Works great for PHP as well :)

I wanted to read something that would make me aware of the capabilities and uses of snippets, and this has done the trick and more! Thanks for your efforts.

I landed here through Google (q='C# code snippets tutorial') and I wanted to thank you: this short tutorial showed all the stuff I was looking for.
http://tech.pro/tutorial/709/csharp-tutorial-visual-studio-code-snippets
Contributed by Paul Grech. He took the NYC Data Science Academy 12-week full-time Data Science Bootcamp program between Sept 23 and Dec 18, 2015. The post was based on his third class project (due in the 6th week of the program).

Along with being an electrical engineer and now attending the NYCDSA 12-week Data Science bootcamp to become a data scientist, I am also co-owner of a wedding and event coordination company with my wife, known as LLG Events Inc. As you can imagine, my weeks are filled with exploring fun and interesting data sets and my weekends are filled with running lavish events, so at some point during the bootcamp I sat back and thought to myself... hmm, wouldn't it be nice to combine the two! Inspired to combine these two elements of my life, I examined all the challenges that my wife and I face as business owners and realized that the most difficult part of a service-oriented business is how to meet clients. We do not believe in costly advertising that deems vendors "Platinum Vendors" just because they spend the most; we use our reputation to sell us, but this also has its limitations... How do we meet new clients outside of our network? How do we expand our network? Fortunately and unfortunately, we are IMPATIENT, so how do we speed up the process?! With this challenge ahead, a lightbulb went off. I remembered, during the planning process for my own wedding, how my wife and I designed a fun and simple website with basic information about us and our event, hosted by The Knot... and then it clicked! Why don't I web scrape The Knot for all of these websites and reach out to those people as potential new clients? Many of our clients didn't even know they wanted a planner until they were stressed out, so why not help them out ahead of time! Well, The Knot contains greater than 1 million wedding websites between previous and future dates.
My plan was to scrape all user information in Python using BeautifulSoup so that I could get information such as location, date and wedding website URL, in order to gain some insight on the wedding industry as a whole and hopefully find some potential clients. Seems easy enough, right... Just create a couple of for loops to loop through all possibilities of First Name: 'A-Z A-Z' and Last Name: 'A-Z A-Z'...

# Create iterable list of first and last names (2 letters)
def iterate(let):
    first_names = []
    last_names = []
    for i in range(len(let)):
        for j in range(len(let)):
            for k in range(len(let)):
                for l in range(len(let)):
                    first = let[k] + let[l]
                    last = let[i] + let[j]
                    if first == 'aa' or first[0] != first[1]:
                        first_names.append(first)
                        last_names.append(last)
    return (first_names, last_names)

...then pull the results from The Knot's search utility, as seen below, into a database or even a .csv file. What could possibly go wrong? Well, the question wasn't what could go wrong but how much would go wrong. The answer to that is A LOT! As I learned, web scraping is an interesting task because one small change to a website could mean rewriting your entire script. Below I outline some of the main challenges that I faced throughout this process. Let me first start by saying Google developer tools are amazing! You can select any component of a webpage and it shows you the exact HTML responsible for rendering that portion. Therefore there is a two-part solution to the embedded JavaScript issue. First I had to recognize the issue and understand why the data was not in the HTML. The answer... EMBEDDED JAVASCRIPT! Well, there goes my scraping idea. However, fear not, this is actually good news. I used the Chrome developer tools to find The Knot's API call. In order to do so, I entered one search into the "couple search" and watched the Network tab of Chrome developer tools.
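As a quick sanity check on the name generator above (a sketch added here, not part of the original post), running it on a tiny two-letter alphabet makes the repeated-letter filtering easy to eyeball:

```python
# Re-run the iterate() generator on a tiny alphabet to check the filtering.
def iterate(let):
    first_names = []
    last_names = []
    for i in range(len(let)):
        for j in range(len(let)):
            for k in range(len(let)):
                for l in range(len(let)):
                    first = let[k] + let[l]
                    last = let[i] + let[j]
                    # keep 'aa' (e.g. Aaron) but drop other repeated letters
                    if first == 'aa' or first[0] != first[1]:
                        first_names.append(first)
                        last_names.append(last)
    return (first_names, last_names)

fn, ln = iterate(['a', 'b'])
print(len(fn))     # 12 -- 3 first names ('aa', 'ab', 'ba') x 4 last names
print('bb' in fn)  # False -- repeated-letter first name filtered out
```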
From there, I was able to find the API call to the database, and instead of web scraping, my Python project turned into an API call with a JSON return. Using pandas' json_normalize actually made life a bit easier.

# Pull request delay
time.sleep(1)
path = ''+fn[iter]+\
    '&lastName='+ln[iter]+\
    '&eventMonth='+month+\
    '&eventYear='+year+\
    '&eventType=Wedding&reset=true&track=true&limit=20&offset=0'
request = Request(path)
response = urlopen(request)
data = response.read()
data_json = json.loads(data)

# Json to DF
couples = json_normalize(data_json['Couples'])

Before encountering this problem, I had to first push the limits. Because of my laziness, I did not want to wait for all iterations of first and last name to process. Let's estimate this (first name = fn, last name = ln, L = letter):

26 (fnL1) * 26 (fnL2) * 26 (lnL1) * 26 (lnL2) = 456,976 possibilities

Now let's remove all the repeated letters, since no real names start with those (i.e. bb, cc, dd etc...) except for aa, since you can have names such as Aaron.

456,976 - 16,900 = 440,076 combinations, at 1 per second, is 440,076 seconds
440,076 seconds / 60 sec.per.min / 60 min.per.hr / 24 hr.per.day ~ 5 days

5 days is way above my patience level, so what next... well, let's make 10 different files that each run specific combinations. In doing so, yes, I reached the rate limit and my instances would eventually be shut down. In order to counter this, a simple delay was added, as can be seen in the above code. As can be seen below, we had bins that would change depending on the index of the file. These bins would create the list of first name and last name combinations that would be put into the API call. Also, in case the script shut down, each instance outputs a .csv file with the current data at 25% intervals.
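The back-of-the-envelope estimate above is easy to verify in a few lines (a quick sketch; the numbers come directly from the reasoning in the post):

```python
# Verify the combination-count and runtime estimate above.
letters = 26
total = letters ** 4                    # all two-letter first + last names
# repeated-letter first names (bb..zz, 25 of them) paired with any last name
repeated = (letters - 1) * letters ** 2
combos = total - repeated
print(total)   # 456976
print(combos)  # 440076

# At one request per second:
days = combos / 60 / 60 / 24
print(round(days, 2))  # ~5.09
```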
# API first name / last name parameters:
# Create letter bin depending on file designator (0-9)
letters_bin = len(fn) / 10
start_letters = file * letters_bin
if start_letters + letters_bin - 1 > len(fn):
    end_letters = len(fn)
else:
    end_letters = start_letters + letters_bin - 1
fn = fn[start_letters:end_letters]
ln = ln[start_letters:end_letters]

# Create bins for output
length = len(fn)
cbin = 0
bin_size = len(fn) / 4 - 1
bin1 = 0 + bin_size
bin2 = bin1 + bin_size + 1
bin3 = bin2 + bin_size + 1
bin4 = length - 1
bins = [bin1, bin2, bin3, bin4]

It's kind of hard to test a script that needs about a day or so to run, so the code was broken into run and test modes as seen below:

if test == 'test':
    # Test Parameters
    letters = ['a', 'b']
    (fn, ln) = iterate(letters)
    month = '0'
    year = '2016'

    # Initialization print outs
    print "First Name: ", fn
    print "Last Name: ", ln
    print "Month: ", month
    print "Year: ", year

    # Save data once complete
    length = len(fn)
    bins = [length-1]
    cbin = 0
else:
    # API Time parameters:
    letters = list(string.ascii_lowercase)
    (fn, ln) = iterate(letters)
    month = '0'
    year = '0'

One of the best practices I learned from my time as an electrical engineer in the defense industry, whether doing development or software verification, was to make your code readable, scalable and testable. This project was a true testament to how useful it is to write your code in sections that can be individually run and verified. These two go hand in hand, because error handling adds robustness to your code that allows it to continue running even if errors are occasionally returned. Oftentimes there would be no results returned from an API call, so in order to speed up the process, error handling was included that checked for a no-result response and would skip the rest of the script and simply move on to the next iteration.
try:
    data_json['ResultCount'] > 0
except:
    couples['MatchedFirstName'] = fn[iter]
    couples['MatchedLastName'] = ln[iter]
    couples['Id'] = 0
else:
    if data_json['ResultCount'] > 0:
        # Json to DF
        couples = json_normalize(data_json['Couples'])

Not only did this save time, but it made the code much more efficient in the way it was written. Above is a common technique for handling errors in Python. Practices like this also make the code more robust in that it will not crash in case small adjustments are made to the webpage, since the goal of this project is not to get every result but most results. Well, now that we have collected as many results as possible (on the scale of ~3,500 from this date forward in NY, CT and PA), there is some cleaning that had to be done:

import pandas as pd
import re

# Input for file to be cleaned...
f = raw_input('File Number for cleaning: ')
csv = pd.read_csv('data/'+f+'_couples_prenltk.csv')

# Retaining columns of data frame needed for analysis
csv = csv[['Id', 'EventDate', 'City', 'Location', 'MatchedFirstName',
           'MatchedLastName', 'Registrant2FirstName', 'Registrant2LastName',
           'RegistriesSummary', 'Websites']]

# Remove all observations that do not have a website
csv.dropna(subset = ['Websites'], inplace = True)

# Remove extra characters from API output wrapped around website address
csv['Websites'] = csv['Websites'].apply(lambda x: ''.join(re.findall("(h.*)'", str(x.split(',')[0]))))

# Ensure website formatting is correct - testing
#print csv.Websites.head(10)

# Extract file number and save to new csv file
f = f[0]
csv.to_csv(f+'_filtered.csv')

Output files from the API script that contained all the data were read into a cleaning script that kept only the relevant data, such as names, locations, dates and websites, and presented the information in a clean format. This was used to prepare the data for a Shiny application that would allow the customer (well, my wife) to view the results of all my effort!
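The try/except/else idiom above can be reduced to a small, self-contained sketch. Note that the dictionary shape used here is an assumption for illustration only, not The Knot's actual API schema:

```python
# Hypothetical sketch of the skip-on-empty-result idiom used above.
def extract_couples(data_json):
    try:
        count = data_json['ResultCount']
    except (KeyError, TypeError):
        return None              # malformed response: skip this iteration
    if count > 0:
        return data_json['Couples']
    return None                  # empty result: move on to the next name

print(extract_couples({'ResultCount': 2, 'Couples': ['a', 'b']}))  # ['a', 'b']
print(extract_couples({}))                                         # None
```

Catching only the exceptions you expect (rather than a bare `except:`) keeps genuine bugs visible while still letting the loop survive odd responses.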
Just like any great product, all the fancy tech work needs to be in the background with the results front and center. As a way of displaying my results, a Shiny Dashboard app was created that allowed the user to filter each variable and select which variables to show in the results table. Embedded URL links were added for easy click-and-go presentation as well. In the future, I would like to create a crawler that can go through each couple's website and look for specific venue names as a way of progressing our business in locations of our choice. Also, forecasting an approximate "engagement season" would allow for accurate business planning and preparation on our part.

© 2019 Data Science Central ®
https://www.datasciencecentral.com/profiles/blogs/web-scraping-for-new-business
Number 19 of 20: per-thread p-tools

By ahl on Aug 06, 2004

(the associated file, shmid, etc.). pldd(1) is another example; it shows which shared objects a process has opened. Other p-tools apply to the threads in a process. The pstack(1) utility shows the call stacks for each thread in a process.

New in Solaris 10

Eric and Andrei have modified the p-tools that apply to threads so that you can specify the threads you're interested in, rather than having to sift through all of them.

pstack(1)

Developers and administrators often use pstack(1) to see what a process is doing and if it's making progress. You'll often turn to pstack(1) after prstat(1) or top(1) shows a process consuming a bunch of CPU time -- what's that guy up to? Complex processes can have many, many threads; fortunately prstat(1)'s -L flag will split out each thread in a process as its own row, so you can quickly see that thread 5, say, is the one that's hammering the processor. Now rather than sifting through all 100 threads to find thread 5, you can just do this:

$ pstack 107/5

Alternatively, you can specify a range of threads (5-7 or 11-), and combinations of ranges (5-7,11-).
Giving us something like this:

$ pstack 107/5-7,11-
----------------- lwp# 6 / thread# 6 --------------------
 c2a0314c nanosleep (c24edfb0, c24edfb8)
 080577d6 getnode_revalidate (0) + 4b
 c2a02d10 _thr_setup (c2949400) + 50
 c2a02ed0 _lwp_start (c2949400, 0, 0, c24edff8, c2a02ed0, c2949400)
----------------- lwp# 7 / thread# 7 --------------------
 c2a0314c nanosleep (c23edfb0, c23edfb8)
 08055f56 getgr_revalidate (0) + 4b
 c2a02d10 _thr_setup (c2949800) + 50
 c2a02ed0 _lwp_start (c2949800, 0, 0, c23edff8, c2a02ed0, c2949800)
----------------- lwp# 11 / thread# 11 --------------------
 c2a0314c nanosleep (c1fcdf60, c1fcdf68)
 0805887d reap_hash (80ca918, 8081140, 807f2f8, 259) + ed
 0805292a nsc_reaper (807f92c, 80ca918, 8081140, 807f2f8, c1fcdfec, c2a02d10) + 6d
 08055ded getpw_uid_reaper (0) + 1d
 c2a02d10 _thr_setup (c20d0800) + 50
 c2a02ed0 _lwp_start (c20d0800, 0, 0, c1fcdff8, c2a02ed0, c20d0800)
...

The thread specification syntax also works for core files if you're just trying to drill down on, say, the thread that caused the fatal problem:

$ pstack core/2
core 'core/2' of 100225: /usr/sbin/nscd
----------------- lwp# 2 / thread# 2 --------------------
 c2a04888 door (c28fbdc0, 74, 0, 0, c28fde00, 4)
 080540bd ???????? (deadbeee, c28fddec, 11, 0, 0, 8053d33)
 c2a0491c _door_return () + bc

truss(1)

The truss(1) utility is the mother of all p-tools. It lets you trace a process's system calls, faults, and signals as well as user-land function calls. In addition to consuming pretty much every lower- and upper-case command line option, truss(1) now also supports the thread specification syntax. Now you can follow just the threads that are doing something interesting:

$ truss -p 107/5
openat(-3041965, ".", O_RDONLY|O_NDELAY|O_LARGEFILE) = 3
fcntl(3, F_SETFD, 0x00000001) = 0
fstat64(3, 0x08047800) = 0
getdents64(3, 0xC2ABE000, 8192) = 8184
brk(0x080721C8) = 0
...
pbind(1)

The pbind(1) utility isn't an observability tool; rather, this p-tool binds a process to a particular CPU so that it will only run on that CPU (except in some unusual circumstances; see the man page for details). For multi-threaded processes, the process is clearly not the right granularity for this kind of activity -- you want to be able to bind this thread to that CPU, and those threads to some other CPU. In Solaris 10, that's a snap:

$ pbind -b 1 107/2
lwp id 107/2: was not bound, now 1
$ pbind -b 0 107/2-5
lwp id 107/2: was 1, now 0
lwp id 107/3: was not bound, now 0
lwp id 107/4: was not bound, now 0
lwp id 107/5: was not bound, now 0

These are perfect examples of Solaris responding to requests from users: there was no easy way to solve these problems, and that was causing our users pain, so we fixed it. After the BOF at OSCON, a Solaris user had a laundry list of problems and requests, and was skeptical about our interest in fixing them, but I convinced him that we do care -- we just need to hear about them. So let's hear your gripes and wish lists for Solaris. Many of the usability features (the p-tools for example) came out of our own use of Solaris in kernel development -- once OpenSolaris lets everyone be a Solaris kernel developer, I'm sure we'll be stumbling onto many more quality-of-life tools like pstack(1), truss(1), and pbind(1).

Posted by Dennis Clarke on August 06, 2004 at 01:47 PM PDT #

Posted by Octave Orgeron on August 09, 2004 at 07:03 AM PDT #

Posted by Adam Leventhal on August 11, 2004 at 04:37 AM PDT #

If people's eyes pop out when you show them p-tools, wait until you show them this stuff (in particular tracing a thread through user-land into the kernel).

Posted by Adam Leventhal on August 11, 2004 at 04:41 AM PDT #
https://blogs.oracle.com/ahl/entry/number_19_of_20_per
Summary: To match a pattern in a given text using only a single line of Python code, use the one-liner import re; print(re.findall(pattern, text)) that imports the regular expression library re and prints the result of the findall() function to the shell.

Problem: Given a string and a regular expression pattern, match the string against the regex pattern—in a single line of Python code!

Example: Consider the following example that matches the pattern 'F.*r' against the string 'Learn Python with Finxter'.

import re
s = 'Learn Python with Finxter'
p = 'F.*r'
# Found Match of p in s: 'Finxter'

Let's dive into the different ways of writing this in a single line of Python code!

Exercise: Run the code. What's the output of each method? Why does the output differ?

Do you want to master the regex superpower? Check out my new book The Smartest Way to Learn Regular Expressions in Python with the innovative 3-step approach for active learning: (1) study a book chapter, (2) solve a code puzzle, and (3) watch an educational chapter video.

Method 1: findall()

The re.findall(pattern, string, flags=0) method returns a list of string matches. Read more in our blog tutorial.

# Method 1: findall()
import re; print(re.findall('F.*r', 'Learn Python with Finxter'))
# ['Finxter']

There's no better way of importing the re library and calling the re.findall() function in a single line of code—you must use the semicolon A;B to separate the statements A and B. The findall() function finds all occurrences of the pattern in the string.

Method 2: search()

The re.search(pattern, string, flags=0) method returns a match object of the first match. Read more in our blog tutorial.
# Method 2: search()
import re; print(re.search('F.*r', 'Learn Python with Finxter'))
# <re.Match object; span=(18, 25), match='Finxter'>

The search() function finds the first match of the pattern in the string and returns a match object.

Method 3: match()

The re.match(pattern, string, flags=0) method returns a match object if the regex matches at the beginning of the string. Read more in our blog tutorial.

# Method 3: match()
import re; print(re.match('.*F.*r', 'Learn Python with Finxter'))
# <re.Match object; span=(0, 25), match='Learn Python with Finxter'>

The match() function finds the match of the pattern at the beginning of the string and returns a match object. In this case, the whole string matches, so the match object encloses the whole string.

Method 4: fullmatch()

The re.fullmatch(pattern, string, flags=0) method returns a match object if the regex matches the whole string. Read more in our blog tutorial.

# Method 4: fullmatch()
import re; print(re.fullmatch('.*F.*r.*', 'Learn Python with Finxter'))
# <re.Match object; span=(0, 25), match='Learn Python with Finxter'>

The fullmatch() function attempts to match the whole string and returns a match object if successful. In this case, the whole string matches, so the match object encloses the whole string.
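To answer the exercise above—why the outputs differ—here is a quick side-by-side run of all four methods on the same string and pattern:

```python
import re

s = 'Learn Python with Finxter'
p = 'F.*r'

print(re.findall(p, s))    # ['Finxter'] -- all matches, as strings
print(re.search(p, s))     # match object spanning (18, 25)
print(re.match(p, s))      # None -- 'F.*r' does not match at the start of s
print(re.fullmatch(p, s))  # None -- 'F.*r' does not cover the whole string

# Widening the pattern with '.*' makes match()/fullmatch() succeed:
print(re.match('.*F.*r', s))        # match object covering the whole string
print(re.fullmatch('.*F.*r.*', s))  # match object covering the whole string
```

In short: findall() scans everywhere and returns strings, search() scans everywhere and returns the first match object, match() anchors at the start, and fullmatch() anchors at both ends.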
https://blog.finxter.com/python-one-line-regex-match/
Details

Description

Greg, hello:

1) in this example you say: "Note that this example makes use of the global resource cache to store the loaded images."

2) which reminds me of the "java evil singleton pattern", and the consensus seems to be: "Singletons are fine as long as you don't implement them with static"

3) and pivot uses them freely; a few examples:

public abstract class ApplicationContext {
    protected static ArrayList<Display> displays = new ArrayList<Display>();
    protected static ArrayList<Application> applications = new ArrayList<Application>();
    private static HashMap<URI, Object> resourceCache = new HashMap<URI, Object>();
    private static ResourceCacheDictionary resourceCacheDictionary = new ResourceCacheDictionary();
    ...
}

public abstract class Theme {
    private static Theme theme = null;
    ...
}

4) do you think it is feasible to convert all of pivot into SPI:

ServiceLoader<PivotFrameworkFactory> loader = ServiceLoader.load(PivotFrameworkFactory.class);
PivotFrameworkFactory factory = loader.iterator().next();
Map<String, Object> config = new HashMap<String, Object>();
PivotFramework framework = factory.newFramework(config);

5) the two use cases that affect me now:
a) applet reload shares static
b) pivot + osgi (not sure yet if I can find a workaround; possibly depends on what you do about PIVOT-742 PIVOT-22)

thank you; Andrei

Yes, it is possible to refactor this code to use a service provider. However, I'm not sure that it is justified. Can you elaborate a bit more on the problem(s) you are trying to solve?
APPLETS:

1) this project: with this launcher:
2) uses 3 files
3) with 1 static counter:

public class WindowExtra extends Window implements Bindable {
    protected static final Logger log = LoggerFactory.getLogger(WindowExtra.class);
    private static final AtomicInteger COUNTER = new AtomicInteger(0);

    @Override
    public void initialize(Map<String, Object> namespace, URL location, Resources resources)
}

4) you can try for yourself here: when you select launch mode "applet (plugin V1)" (AKA "legacy", "non-jnlp", "apple-default") you will get an applet with a counter that gets incremented on every applet reload; since it is hard to tell how all the static fields in pivot interact, my concerns with pivot are:
- memory leaks;
- cross-page applet interference;
- security;

thanks.

BTW, speaking of "evil static": if I run both of these at the same time in the same firefox on linux, in different tabs, then both of them become unresponsive, seemingly locking in this part (in 2 different jvm instances!):

org.apache.pivot.wtk.ApplicationContext$DisplayHost.paintVolatileBuffered

if I kill one of them, the other starts running just fine; I guess video driver writers use a lot of static fields also. The above "static in video driver" is actually reproducible in google chrome also, which likely means that it is a "feature" of pivot killing nsplugin via volatile memory access; since I can't remember any other applets / frameworks with this "feature", probably this can be fixed in pivot?

another confirmation that it is likely pivot + nsplugin: if you run this as a stand-alone app, then there is no locking any more

The static fields are there by design. Some of them, like the Display list, are used specifically to support applets. The issues you describe could very easily be due to the plugin implementation on Linux. Pivot does not use any native code and so should run the same on all JVM implementations.
APPLETS:

1) re: "The issues you describe could very easily be due to the plugin implementation on Linux"

2) I verified this page: on:

- windows 7 (6.1 b 7600)
java version "1.6.0_23"
Java(TM) SE Runtime Environment (build 1.6.0_23-b05)
Java HotSpot(TM) 64-Bit Server VM (build 19.0-b09, mixed mode)
- firefox 4.0.1
- internet explorer 8.0.7600
- google chrome 11.0.696.68

- ubuntu 10.04
2.6.32-29-generic #58-Ubuntu SMP Fri Feb 11 20:52:10 UTC 2011 x86_64 GNU/Linux
java version "1.6.0_24"
Java(TM) SE Runtime Environment (build 1.6.0_24-b07)
Java HotSpot(TM) 64-Bit Server VM (build 19.1-b02, mixed mode)
- firefox 3.6.17
- google chrome 11.0.696.68

- macosx 10.6.7
java version "1.6.0_24"
Java(TM) SE Runtime Environment (build 1.6.0_24-b07-334-10M3326)
Java HotSpot(TM) 64-Bit Server VM (build 19.1-b02-334, mixed mode)
- firefox 3.6.6
- google chrome 11.0.696.68
- safari 5.0.5

3) and confirmed that "counter still goes up"; which means that the preferred way you use to start all your pivot apps:

var attributes = {
    code: "org.apache.pivot.wtk.BrowserApplicationContext$HostApplet",
    width: "100%",
    height: "100%"
};
var libraries = [];
libraries.push("/lib/pivot-core-2.0.jar");
libraries.push("/lib/pivot-wtk-2.0.jar");
libraries.push("/lib/pivot-wtk-terra-2.0.jar");
libraries.push("/lib/pivot-tutorials-2.0.jar");
libraries.push("/lib/svgSalamander-tiny.jar");
attributes.archive = libraries.join(",");
var parameters = {};
var javaArguments = ["-Dsun.awt.noerasebackground=true", "-Dsun.awt.erasebackgroundonresize=true"];
parameters.java_arguments = javaArguments.join(" ");
deployJava.runApplet(attributes, parameters, "1.6");

4) which is so-called "legacy plugin mode", will share all static fields between applet invocations, even if you navigate away from the applet and then come back (the timeout is about 2 minutes for the jvm to expire when there are no active applets, but static fields are still active before the jvm expires)

5) re "The static fields are there by design"; indeed, I understand;
I am merely asking to improve the design; what I would ideally want is:

a) instantiate all of pivot as a single instance via spi;
b) no static-field-based singletons inside the pivot framework;
c) pivot caches whatever it wants in singletons (which are not static fields);

re: "Some of them, like the Display list, are used specifically to support applets"

note that if you switch from "legacy plugin mode" (non-jnlp) into "next gen plugin mode" (yes-jnlp), there are no shared static fields any more. and if pivot makes assumptions about the presence of shared static fields, it is a bug.

> I verified this page...and confirmed that "counter still goes up"

That is the correct behavior. As you note below, when applets share a common VM instance and classloader, all static values are also shared.

> the preferred way you use to start all your pivot apps...which is so called "legacy plugin mode", will share all static fields between applet invocations

There is no "preferred" way to start Pivot applets. You are free to use the separate_jvm parameter if you want. Also, to my knowledge, there is no such thing as "legacy" plugin mode. If you set separate_jvm to true, the plugin should create a separate JVM instance, though it may not. If you omit separate_jvm or set it to false, you get a shared VM. Both modes are supported by the plugin and by the Pivot runtime.

re: AP: "org.apache.pivot.wtk.ApplicationContext$DisplayHost.paintVolatileBuffered locking"
re: GB: "due to the plugin implementation on Linux."

yes; I verified the paintVolatileBuffered locking is specific to linux

re: "legacy vs next gen"

sorry, I meant to say "classic vs next gen", meaning before java 1.6.0_10 and after; there is no "separate_jvm=true" before that; and after that, if you start in jnlp mode, still in the same instance of the jvm you get a different classloader;

Greg: re: "There is no "preferred" way to start Pivot applets." yes, you are right, good point.
except on macosx they already "preferred" it for you: this applet: (with separate_jvm:true) does load a new jvm instance on linux, but on mac you first need to go to the control panel and change the java plugin mode to what I call "V2 or next gen or jnlp mode" (else no jdk 6_10 features are available, even though you have 6_24 or whatever), and you never want to ask your apple customers to do that, right? cheers! Andrei

FWIW, this is one of the reasons we repositioned Pivot as an "IIA" platform when we released version 2. Applets are a pain!

oh, man, I almost have my holy applet grail! if only you would fix the static singletons and Sandro fix osgi... but you're not going to do it, because you do not want to make pivot so good that it will kill javafx?

Fixing the classloader issue is probably worth doing, but I'm still not clear on what problem eliminating the use of static fields would solve.

re: "classloader issue is probably worth doing" - great, thank you!
re: "what problem eliminating the use of static" - fine, I'll be back

I'm not against this, but at the moment I'd prefer not to complicate the usage too much in standard (non-OSGi) cases, so I assign it to 2.1 for the moment. On the Applets issue, maybe this could be more important for us (could have more occurrences in our usual cases) in the short term. What do others say?
https://issues.apache.org/jira/browse/PIVOT-743
C++ Comments

Statements that are ignored by the compiler are known as C++ comments. Comments are used in C++ programming to explain a variable, method, class or piece of code. Comments can also be used to hide (disable) parts of a program. There are generally two types of comments in C++ programming:

A single-line comment starts with two forward slashes. Here is an example of a single-line comment in C++:

#include <iostream>
using namespace std;
int main()
{
    int a = 10; // Here a is a variable
    cout<<a<<"\n";
}

Comments that span multiple lines of code are known as multi-line comments. This kind of comment is delimited by a forward slash and asterisk (/*……*/). Here is an example of a multi-line comment that will help you understand the concept:

#include <iostream>
using namespace std;
int main()
{
    /* Define a variable and
       print the variable in C++. */
    int a = 20;
    cout<<a<<"\n";
}
http://www.phptpoint.com/cpp-comments/
A conflict that has come up a few times is using composition with a pass-through versus a base class. I favor the pass-through; I write it that way every time. The other engineer on the project favors a base class. Here's an example of what we've encountered a few times in the project at work.

public class CacheFoo : IFoo {
    private ICache<IFoo> _cache;
    private IFooer _fooer;
    // Brevity
    IBar IFoo.ABar() => Cache().ABar();
    private IFoo Cache() => _cache.Retrieve(_fooer.Load());
}

This is a pass-through. It knows how to cache. The other version is more of a factory:

public class CacheFooFactory : IFooFactory {
    private ICache<IFoo> _cache;
    private IFooer _fooer;
    // Brevity
    public IFoo Foo => _cache.Retrieve(_fooer.Load());
}

Looking at this, I find value in both. I find myself implementing the pass-through most often, which we refactor into the other when the other engineers get in. Why? I don't have a solid opinion on which is "better". When I can't find a reason my favored approach should win, I'll lean to what the other engineers favor. Often this is so that it'll eat at me and my brain will find the argument/answer for me. A dozen implementations of pass-through refactored to factory - I don't have winning arguments for either. That means I can't even convince myself that my favored approach is "right". I like the flow of treating the CacheFoo as an IFoo. The code reads simpler to me. It has to know how to do less. The implementation of the collaborator can change, but the behavior doesn't. While this is true of the factory as well... I'm seeing that the point which may become my main thrust - I'll see how it flies with the team on Monday - is that a factory as a collaborator forces all instances of the concrete classes to have an associated factory. CacheFoo must have a CacheFooFactory. FileFoo must have a FileFooFactory. NetworkFoo must... Each concrete IFoo, for appropriate polymorphism and decoupling, must have an associated factory. Require it for one - it's driven to all.
Or else you rewrite your class implementation away from IFooFactory because you need a different Foo. This is wrong. While I was heavily leaning toward the pass-through, this provides me with a compelling argument. It's not fewer layers or less abstraction; the win here is the simplicity of the implementing code. The stupidity of the implementing code. It doesn't have to care that what it has doesn't DO it for us - it will get it done for us. Factories are a code smell to me; this is a factory - it's really started to stink. Neat. I'll be sticking to pass-throughs in new code. I'll leave work's paradigm alone... For now! Hehe

Tangent - I enjoy how writing this stuff out really helps me think it through, solidify points and communicate the notions in my head.
https://quinngil.com/2018/04/22/my-thoughts-pass-through-vs-base-class/
it was solved.

The issue

Quoting the Lighthouse ticket, imagine the following scenario in Rails 2.3:

class)

To understand why it does not work, let's take a look at the source code!

Investigating the issue

With Active Relation, Active Record is no longer responsible for building queries. That said, ActiveRecord::Base is not the one that implements where() and friends; in fact, it simply delegates to an ActiveRecord::Relation object. From the ActiveRecord::Base source code:

As you can see, scoped always returns an ActiveRecord::Relation that you build your query on top of (notice that ARel::Relation is not the same as ActiveRecord::Relation). Besides, if there is any current_scoped_methods, the scoped method is responsible for merging this current scope into the raw relation. This is where things get interesting. When you create your model, current_scoped_methods returns nil by default. However, when you define a default_scope, the current scope now becomes the relation given to default_scope, meaning that every time you call scoped, it returns the raw relation merged with your default scope. The whole idea of with_exclusive_scope is to be able to make a query without taking the default scope into account, just the relation you give as an argument. That said, it basically sets current_scoped_methods back to nil, so every time you call scoped to build your queries, the query is built on top of the raw relation without the default scope. With that in mind, if we look again at the code we were trying to port from Rails 2.3, we can finally understand what was happening:

def self.deleted
  with_exclusive_scope :find => where('pages.deleted_at IS NOT NULL') do
    self
  end
end

When we called where('pages.deleted_at IS NOT NULL') above, we were doing the same as scoped.where('pages.deleted_at IS NOT NULL').
But, as scoped was called outside the with_exclusive_scope block, it means that the relation given as argument to :find was built on top of the default_scope, explaining the query we saw as a result. For example, the following syntax would work as expected:

def self.deleted
  with_exclusive_scope do
    where('pages.deleted_at IS NOT NULL').all
  end
end

Since we are calling where inside the block, the scoped method no longer takes the default scope into account. However, moving the relation inside the block is not the same as specifying it to :find, because if we were doing three queries inside the block, we would have to specify the same relation three times (or refactor the whole code to always do a query on top of this new relation). That said, it seems the previous with_exclusive_scope syntax does not fit very well with ActiveRecord's new API. Maybe it is time for a change? Can we provide a better API? Which are the use cases?

Identifying the use cases

The with_exclusive_scope method has mainly two use cases. The first one, which we just discussed above, is to allow us to make a query without taking the default scope into account inside our models:

def self.deleted
  with_exclusive_scope do
    where('pages.deleted_at IS NOT NULL').all
  end
end

While this code looks ok, if we think about relations, we will realize that we don't need to give a block to achieve the behavior we want. If the scoped method returns a raw relation with the default scope, couldn't we have a method that always returns the raw relation, allowing us to build our query without taking the default scope into account? In fact, this method is already implemented in Active Record, and it is called unscoped. That said, the code above could simply be rewritten as:

def self.deleted
  unscoped.where('pages.deleted_at IS NOT NULL').all
end

Much simpler! So, it seems that we don't need to support the block usage at all, confirm? Deny!
Going back to the Page example above, it seems we should never see deleted pages; that's why we set the default_scope to :deleted_at => nil. However, if this application has an admin section, the admin may want to see all pages, including the deleted ones. That said, what we could do is to have one controller for the normal user and another for the admin. In the former, we would always use Page.all, and Page.unscoped.all in the latter. However, if these controllers and views are very similar, you may not want to duplicate everything. Maybe it would be easier if we did something like this:

def resource_class
  if current_user.is_admin?
    Page.unscoped
  else
    Page
  end
end

And, instead of always referencing the Page class directly in our actions, we could call resource_class. While this solution is also ok, there is a final alternative that would require no changes to the current code. If you want to use the same controller for different roles, but change the scope of what they are allowed to see, you could simply use an around_filter to change the model scope during the execution of an action.

Tidying it up

Well, after the behavior in with_exclusive_scope was properly ported to the new API, we need to be sure we are not forgetting about anything... wait, actually we are. with_exclusive_scope has an evil twin brother called with_scope, which behaves very similarly except that it always builds the query on top of the scoped relation; its counterpart in the new API is called scoping. So, with unscoped and scoping implemented, we just need to commit, git push and be happy, confirm? Deny! There is one last case to check.

create_with

If you paid attention, you may have noticed that every time we called with_exclusive_scope and with_scope, we always passed { :find => relation } as a hash, instead of simply giving the relation. This happens because these methods accept two hash keys: find and create. As you may expect, one specifies the behavior for finding and the other for creating.
In most cases, they are exactly the same and work with the new syntax. Note this syntax already existed; we are just making it explicit now as part of the new API! That said, commit, push and be happy!

Wrapping up

All in all, with_exclusive_scope and with_scope are now part of the old Active Record API, giving way to the new, strong and vibrant unscoped and scoping methods! However, they are not going to be deprecated now. They will follow the same deprecation strategy as all the current methods. And you? What do you think about this new scoping API?

DrinkRails.com linked to your post as one of the top ruby on rails related blogs of the day.

These un-hashing to method-chaining changes fit really well in ActiveRelation, yet I kinda don't like RSpec syntax (after using it for one year)… Great article Jose!

Awesome! Thanks José 🙂

I guess we'll find out once everyone really starts using this stuff, but all of that sounds like a natural progression, so, I like it.

Very nice. This stuff has been bugging/biting me for a while.
http://blog.plataformatec.com.br/2010/07/new-active-record-scoping-syntax/
Welcome to std2

ATTENTION: Due to the advances in the D2 compiler, and also to increased and more advanced usage of D2 features in the standard library, as of DMD 2.011 I find that it is quite difficult to backport D2 code to D1, and it looks like it will not be possible without spending huge amounts of time on it. So I'm afraid std2 is going to stay with the 2.008 version forever. My apologies. On the bright side, it is nice to see Phobos under such active development again, even if it is D2 only.

The std2 project is a collection of ports of Phobos 2.x modules to D1.x.

Why?

Phobos in D2.x contains some nifty additions. Most of these have nothing to do with the additional language features of D2.x, and can be ported fairly easily to D1.x, mostly by stripping out the const and invariant everywhere. Unfortunately, the official D maintainers have declared that neither D1.x nor Phobos 1.x will have any new functionality added. So this project aims to make some of the Phobos 2.x functionality available to D1.x users via a 'std2' package. Currently the port consists of the following Phobos modules, ported from DMD Phobos 2.008:

- std2.algorithm (based on std.algorithm)
- std2.bitmanip (based on std.bitmanip)
- std2.contracts (based on std.contracts)
- std2.conv (based on std.conv)
- std2.functional (based on std.functional)
- std2.getopt (based on std.getopt)
- std2.numeric (based on std.numeric)
- std2.random (based on std.random)
- std2.string (based on std.string)
- std2.traits (based on std.traits)
- std2.variant (based on std.variant)
- std2.xml (based on std.xml)
- std2.encoding (based on std.encoding)

Plus a few others that have only had minor changes in D2 (std.process, std.math). The idea is that you substitute one of those std2 modules in place of an import of the regular std module to get the new functionality. The ports are currently maintained by Bill Baxter.
I don't plan to spend a whole lot of time on this, but it shouldn't require a whole lot of time. At least not for the time being. The above modules took about an hour to port altogether. Volunteers to port new modules and apply updates when new DMD's are released are certainly welcome and encouraged. Any developers who would like SVN commit access, let me know. Installation Using DSSS Std2 can be installed using dsss via its net install feature. All it takes is to type the following command at a command prompt: dsss net install std2 From SVN You can also download the source from svn using an svn client. The repository is at For example, using a command line SVN client: svn co This will create the directory tree with ./std2, ./std2/dsss.conf, and ./std2/std2/*.d. Links Quick Example import std.stdio; import std2.conv; // std2.conv includes all of std.conv's functionality, too void main() { int a = 42; auto b = to!(int)(a); // b is int with value 42 auto c = to!(double)(3.14); // c is double with value 3.14 } Project Information Attachments - encoding.d (71.6 kB) - std2.encoding, added by y0uf00bar on 08/12/09 03:44:10. - xml.d (83.5 kB) - std2.xml, added by y0uf00bar on 08/12/09 04:05:41.
http://www.dsource.org/projects/std2
NAME

kqueue_add_filteropts, kqueue_del_filteropts, kqfd_register, knote_fdclose, knlist_add, knlist_remove, knlist_remove_inevent, knlist_empty, knlist_init, knlist_destroy, knlist_clear, knlist_delete, KNOTE_LOCKED, KNOTE_UNLOCKED - event delivery subsystem

SYNOPSIS

#include <sys/event.h>

int kqueue_add_filteropts(int filt, struct filterops *filtops);
int kqueue_del_filteropts(int filt);
int kqfd_register(int fd, struct kevent *kev, struct thread *td, int waitok);
void knote_fdclose(struct thread *td, int fd);
void knlist_add(struct knlist *knl, struct knote *kn, int islocked);
void knlist_remove(struct knlist *knl, struct knote *kn, int islocked);
void knlist_remove_inevent(struct knlist *knl, struct knote *kn);
int knlist_empty(struct knlist *knl);

DESCRIPTION

The functions kqueue_add_filteropts() and kqueue_del_filteropts() allow for the addition and removal of a filter type. The filter is statically defined by the EVFILT_* macros. The function kqueue_add_filteropts() will make filt available. The struct filterops has the following members:

f_isfd
If f_isfd is set, ident in struct kevent is taken to be a file descriptor. In this case, the knote passed into f_attach will have the kn_fp member initialized to the struct file * that represents the file descriptor.

f_attach
The f_attach function will be called when attaching a knote to the object. The method should call knlist_add() to add the knote to the list that was initialized with knlist_init(). The call to knlist_add() is only necessary if the object can have multiple knotes associated with it. If there is no knlist to call knlist_add() with, the function f_attach must clear the KN_DETACHED bit of kn_status in the knote. The function shall return 0 on success, or an appropriate error for the failure.

f_event
The f_event function will be called to update the status of the knote. If the function returns 0, it will be assumed that the object is not ready (or no longer ready) to be woken up.
The hint argument will be 0 when scanning knotes to see which are triggered. Otherwise, the hint will be the value passed to the KNOTE() macro. If the knote is to be removed from the list during f_event, knlist_remove_inevent() must be called. The function knlist_remove_inevent() will remove the note from the list; the f_detach function will not be called, and the knote will not be returned as an event. Locks must not be acquired in f_event. If a lock is required in f_event, it must be obtained in the kl_lock function of the knlist that the knote was added to.

The function kqfd_register() will register the kevent on the kqueue file descriptor fd. If it is safe to sleep, waitok should be set.

The function knote_fdclose() is used to delete all knotes associated with fd. Once returned, there will no longer be any knotes associated with the fd. The knotes removed will never be returned from a kevent(2) call, so if userland uses the knote to track resources, they will be leaked. The FILEDESC_LOCK() lock must be held over the call to knote_fdclose() so that file descriptors cannot be added or removed.

The knlist_*() family of functions are for managing knotes associated with an object. A knlist is not required, but is commonly used. If used, the knlist must be initialized with knlist_init().

RETURN VALUES

The function kqueue_add_filteropts() will return zero on success, EINVAL in the case of an invalid filt, or EEXIST if the filter has already been installed. The function kqueue_del_filteropts() will return zero on success, EINVAL in the case of an invalid filt, or EBUSY if the filter is still in use. The function kqfd_register() will return zero on success, EBADF if the file descriptor is not a kqueue, or any of the possible values returned by kevent(2).

SEE ALSO

kevent(2), kqueue(2)

AUTHORS

This manual page was written by John-Mark Gurney 〈jmg@FreeBSD.org〉.
http://manpages.ubuntu.com/manpages/maverick/man9/knlist_clear.9freebsd.html
GTK (GIMP Toolkit) is a library for creating graphical user interfaces. It is licensed under the LGPL, so you can develop open software, free software, or even commercial non-free software using GTK without having to spend anything on licenses or royalties. Gtk# is a Mono binding to this great toolkit, usable from any Mono language, including MonoBasic and C#. There is also a binding for .NET Framework v1.1 on Windows. Gtk can be installed on both Windows and Linux, though the Linux version is a little faster and more stable compared to the Windows version. I am going to show how to create a cross-platform application using .NET and Gtk#. The development environment I describe here is based on Windows; however, you can also use Linux as your development environment. The application will perform an internet search using the Google web service. Firstly, you need to have a Google developer account. If you don't have an account you can create one at Once your account is created, you'll receive a licence key from Google, which is a 32-byte alphanumeric string, e.g. 0AdaQ5lQFHJmWhI5UACEjtgIcwRezTJ5. Keep this safe, since it is required for every web service call you make to the Google web service. The development environment on Windows for Gtk# apps consists of the following: (If you don't have Visual Studio you can also use SharpDevelop. You can either use .NET Framework v1.1 for development or configure your Visual Studio to use Mono.) The executables produced can be run directly on Linux workstations with Mono and Gtk# installed. They can also be run on any Windows workstation which has .NET Framework 1.1 (or Mono) and Gtk# installed. Note: You need to install Glade only on the development machines. Now that you have your development platform set up, let me explain the various components and how they interact with each other. Glade is a click-n-drag UI designer. It is used to create the user interface for your application.
Once the user interface is complete, it saves the UI as an XML file. This XML file is used by the .NET (or Mono) application to build the user interface at runtime. The application uses the Gtk# and Glade# libraries (Glade# is packaged with Gtk#) to read the XML and create the UI. The first step is to build the user interface. Open Glade. If you can't find it on your desktop, it should be present in C:\Program Files\Gtk\bin. You should see 3 windows open up. Click on the window icon in the Palette window. A window is created. Change the window properties using the Properties dialog. Set the window Name to mainWindow and its Title to “Gtk# - Google Search”. You should also set the default width and height. Next click the vertical box button and click in the window. Select 2 rows when prompted. Your window will now be divided into 2 parts. To make space to add a textbox and buttons, click on the horizontal box button and click on the top half of the vertical box you created earlier. Choose 3 columns when prompted. Your window will now be divided accordingly. In the three columns of the top half of the window, add a text entry and 2 buttons. After you do so, your interface should look like this. Now right click on the top half of the window and choose hbox->select. Change its Packing property so that Expand and Fill are “No”. Click on button1 in the window and change its Name property to btnSearch, Label to Search and Icon property to Find. Click on button2 in the window and change its Name property to btnClear, Label to Clear and Icon property to Clear. Now your window should look like this. Now all you need to do is to add in a search results box. Add a text view component to the lower half of the window. At this point the interface should look like this. Now save the interface. In the save options, you can ignore the “C Options” tab and the “LibGlade Options” tab. That should complete your UI design. Now you'll go on to linking up the code with the user interface.
Fire up Visual Studio 2003 and create a new empty C# project. Call it googleSearch. Add references to the following assemblies: atk-sharp, gtk-sharp, gdk-sharp, glade-sharp, glib-sharp. In the project properties set the Output Type to “Windows Application”. Add the googleSearch.glade file to the project. Right-click on the filename in the solution explorer and change the Build Action property to “Embedded Resource”. This ensures that the file is embedded inside the executable produced by compiling the project. Let's add a web reference to the Google web service. Right click on References in the solution explorer and choose to add a new web reference. Type the url and press Go. You should see the following screen. Press Add Reference. This creates the classes necessary to connect to the Google web service. (If you are not using Visual Studio, you can do the same by using the wsdl.exe command line program, which is part of the .NET Framework SDK and can be found at C:\Program Files\Microsoft.NET\SDK\v1.1\Bin.) Add a new class called GtkApp.cs.

using System;
using Gtk;
using Glade;
using googleSearch.com.google.api;

namespace googleSearch
{
    /// <summary>
    /// Summary description for GtkApp.
    /// </summary>
    public class GtkApp
    {
        [Widget] Window mainWindow;
        [Widget] Button btnSearch;
        [Widget] Button btnClear;
        [Widget] Entry entry1;
        [Widget] TextView textview1;

        /* Main Function */
        public static void Main (string[] args)
        {
            new GtkApp (args);
        }

        public GtkApp (string[] args)
        {
            Application.Init ();
            Glade.XML gxml = new Glade.XML (null, "googleSearch.googlesearch.glade", "mainWindow", null);
            gxml.Autoconnect (this);
            mainWindow.DeleteEvent += new DeleteEventHandler (mainWindow_DeleteEvent);
            btnSearch.Clicked += new EventHandler (btnSearch_Clicked);
            btnClear.Clicked += new EventHandler (btnClear_Clicked);
            Application.Run ();
        }

        private void mainWindow_DeleteEvent (object o, DeleteEventArgs args)
        {
            Application.Quit ();
            args.RetVal = true;
        }

        private void btnSearch_Clicked (object sender, EventArgs e)
        {
            try
            {
                textview1.Buffer.Text = "";
                // do search
                GoogleSearchService svc = new GoogleSearchService ();
                GoogleSearchResult result = svc.doGoogleSearch ("FEkTsl8kwgQtrQIeGHJDSELrTtjNcp1eC",
                    entry1.Text, 0, 10, true, "", true, "", "", "");
                foreach (ResultElement re in result.resultElements)
                {
                    textview1.Buffer.Text += "Title: " + re.title + "\r\n";
                    textview1.Buffer.Text += "Summary: " + re.summary + "\r\n";
                    textview1.Buffer.Text += "URL: " + re.URL + "\r\n";
                    textview1.Buffer.Text += "==========================\r\n\r\n";
                }
            }
            catch (Exception ex)
            {
                Gtk.MessageDialog md = new MessageDialog (mainWindow, Gtk.DialogFlags.Modal,
                    Gtk.MessageType.Error, Gtk.ButtonsType.Ok, ex.Message);
                md.Show ();
            }
        }

        private void btnClear_Clicked (object sender, EventArgs e)
        {
            textview1.Buffer.Text = "";
            entry1.Text = "";
        }
    }
}

Now build and start the project. You should see a window that looks like the following. Type some terms in the text box and press search. You'll see search results, which are returned by Google. I'll go through some of the more important parts of the code now.

[Widget] Window mainWindow;

This line defines mainWindow to be of type Gtk.Window. The name must match the name defined in Glade. The same holds true for the remaining declarations.
Application.Init();

This line initializes the Gtk application.

Glade.XML gxml = new Glade.XML (null, "googleSearch.googlesearch.glade", "mainWindow", null);
gxml.Autoconnect (this);

This loads the XML file googlesearch.glade and defines mainWindow to be the root element. The Autoconnect call links up widget names with the declarations made previously, e.g. the variable mainWindow defined in the class declaration is linked to the mainWindow defined in the googlesearch.glade file.

mainWindow.DeleteEvent += new DeleteEventHandler(mainWindow_DeleteEvent);
btnSearch.Clicked += new EventHandler(btnSearch_Clicked);
btnClear.Clicked += new EventHandler(btnClear_Clicked);

These lines define the event handlers for various events. It is not necessary to define the events manually. You can actually define the event handlers using Glade; in that case, when the Autoconnect call is made, it'll automatically link up the events to the event handlers.

Application.Run();

This starts the Gtk application.

GoogleSearchResult result = svc.doGoogleSearch("FEkTsl8kwgQtrQIeGHJDSELrTtjNcp1eC",
    entry1.Text, 0, 10, true, "", true, "", "", "");

This makes a web service call to the Google web service. The result contains the search results. The first parameter is your license key, which you should have obtained from Google as described earlier in this article. To run this program in Linux, simply recompile the project as Release, then copy the googleSearch.exe file found in the bin\Release folder to the Linux machine. On the shell prompt type

# mono googleSearch.exe

This should start the program. Of course, you must have Mono and gtk-sharp installed on the Linux machine. This hopefully introduces you to the basics of how to use Gtk# in .NET. For more details, API listings etc., take a look at the references section. The Gtk# API is very close to the Gtk API, since it's mainly a wrapper around Gtk. Hence, you can use the Gtk API itself as a reference. For more tutorials on using Gtk# refer to the Mono Hand
http://www.codeproject.com/Articles/9308/A-google-search-application-using-Gtk?fid=144212&df=10000&mpp=10&sort=Position&spc=None&tid=2294309
As I mentioned a few days ago, I chose JScript to script the optimized PHP build process that I've built. JScript is in-box on pretty much every modern Windows operating system, and provides a great deal of flexibility and benefits for a scripting language:

- its syntax is C-like. Very tasty.
- it gives access to *a lot* of functionality via COM and WMI.
- if you know enough about Windows Scripting Host and JavaScript, you can accomplish darn near anything if you want it bad enough.
- JScript's regular expressions. While not the universe's most powerful, they are certainly an integral part of the language.
- prototypes allow you to do things to classes that can significantly boost productivity.

It however does lack a few of the basic things that I'd like to see in a batch* scripting language:

- an #include mechanism.
- easy interaction with environment variables.
- interaction with and leveraging of external processes.
- analogs to the built-in command-line functions like DIR, MKDIR, ERASE, RMDIR, etc.

* And, by "batch" scripting, I mean the scripting of external commands and programs to automate something that would otherwise be done by hand.

I was thinking about all of this over the last few months and started experimenting. Along the way I came up with a basic library of functions that addresses the deficiencies in a rather clever way.

First, let's fix the lack of #include

JScript (well, and VBScript) really didn't do us any favors by not supplying us with the ability to reuse code in a simple fashion. (And yes, I know about .WSC components, and I'm not keen on how *that* turned out. Ask me about that again later some time.) Anyway, #including another JScript file is pretty easy if you know what to do.
Not pretty, but easy:

//---[Test01.js]----------------------------------------------------------
// includes the scripting library
eval( new ActiveXObject("Scripting.FileSystemObject")
        .OpenTextFile("Scripting.js", 1, false).ReadAll() );

The eval function gives us the ability to just run code that we pass in at runtime. This will give us a few little bumps along the way later, but for the most part, is pretty darn good.

How about those environment variables

The WScript.Shell object has some methods that let us get at environment variables, but I wouldn't exactly consider them "Script Friendly". So, the first thing I did was create some basic functionality for exposing the environment as global variables. (Some of how this gets useful comes a bit later.)

//----------------------------------------------------------------------------
// Global variables
var GLOBALS = this;
var WSHShell = WScript.CreateObject("WScript.Shell");
var procEnvironment = WSHShell.Environment("PROCESS");

GlobalInit();

// Loads the environment into UPPERCASE variables in the global namespace.
// Each variable is prefixed with a $.
function loadEnvironment()
{
    env = CollectionToStringArray(procEnvironment);
    for (each in env)
    {
        var v = env[each];
        if (typeof(v) == 'string')
            if ((p = v.indexOf('=')) == 0)
                continue;
            else
                GLOBALS['$' + v.substring(0, p).toUpperCase()] = v.substring(p + 1);
    }
}

// Sets environment variables from all string variables in the global
// namespace that start with a $.
function setEnvironment()
{
    for (each in GLOBALS)
    {
        var t = typeof(GLOBALS[each]);
        if (t == 'string' || t == 'number')
        {
            if (each.indexOf("$") == 0)
            {
                if (IsNullOrEmpty(GLOBALS[each]))
                    procEnvironment.Remove(each.substring(1));
                else
                    procEnvironment(each.substring(1)) = GLOBALS[each];
            }
        }
    }
}

// Takes one of those funky-groovy COM collections and gives back a JScript
// array of strings.
function CollectionToStringArray(collection)
{
    var result = new Array();
    for (e = new Enumerator(collection); !e.atEnd(); e.moveNext())
        result.push("" + e.item());
    return result;
}

// Returns true if the string is null or empty.
// Yeah, I was thinking of C# when I wrote this.
function IsNullOrEmpty(str)
{
    return (str || "").length == 0;
}

// Our function for bootstrapping the required environment.
function GlobalInit()
{
    loadEnvironment();
}

Now, we can easily access environment variables:

//---[Test02.js]----------------------------------------------------------
// includes the scripting library
eval( new ActiveXObject("Scripting.FileSystemObject")
        .OpenTextFile("Scripting.js", 1, false).ReadAll() );

WScript.echo( "Path is: " + $PATH );

Next time, I'll show how I added code to execute and capture other external commands, and show a few cool functions that make playing in JScript a bit simpler.
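The eval-based include is not specific to Windows Script Host; the same mechanism exists in any JavaScript engine. Below is a small runnable sketch (Node.js; the "library file" is simulated by a string, since ActiveXObject is only available under WSH). In WSH's non-strict world, the bare eval( ...ReadAll() ) call defines the library's functions directly in the calling scope; this sketch captures eval's completion value instead so it also works under strict mode:

```javascript
// Simulated contents of Scripting.js, as if read via
// FileSystemObject.OpenTextFile("Scripting.js", 1, false).ReadAll().
// The trailing bare expression becomes eval's return value.
var librarySource =
  'function IsNullOrEmpty(str) { return (str || "").length == 0; }\n' +
  'IsNullOrEmpty;';

// "Include" the library: evaluate its source at runtime.
var IsNullOrEmpty = eval(librarySource);

console.log(IsNullOrEmpty(""));   // true
console.log(IsNullOrEmpty("hi")); // false
```

Under WSH (always non-strict), the intermediate variable is unnecessary: a plain eval of the file contents hoists the function declarations straight into the including script, which is exactly what the Test01.js one-liner relies on.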
https://blogs.msdn.microsoft.com/garretts/2009/05/08/using-jscript-as-a-batch-scripting-language-part-i/
Querying a database is sometimes best done with hand-written SQL. Of course the trick is to find a way to avoid syntax and type errors at run time. This post will look at how Slick and doobie approach this problem. To keep things simple, we're just going to look at one SQL statement:

select "content" from "message"

The type we want from executing this query will be some kind of Seq[String]. The table for the query is:

create table "message" (
  id serial primary key,
  content varchar(255)
);

Given that the SQL is, in effect, an arbitrary hunk of text, we'd like to know:

- Is the SQL valid?
- Do the types in the SELECT (and therefore, the table) match the types we expect?

And we want to know it sooner rather than later. Both Slick and doobie have an approach to this problem. Slick is, I suspect, reasonably well known as the database library in Typesafe's stack. In version 3.0 it added support for type-checked queries. Perhaps less well known is doobie, which provides a "principled way to construct programs (and higher-level libraries) that use JDBC." We think of it as the database layer in a Typelevel stack. Let's look at each in turn, and at how they let us discover problems with our SQL.

Slick

Slick supports arbitrary SQL via Plain SQL queries. Plain SQL is just one of the ways Slick allows you to access a database, but it's the style we're focusing on in this post. The support is via interpolators: sql and sqlu, which wrap a SQL statement, do the right thing to substitute in values safely, and convert values into Scala types. We've described this in Chapter 6 of Essential Slick. What's new in Slick 3 is type-checked SQL, available via the tsql interpolator:

val program: DBIO[Seq[String]] =
  tsql"""select "content" from "message""""

Note that this is constructing a query, not running it. To run it, we hand the query to an interpreter, and it gives back an asynchronous result.
What's interesting with our program is:

- the syntax is checked at compile time; and
- the types of the columns are discovered at compile time.

To explore this, we can play with the query to see what happens if we screw up. First, if we change the query to also select the ID column…

val program: DBIO[Seq[String]] =
  tsql"""select "content", "id" from "message""""

That's a compile-time type error:

type mismatch;
[error]  found   : SqlStreamingAction[Vector[(String, Int)],(String, Int),Effect]
[error]  required: DBIO[Seq[String]]
[error]     (which expands to)  DBIOAction[Seq[String],NoStream,Effect.All]

This is because I've declared the result of each row to be a String, but tsql has figured out it's really a (String, Int). If I'd omitted the type declaration, my program would have the inferred type of DBIO[Seq[(String,Int)]]. So it's going to be good practice to declare the type you expect for tsql. Let's now just break the SQL:

val program: DBIO[Seq[String]] =
  tsql"""select "content" from "message" where"""

This is incomplete SQL, and the compiler tells us:

exception during macro expansion: ERROR: syntax error at end of input
[error]  Position: 38
[error]     tsql"""select "content" from "message" WHERE"""
[error]            ^

And if we get a column name wrong…

val program: DBIO[Seq[String]] =
  tsql"""select "text" from "message""""

…that's also a compile error:

Exception during macro expansion: ERROR: column "text" does not exist
[error]  Position: 8
[error]     tsql"""select "text" from "message""""
[error]            ^

From those errors we know tsql is a macro. How is it getting the information it needs to do these checks? We have to give it a database connection. The connection is via an annotation on a class:

import slick.backend.StaticDatabaseConfig

@StaticDatabaseConfig("file:src/main/resources/application.conf#tsql")
object PlainExample extends App {
  ...
}

The annotation is specifying an entry in a configuration file.
That configuration file looks like this:

tsql = {
  driver = "slick.driver.PostgresDriver$"
  db {
    driver = "org.postgresql.Driver"
    url = "jdbc:postgresql://localhost/chat"
    username = "richard"
    password = ""
    connectionPool = disabled
  }
}

(Note the $ in the class name is not a typo. The class name is being passed to Java's Class.forName, but of course Java doesn't have a singleton as such. The Slick configuration does the right thing to load the module when it sees the $. These shenanigans are described in Chapter 29 of Programming in Scala.)

A consequence of supplying a @StaticDatabaseConfig is that you can define one database configuration for your application and a different one for the compiler to use. That is, perhaps you are running an application, or test suite, against an in-memory database, but validating the queries at compile time against a production-like integration database. It's also worth noting that tsql works with inserts and updates too:

val greeting = "Hello"
val program: DBIO[Seq[Int]] =
  tsql"""insert into "message" ("content") values ($greeting)"""

At run time, when we execute the query, a new row will be inserted. At compile time, Slick uses a facility in JDBC to compile the query and retrieve the metadata without having to run the query. In other words, at compile time, the database is not mutated.

doobie

Both doobie and Slick 3 use similar patterns for executing a query – in fact, doobie was the first database technology I saw doing this. Queries are represented using our friend the free monad and interpreter that Noel has been describing in recent posts. We're just looking at the query-checking part of doobie here. The excellent book of doobie is the place to go to learn more about the whole project.
The select query we've been using in this post looks like this in doobie:

val query: Query0[String] =
  sql""" select "content" from "message" """.query

I've given the type declaration for clarity, although you might write .query[String] instead (which reads better to my eyes). In terms of checking this query, doobie gives us a check method:

val xa = DriverManagerTransactor[Task](
  "org.postgresql.Driver", "jdbc:postgresql:chat", "richard", ""
)

import xa.yolo._

query.check.run

This outputs:

  select "content" from "message"

  ✓ SQL Compiles and Typechecks
  ✕ C01 content VARCHAR (varchar) NULL  →  String
    - Reading a NULL value into String will result in a runtime failure.
      Fix this by making the schema type NOT NULL or by changing the
      Scala type to Option[String]

This is telling me I forgot to add a NOT NULL constraint on my PostgreSQL schema. Fixing that problem (alter table "message" alter column "content" set not null) gives a clean bill of health:

  select "content" from "message"

  ✓ SQL Compiles and Typechecks
  ✓ C01 content VARCHAR (varchar) NOT NULL  →  String

Now check is a run-time check, which is a bit too late to be learning about possible problems. What doobie provides is a way to execute checks as tests. This is set out in chapter 11 of the book of doobie, but here's a quick example:

import doobie.contrib.specs2.analysisspec.AnalysisSpec
import org.specs2.mutable.Specification

object Queries {
  val allMessages =
    sql""" select "content" from "message" """.query[String]
}

object AnalysisTestSpec extends Specification with AnalysisSpec {
  val transactor = DriverManagerTransactor[Task](
    "org.postgresql.Driver", "jdbc:postgresql:chat", "richard", ""
  )
  check(Queries.allMessages)
}

Here we're using doobie's add-on for specs2 to perform analysis of a query. Note that I've changed the query to be a value in an object. Pulling queries out into some kind of module is going to be good practice if you're using this style of query checking.
Notice, as with Slick, we're providing database connection information that could be different from the database we're developing against. You can probably test against multiple databases, if that's useful to you. We can run our test suite as we usually would:

> test
[info] Compiling 1 Scala source to target/scala-2.11/test-classes...
[info] AnalysisTestSpec
[info]
[info]   Query0[String] defined at query-specs.scala:9
[info]   select "content" from "message"
[info]   + SQL Compiles and Typechecks
[info]   + C01 content VARCHAR (varchar) NOT NULL  →  String
[info]
[info] Total for specification AnalysisTestSpec
[info] Finished in 25 ms
[info] 2 examples, 0 failure, 0 error

As you might imagine, "+ SQL Compiles and Typechecks" fails if you have a typo in the SQL, incorrect column names, or the types don't align. Here's one example where I've said I expect a String from a query, but selected the id column:

[info]   Query0[String] defined at query-specs.scala:9
[info]   select "id" from "message"
[info]   + SQL Compiles and Typechecks
[info]   x C01 id INTEGER (serial) NOT NULL  →  String
[error]    x INTEGER (serial) is ostensibly coercible to String according to the JDBC
[error]      specification but is not a recommended target type. Fix this by changing the
[error]      schema type to CHAR or VARCHAR; or the Scala type to Int or JdbcType. (query-specs.scala:9)

The test fails, which is what we want.

Conclusions

I find it easier to think about queries in terms of SQL than alternative formulations. However, I've tended to avoid using straight SQL in a project because it's so easy to introduce an error when changing code. But here we have two projects offering great opportunities to remove that risk. Both doobie and Slick are using the same mechanisms (prepared statements and JDBC metadata). The routes taken at the moment are different, focusing on analysis and test-time checking (doobie) and compile-time checking (Slick).
If you want to try out the code in this post, I've created a GitHub project for you.
http://underscore.io/blog/posts/2015/05/28/typechecking-sql.html
Description

CalciteSchema by default uses a cache to store tables, sub-schemas, and functions. This works perfectly for a schema-based system, yet creates problems for Drill, which dynamically explores the schema on the fly during query execution. One solution is to refactor CalciteSchema and make it an interface. The default implementation would still use the current behavior. Further, it would allow other systems to extend the default behavior and make CalciteSchema work for Drill as well.

Background information: the issue around CalciteSchema is one of the reasons that Drill has to use a forked version of Calcite. Hopefully, if we can resolve this issue, we are one step closer to removing the forked Calcite in the near future.

Issue Links
- incorporates CALCITE-1742 Create a read-consistent view of CalciteSchema for each statement compilation - Closed
- relates to DRILL-4093 Use CalciteSchema from Calcite master branch - Open

Activity

Julian Hyde, thanks for reviewing and merging to Calcite! I'm really glad to see we finally got rid of one of the factors forcing Drill to use a forked version. We will continue to work on the remaining two, as listed in DRILL-3993, so that Drill no longer needs the forked version.

Fixed in. Thanks for the PR, Jinfeng Ni!

Julian Hyde, I uploaded the revised patch based on your comments.

1. Make SimpleCalciteSchema / CachingCalciteSchema package-protected. Move createRootSchema() to CalciteSchema. Add a boolean flag to indicate which version is to be created. By default, it will create the original one.
2. Modify howto.md for CalciteRootSchema. Also specify when CalciteRootSchema is to be removed.
3. Eliminate the common code in SimpleCalciteSchema and CachingCalciteSchema. The common code is moved to an abstract class. Declare a couple of abstract methods to handle the implicit tables/subschemas/functions. Implement those methods in the two subclasses.
4.
Instead of returning empty implicitSubSchemaCache, implicitTableCache, and implicitFunctionCache for SimpleCalciteSchema, the implemented functions in SimpleCalciteSchema do not rely on the caches; they get what they need directly from 'schema'.

5. The original CalciteSchema code seems to have a bug in getSubSchemaMap() and getTablesBasedOnNullaryFunctions(): there are naming conflicts between explicit and implicit subschemas. Unlike ImmutableSortedSet.Builder, ImmutableSortedMap.Builder does not handle duplicate keys and just throws an exception. From its javadoc:

    /**
     * Associates {@code key} with {@code value} in the built map. Duplicate
     * keys, according to the comparator (which might be the keys' natural
     * order), are not allowed, and will cause {@link #build} to fail.
     */
    @Override public Builder<K, V> put(K key, V value) {
      entries.add(entryOf(key, value));
      return this;
    }

I modified the code to handle the duplicate names, and added a unit test case for the naming conflicts.

6. For now, I feel the case of a hybrid tree of schemas might not be needed. The normal Calcite code would use the caching version, while Drill would use the non-caching version. Therefore, I did not add a boolean flag to the CalciteSchema.add() method. If we later find it's necessary, we can add it.

7. Add a unit test for SimpleCalciteSchema.

I ran some Drill test cases with the new patch. Seems to be fine, after resolving one issue in Drill's JdbcSchema. I'm going to run the whole test suite, and will update the results shortly. Please let me know if you see any problem in the modified patch. Thanks!

I'm working on it right now. I'll post a patch this afternoon or tonight.

Jinfeng Ni, can you get something done today? Say, responses to the review comments and a rough cut of the code. It doesn't matter if the code isn't fully working. I can take it from there, and we can start a release tomorrow. SimpleCalciteSchema and CachingCalciteSchema are the only two implementations of CalciteSchema we need, right?
It's not in Calcite's interest to support this as a public API or SPI. Even most Calcite code can use the base class. So let's lock this down. Make SimpleCalciteSchema and CachingCalciteSchema and their constructors package-protected. You'll have to move the create methods to CalciteSchema, maybe with a boolean caching parameter.

- Need to modify howto.md (references CalciteRootSchema)
- In CalciteRootSchema, say when it will be removed
- Add a test for SimpleCalciteSchema. It is currently not instantiated anywhere in Calcite.
- Would there ever be a hybrid tree, say with a simple root and a caching sub-schema? If so, the CalciteSchema.add method could have an optional boolean caching parameter.
- A few methods have javadoc even though they implement/override a method with the same javadoc; please remove it.
- There's a lot of common code between SimpleCalciteSchema and CachingCalciteSchema. Could you eliminate it all? Say, create an instance of CachingCalciteSchema where implicitSubSchemaCache, implicitTableCache, and implicitFunctionCache always return an empty cache.

Julian Hyde and Jacques Nadeau, I revised the patch based on the comments. The new patch changes CalciteSchema to an abstract class. The default one is CachingCalciteSchema, which uses a cache for the implicit subschemas, tables, and functions. The new one, SimpleCalciteSchema, shares the same data structures/code for the explicit subschemas, tables, and functions (the previous patch used a different structure/code), but does not use a cache for the implicit ones. (I could refactor the code so that the code accessing the explicit subschemas etc. moves to the abstract class. But I want you to take a look first to see whether CachingCalciteSchema/SimpleCalciteSchema looks fine, as it is easier to see the changes this way.)

The reason that Drill cannot use the cache is that Drill dynamically explores the list of tables.
For example, for a file-system schema, Drill will not list all the files/tables under that schema initially. Only when a table is queried is it added to the table list for the file-system schema. That's different from the normal CalciteSchema, where the table list is pre-determined.

Also, per Julian's suggestion, I deprecated CalciteRootSchema and added checking code at the places where the rootSchema is used. Please take another look at this patch and see if there is any problem. Link to the PR:

BTW: I ran Drill's regression tests with the patch, and did not see any regression.

Jinfeng Ni, do you think there will be an updated patch soon so we can get this into v1.5?

I am leaning towards approach number 1, where CalciteSchema is an abstract class with two concrete subclasses. Thanks for clarifying about getTable; we should definitely keep the return type as TableEntry. Would this approach work for you guys?

Regarding SchemaPlus and SchemaPlusImpl: SchemaPlus is intended to be used by users (it appears as a parameter in several SPI methods) but not instantiated by them. Users should only use the SchemaPlus they are given by the system. In fact, the purpose of SchemaPlus was to expose to user code, in a read-only manner, some of the information in CalciteSchema. It is analogous to java.sql.Connection.rollback(Savepoint), which can only be called with a Savepoint created by the same Connection.

In the first commit, Jacques Nadeau changed CalciteSchema to an abstract class, and provided two extensions: caching and non-caching. The second commit changes CalciteSchema to an interface, and moves code around between the abstract class and the Impl. The third commit adds a non-caching implementation, based on the interface defined in the second commit.

Regarding the comment about the return type of getTable(): I guess you are referring to Jacques's commit, which happened before Calcite changed the return type from Pair<String, Table> to TableEntry [1].
The second commit actually changes it to TableEntry. [1]

As I said yesterday, I tried a different approach. It turns out it did not work. Basically, I wanted to avoid using CalciteSchema in Drill (supposing CalciteSchema is not part of the public API, and should not be used in an adapter like Drill?). The reason Drill originally has to use CalciteSchema is to get an instance of SchemaPlus, and pass that SchemaPlus to FrameworkConfig as the defaultSchema. With this idea, I implemented DrillSchemaPlus, an implementation of the SchemaPlus interface, and maintained the sub-schemas / tables etc. in DrillSchemaPlus. However, I hit a ClassCastException, as a couple of places in the Calcite core code expect an instance of CalciteSchema$SchemaPlusImpl, instead of an arbitrary implementation of SchemaPlus. [1] [2]

    public static CalciteSchema from(SchemaPlus plus) {
      return ((SchemaPlusImpl) plus).calciteSchema();
    }

I'm not sure whether SchemaPlus is part of the public API, or whether each project could use a different implementation of SchemaPlus. But it seems the rest of Calcite closes off that possibility, as people have to use CalciteSchema to get an instance of SchemaPlus. (Does that make CalciteSchema not only used internally in Calcite, but also in other projects, thus contradicting the comment in CalciteSchema?) Anyway, for now, I do not see a way to avoid using CalciteSchema in order to use Calcite's Frameworks code. [1] [2]

I think the approach of having variants of CalciteSchema that do and do not cache will work. Especially if other projects are not allowed to subclass them. I'm still reviewing, and trying to figure out the relationship among the various commits. Would we need all of the commits? My gut tells me this should be a fairly small change. And by the way, we should obsolete CalciteRootSchema and just 'assert schema.isRoot()' in the places that formerly took a CalciteRootSchema.
I don't yet see a good reason for changing the return type of getTable and other methods from TableEntry to Pair<String, Table>.

Let me try another approach on the Drill side to see if we can go without using CalciteSchema. I will update with the result shortly.

Whether we put it in a public API is not really the purpose of this patch (I guess that's a different topic). All we want is to make sure Drill can continue to use Calcite, so that we do not have to keep our forked version. As Jacques Nadeau said, Calcite's code introduced a breaking change for Drill around the 0.9 release. We proposed two solutions: 1) either Calcite provides two implementations of CalciteSchema, or 2) Calcite allows people to implement according to the public API (which I guess you do not want to make public). What's your recommendation? Personally, I do not care which solution we use, as long as Calcite does not break Drill.

Jinfeng Ni, read the comment at the top of CalciteSchema: "Wrapper around user-defined schema used internally." In Java, many things need to be public so they can be used by other packages. That does NOT make them a public API!! Let's get this resolved for 1.5.

Julian Hyde, do you have a preference for which approach from above to take? It seems like removing the caching behavior from the existing CalciteSchema class is the least intrusive/impactful.

I added back the SimpleCalciteSchema in the pull request. This is the implementation of CalciteSchema for Drill only; Drill currently does not use features like lattices, therefore some functions simply throw an unsupported exception or return an empty list. I'll post a patch which includes the SimpleCalciteSchema.

As Jacques Nadeau said, the caching code in CalciteSchema breaks almost every query in Drill. If we could have a different implementation without the caching, then it probably would solve our problem. That's why I moved code around between the interface and the abstract class, to allow CalciteSchema to have different implementations.
I don't quite follow Julian Hyde's API concern: the methods defined in the new interface are mostly public methods of CalciteSchema, which is a public class. Why is such a change regarded as increasing the API surface (public methods in a public class vs. methods in an interface)? I assume a user of Calcite would see those methods in either case.

Jinfeng Ni, I wonder if you can post an updated patch that includes the SimpleCalciteSchema.

The goal isn't really to create a larger API surface area. The main issue we're hitting is that CalciteSchema was changed around 1.0 so that it cached a huge amount of additional state internally. This was a substantial breaking change for Drill. Without major refactoring in Drill, we couldn't have the schema caching a bunch of state. As such, we needed to basically maintain the old implementation of Schema, which was minimally stateful. This patch achieves that goal. To get off the fork, we need to either have two implementations, expose an API so we can have our own implementation, or remove the caching. Any of the options seems fine. I probably prefer the last, since it can then be left to framework implementers (and keeps the code as simple as possible). I think it is very important for interoperability (e.g. the Phoenix integration), so let's figure out the best way to solve this.

My concern about this patch is the huge increase in API surface area. There are a lot of classes and interfaces that include the word "schema", and it's frankly a rat's nest. You didn't create the rat's nest: I created Schema and SchemaFactory and SchemaPlus and SchemaPlusImpl and CalciteSchema. But the key part of that design was the words "used internally" against CalciteSchema. And you want to make that implementation public. So, I think this patch is taking us in the wrong direction. Can we start off with your requirements?
I believe your requirement is to have a representation of a schema that knows its structure (name, parent schema, child schemas, child objects), that you build yourself, and that does not need to be re-created for every request.

I submitted a PR at: It contains two commits. The first one is the original one made by Jacques in Drill's forked Calcite. I did some refactoring work in the second commit, such that 1) CalciteSchema becomes an interface, 2) CalciteAbstractSchema contains the implementation shared by the caching and non-caching versions, and 3) the original caching version extends CalciteAbstractSchema and goes into CalciteSchemaImpl.

I removed SimpleCalciteSchema from Jacques's commit. I intended to put it in Drill's code by extending CalciteAbstractSchema, as I think a different system should be able to provide its own implementation of the CalciteSchema interface if it wants different behavior than the default CalciteSchemaImpl. If the above logic makes sense, then it seems reasonable for each system to add tests for its own implementation, and Calcite should only have tests for the default one: CalciteSchemaImpl. Jacques Nadeau and Julian Hyde, could you please take a look at the PR?

It makes sense to get this up with tests against it. If some consumers of Calcite want to manage caching on their own, we shouldn't force them to fork Calcite.

Resolved in release 1.5.0 (2015-11-10)
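The design the thread converges on (one abstract schema class with a caching subclass for conventional databases, and a non-caching "simple" subclass for systems like Drill that discover tables lazily) can be sketched in miniature. This is an illustration of the pattern only, with hypothetical names; it is not Calcite's actual code:

```python
from abc import ABC, abstractmethod

class AbstractSchema(ABC):
    """Shared logic lives here; subclasses decide how implicit tables are found."""
    def __init__(self, backend):
        self.backend = backend  # object that can list and load the real tables

    @abstractmethod
    def get_table(self, name): ...

class CachingSchema(AbstractSchema):
    """Enumerate the backend once and answer from that snapshot
    (analogous to Calcite's default, cache-based behavior)."""
    def __init__(self, backend):
        super().__init__(backend)
        self._cache = {t: backend.load(t) for t in backend.list_tables()}

    def get_table(self, name):
        return self._cache.get(name)

class SimpleSchema(AbstractSchema):
    """No cache: ask the backend every time, so tables that appear after the
    schema object was built remain visible (the Drill requirement)."""
    def get_table(self, name):
        if name in self.backend.list_tables():
            return self.backend.load(name)
        return None

class DictBackend:
    def __init__(self):
        self.tables = {}
    def list_tables(self):
        return self.tables.keys()
    def load(self, name):
        return self.tables[name]

backend = DictBackend()
backend.tables["users"] = "users-table"
caching, simple = CachingSchema(backend), SimpleSchema(backend)

backend.tables["logs"] = "logs-table"   # appears after schema construction
print(caching.get_table("logs"))        # None: the snapshot is stale
print(simple.get_table("logs"))         # logs-table: discovered dynamically
```

The stale lookup in the last lines is exactly the failure mode described above: a cache built at construction time cannot see tables a file-system schema only materializes when they are first queried.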
https://issues.apache.org/jira/browse/CALCITE-911
Myths About Code Comments timothy posted more than 4 years ago | from the myth-conceptions dept. . (5, Insightful) yog (19073) | more than 4 years ago | (#30616696). (3, Insightful) Brian Gordon (987471) | more than 4 years ago | (#30616736). Re:One person's myth is another person's fact. (0) Anonymous Coward | more than 4 years ago | (#30616768) Mind == blown. Re:One person's myth is another person's fact. (3, Insightful) Nutria (679911) | more than 4 years ago | (#30616944) Sure. It's a damned fast way to multiply or divide by a power of 2. I'd never do it in a DP shop, though... Re:One person's myth is another person's fact. (0) Anonymous Coward | more than 4 years ago | (#30616970) If this concept is hard for you, you're in a wrong business. Get out while you can. You made his argument for him (2, Insightful) dreamchaser (49529) | more than 4 years ago | (#30616934) Re:You made his argument for him (4, Insightful) DG (989) | more than 4 years ago | (#30617178) Re:One person's myth is another person's fact. (1) DrLang21 (900992) | more than 4 years ago | (#30617156) I think that a myth of software development is that every line of code should always be simple and easy to understand. No one here is saying that. They're just saying that you should comment it, so that if it is complex, the next guy can figure out what you did without spending days thinking about it. Re:One person's myth is another person's fact. (5, Informative) v1 (525388) | more than 4 years ago | (#30616748):One person's myth is another person's fact. (2, Insightful) MichaelSmith (789609) | more than 4 years ago | (#30616750):One person's myth is another person's fact. (2, Informative) innocent_white_lamb (151825) | more than 4 years ago | (#30616838).) Thank you.. (0) Anonymous Coward | more than 4 years ago | (#30617002) ..very much for mentioning this. Screenshots so far look very promising. 
---> Fresh installation of devel/geany started at: Sat, 02 Jan 2010 00:07:31 +0100 ---> Installing 'geany-0.18' from a port (devel/geany) Re:One person's myth is another person's fact. (2, Insightful) Tanuki64 (989726) | more than 4 years ago | (#30616984). Code format (1) omb (759389) | more than 4 years ago | (#30617176) Personally I hate the K&R if (foo) { } style. I like { } An 80 column line limit is also OLD, I never edit in narrow windows >160 common. drop, but I can't always get what I like. Re:One person's myth is another person's fact. (3, Insightful) khallow (566160) | more than 4 years ago | (#30616790) Re:One person's myth is another person's fact. (3, Insightful) 93 Escort Wagon (326346) | more than 4 years ago | (#30616928):One person's myth is another person's fact. (1) TheRaven64 (641858) | more than 4 years ago | (#30616814):One person's myth is another person's fact. (1) Draek (916851) | more than 4 years ago | (#30617092) If you can tell the two apart with 100% accuracy, then you can skip writing comments. Unfortunately, I've never met a developer that could. I can. To my own standards. The thing that everybody always misses is that the difference between a "good" and a "bad" comment is entirely subjective. Sure, most people would agree that a "returns X" comment for a get_x() method is useless, and most would say that your average Perl one-liner must always be accompanied by a suitable comment, but it's in that nebulous middle ground where opinions are split, depending on the programmer, his team, their respective abilities and experience, the time-frames involved and myriad other factors. So most people try to guesstimate conservatively, and comment more than they feel they should. But even then, it only takes one unusual programmer or a stupid manager with the idea that "everything should be commented" no matter how trivial, and we get yet another "proof" of the average programmer's supposed arrogance and disdain for commenting their code.
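[Editor's note: the power-of-two shift trick debated earlier in the thread is a good concrete case of a line that is obvious to some readers and opaque to others; a one-line comment settles it. A small illustration, in Python purely for demonstration:]

```python
# Shifting left by n multiplies by 2**n; shifting right by n floor-divides
# by 2**n. Without a comment, `x << 3` makes the reader recall the trick;
# with one, the intent is explicit.
def scale_up(x: int) -> int:
    return x << 3   # x * 8, done as a single shift

def scale_down(x: int) -> int:
    return x >> 1   # x // 2 (note: floors toward negative infinity)

print(scale_up(5))    # 40
print(scale_down(9))  # 4
```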
Re:One person's myth is another person's fact. (2, Insightful) hedwards (940851) | more than 4 years ago | (#30617122) Re:One person's myth is another person's fact. (2, Insightful) WED Fan (911325) | more than 4 years ago | (#30616840) Re:One person's myth is another person's fact. (1, Insightful) SanityInAnarchy (655584) | more than 4 years ago | (#30616940)... Do you really want that to be commented? Think about it -- I bet you understand what that does without a single comment, without even knowing the language or the framework. Comments only make it longer, harder to maintain, and less readable, when your code looks like that. Re:One person's myth is another person's fact. (2, Insightful) SteveWoz (152247) | more than 4 years ago | (#30616942):One person's myth is another person's fact. (1) buddyglass (925859) | more than 4 years ago | (#30616948) Re:One person's myth is another person's fact. (0) Anonymous Coward | more than 4 years ago | (#30617028) Yes. Many programmers (most, I would wager) are not hot shots. Many of them may think they are, but they're not. Maybe the percentage of "meh" is even 99%; who knows. But surely you do not believe that EVERY person is mediocre. The problem I have with the view you have presented (and I mean nothing personal by this) is that you've essentially mandated that everyone dumb themselves down to some arbitrary level. How will the true geniuses ever shine if we require everyone to be of the same brightness? And no, I don't consider myself to be one of the geniuses who should be excused from the normal policies, I just think that if we allow this kind of thinking to take root, we'll be selling ourselves short (as a company, or country, or species, take your pick) of what we could otherwise achieve. 
Sure, that will make it harder as the number of people increases (as you have a tougher time picking out the true geniuses from the rest), but I do not think we should be using "finding real talent is hard" as an excuse to make "just enforce mediocrity" our policy. Re:One person's myth is another person's fact. (1, Informative) Anonymous Coward | more than 4 years ago | (#30617042). Re:One person's myth is another person's fact. (2, Interesting) b4dc0d3r (1268512) | more than 4 years ago | (#30617174).." It's also a warning that your team needs to put everything down and go read other people's source code. If you know how other people write code, you might agree or not with the way it was done, but it certainly opens you up to new ways of doing things. It also makes it easier to read uncommented code, or well-commented code for that matter. Something as simple as looping through an array and doing something with the data barely requires a comment - anyone maintaining the code should have an idea how loops should work. If you do something like start at index 1 of a zero-based array, that requires explanation. Otherwise, these things are obvious or you shouldn't be paying the maintainer. It's clear you will play the bureaucrat, requiring all red tape to be completed in triplicate before one may use the stapler. You will never pay me because I will never work for you. If you're going to manage people, you have to understand how they work, and even if you think you do, even if you were once in that position, it's obvious you either forgot or learned in a different era. Think about this for a while. Every compiled application out there is completely documented. You can disassemble anything and figure out how it works. If your coders have that mentality, hints like variable and function names, indentation, and clearly indicated array access are a luxury. If your coders are not thinking this way, they are write-only monkeys and deserve no employment. 
You might think I don't comment my code, but I very much do. If you see a comment in my code, it's because it's IMPORTANT, not because it's REQUIRED. The IMPORTANT stuff gets buried in the noise of what's REQUIRED, and no one can tell a difference. You have a low user id, and your journal consists of complaining about slashdot moderation. I'm guessing you're old, having moved from coding long ago to managing a team, and maybe past that. You have little tolerance, despite what you claim in some of your other comments, and like to have things your way because that's what makes sense to you. Imposing your methodology on others isn't the best way to get things done - understanding why people do things the way they do, why their system makes sense to them, is. What you're suggesting is an absolutist solution in which there is no "why", no reason or rationale, it's "just because." That has no place in the real world, and maybe neither do people like you. Anonymous Coward | more than 4 years ago | (#30616700) Re:No Comment (0) Anonymous Coward | more than 4 years ago | (#30616718) Microsoft is a great example of why commenting code is a good idea Cliche, but true... (5, Insightful) Xaroth (67516) | more than 4 years ago | (#30616716) Clean code tells you how, good comments tell you why. Fixing the "how" becomes significantly easier when you know "why" the code was there in the first place. Re:Cliche, but true... (1, Insightful) Anonymous Coward | more than 4 years ago | (#30616758) This is absolutely true and even how becomes more important with lower-level languages where the code probably will not imply the intended outcome. Try figuring out clever assembly code without a running commentary; it's not fun. Re:Cliche, but true... (2, Insightful) Brian Feldman (350) | more than 4 years ago | (#30616850):Cliche, but true... (1) Nerdfest (867930) | more than 4 years ago | (#30616912) Re:Cliche, but true... 
(1) PakProtector (115173) | more than 4 years ago | (#30617054) C is a high level language. Anything more complex is balderdash and chicanery. Re:Cliche, but true... (3, Insightful) Wraithlyn (133796) | more than 4 years ago | (#30616896)... (1) Drethon (1445051) | more than 4 years ago | (#30617166) Re:Cliche, but true... (3, Interesting) BikeHelmet (1437881) | more than 4 years ago | (#30617216). Over documentation is good (1) nurb432 (527695) | more than 4 years ago | (#30616722) (5, Interesting) fdrebin (846000) | more than 4 years ago | (#30616744):Over documentation is good (0) Anonymous Coward | more than 4 years ago | (#30616796) God yes.... A system I work with now has 29 serial links (modbus) to various places... the links and bus registers etc in external devices are ONLY documented in the header files of the C code used on embedded controllers... Trying to now virtualize this fecking mess is a major pain in the ass as we have to sit digging through header files and make maps of registers in use by hand... By hand because the code format is not at all consistent, neither is naming convention... Writing a parser turned out to be a waste of time since the header format will never be used again. *sigh* Kill me.... Oh, and the code has been updated but not the comments so you have gems like // register 40914 a = 30913; etc... kill meeeeee Re:Over documentation is good (2, Insightful) jocabergs (1688456) | more than 4 years ago | (#30617004) Wrong on all accounts (5, Insightful) smartin (942) | more than 4 years ago | (#30616728):Wrong on all accounts (1, Informative) MichaelSmith (789609) | more than 4 years ago | (#30616808) (0) Anonymous Coward | more than 4 years ago | (#30616908) They can suck my sweaty balls. Some asshat believes he is too good to comment his code? I don't have the time to waste deciphering his spaghetti rats nest of global variables and copy/paste flow control. 
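[Editor's note: Xaroth's "clean code tells you how, good comments tell you why" point is easy to demonstrate. In the hypothetical snippet below, the first comment merely restates the code, while the second records a constraint no amount of clean code could convey:]

```python
import time

def fetch_with_retry(fetch, attempts=3):
    # BAD comment: restates the how -- "loop over the attempts".
    for i in range(attempts):
        try:
            return fetch()
        except IOError:
            # GOOD comment: records the why -- the upstream service
            # rate-limits bursts, so backing off exponentially between
            # retries avoids getting the client banned.
            time.sleep(2 ** i * 0.01)
    raise IOError("all attempts failed")

calls = []
def flaky():
    calls.append(1)
    if len(calls) < 3:
        raise IOError("transient")
    return "ok"

print(fetch_with_retry(flaky))  # ok
```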
Re:Wrong on all accounts (0) Anonymous Coward | more than 4 years ago | (#30616964) Re:Wrong on all accounts (3, Interesting) CAIMLAS (41445) | more than 4 years ago | (#30616810) (5, Insightful) TheRaven64 (641858) | more than 4 years ago | (#30616832) (4, Informative) tomhath (637240) | more than 4 years ago | (#30616846):Wrong on all accounts (2, Interesting) SerpentMage (13390) | more than 4 years ago | (#30616880) (2, Insightful) Nerdfest (867930) | more than 4 years ago | (#30616976) Re:Wrong on all accounts (3, Interesting) Nerdfest (867930) | more than 4 years ago | (#30616982) Has No One Actually Studied This? (4, Insightful) Greg Hullender (621024) | more than 4 years ago | (#30616762)? (0) Anonymous Coward | more than 4 years ago | (#30616826) Bonus points to the people who write their comments in clear grammatically correct prose. Re:Has No One Actually Studied This? (1, Funny) Anonymous Coward | more than 4 years ago | (#30616854) In thirty years of programming, I've heard one person after another argue about how to comment and how much to comment, but what I've never seen is any kind of serious study attempting to measure what actually works best. // TODO: Implement mathematically optimal commenting Re:Has No One Actually Studied This? (5, Insightful) tomhath (637240) | more than 4 years ago | (#30616962) (3, Insightful) MichaelSmith (789609) | more than 4 years ago | (#30617116) The code which is business critical and necessarily complex gets commented a lot as a mitigation measure. But it still has problems, often because the business requirements change a lot or are poorly defined. Commenting "why" (2, Informative) Chris Newton (1711450) | more than 4 years ago | (#30617060). More verbose == less readable? (4, Insightful) nmb3000 (741169) | more than 4 years ago | (#30616770):More verbose == less readable? 
(1) mpc92 (302291) | more than 4 years ago | (#30616954) I find over and over that I'd rather have 10 simple lines over 1 tricky line for one simple reason: invariably one of the tricky actions holds the key to a bug, and I want to have more precise control over where my breakpoint is (which method call in the tricky line is throwing that null reference exception anyhow?). Re:More verbose == less readable? (1) buddyglass (925859) | more than 4 years ago | (#30617014) (3, Insightful) Anonymous Coward | more than 4 years ago | (#30617056). Re:More verbose == less readable? (1) pjt33 (739471) | more than 4 years ago | (#30617140) Unless you're doing it for absolutely required performance reasons Or numerical reasons. Sometimes it happens that the 3-line version is algebraically equivalent to the 10-line version, but more stable numerically. In this case you may need to replace 3 lines of comments with 20 lines sketching the outline of the 10-line version and giving a proof of correctness of the 3-line version. In other news... (4, Insightful) CAIMLAS (41445) | more than 4 years ago | (#30616776):In other news... (1) mewshi_nya (1394329) | more than 4 years ago | (#30616830) sentences linked into a meaning into a paragraph into a story! Brilliant! Actually, I really like this post. Someone, please mod this up. If more people wrote simpler code, it would be easier to read on my end, as a non-programmer Re:In other news... (0) Anonymous Coward | more than 4 years ago | (#30617022) undocumented code is unprofessional. unprofessionals dont get hired often or keep their jobs for long. if you work for me, you document your code, cause unless specifically agreed to beforehand, your code is part of a product belonging to my company, and i will not be screwed over down the road cause you left, and the code is nigh undecipherable except by the one who wrote it. Re:Shorter sentences (1) paylett (553168) | more than 4 years ago | (#30617126). 
Complexity of that last sentence: * 52 words, * 17 punctuation characters, * 5 parentheses blocks, * at least half a dozen ideas. But I'm just having fun. I agree with you 100%. Blame my professor (1) Quila (201335) | more than 4 years ago | (#30616804) I had excessive commenting drilled into me in college. It's one of the things I've had to unlearn. Anonymous Coward | more than 4 years ago | (#30616820)." Please no (5, Insightful) dachshund (300733) | more than 4 years ago | (#30616836):Please no (1) MichaelSmith (789609) | more than 4 years ago | (#30616946):Please no (1) gnasher719 (869701) | more than 4 years ago | (#30617088)... The guy is completely illogical. Next you review some of his code, and you insist that all his code should be single spaced (no blank lines between lines of code) because you have trouble understanding the code if it doesn't fit on a single screen. What is he going to do then? Re:Please no (0) Anonymous Coward | more than 4 years ago | (#30617206) This attitude is the same as the original post. 'my code is perfect and everyone should conform to what I do if you do not like it here are some hoops to jump thru to be like me'. The guy asked you to match what the rest of the team was doing and you came back with a snide remark. Instead of showing him why that style is 'worse'/'better'. The reason for doing these things is not always for yourself. It is for other people to help you out. You're not as good as you think you are. You are also not always around, nor will you always be around. I have dealt with programmers such as you many times. MANY times everyone else ends up fixing the mess you make. Then you turn around thinking they are all full of crap. I am a pretty good programmer. However, *THE* highest compliment I ever get from other programmers is 'your code is easy to read and the comments are always spot on.' When writing code you are writing for 3 groups: 1) the compiler, 2) yourself, and 3) most importantly, your teammates.
For 1 you could write cryptic ass crap and the computer would figure it out. For 2 you could also write cryptic ass crap and you MIGHT figure it out today, but what about 2 months from now? For 3 you can NOT write cryptic ass crap. You *MUST* communicate to them what you are doing. If this means double spacing your code then so be it. BTW after 20 years of coding I have found double spacing is most readable. It helps you make your code look good and easier to read. It looks 'funky' at first but you quickly come to like it. When it's 2 in the morning and I have to tweak some code I do not want to have to worry about what options *YOU* were using in your editor and making my options match. You will have style fights and you will lose these battles; I have lost/won many. I have worked on projects where everyone did 1 style. Those projects were successful. I have also been on many projects where 'we are all professionals and write however we like' and those all failed. The reason they failed? The team was not a team, it was a group of 'hot shots'. One of the first rules of my projects is 'use the style of the group or find another group'. This helps promote teamwork in a weird way. It also helps others quickly find bugs as the code will just 'look weird'. This is one of the most important tools you will have for writing 'good' code. Do not throw it away because you think you are 'better' than everyone else.

Re:Please no (1) The_reformant (777653) | more than 4 years ago | (#30617030) I).

Re:Please no (1) farnsworth (558449) | more than 4 years ago | (#30617182) Christ, I know everyone has their own personal style and everything, but this is just pernicious. Everyone is entitled to their own personal style, but context is also very important. If you are writing a math-heavy algorithm in c, comments are crucial, both at the API level and within the implementation.
On the other hand, if you are writing a webapp in c#, an organizational mandate to "document every public method and public class" can be a huge waste of time if not harmful.

Lies or Wishful Thinking (1) KalvinB (205500) | more than 4 years ago | (#30616844) (2, Insightful) mandelbr0t (1015855) | more than 4 years ago | (#30616856):Choice of Language (1) Nerdfest (867930) | more than 4 years ago | (#30616898)

I am NOT a coder... but... (1) GuyFawkes (729054) | more than 4 years ago | (#30616862) line 10, the comment may be very useful, aha, that's why this works on X machine but not Y machine, etc.

Does he think comments are pseudocode? (4, Informative) EsJay (879629) | more than 4 years ago | (#30616868) If your comments are that detailed, you're doing it wrong.

Re:Does he think comments are pseudocode? (2, Interesting) presidenteloco (659168) | more than 4 years ago | (#306169, and renders your code library low quality in one stroke.

Re:Does he think comments are pseudocode? (1) Arimus (198136) | more than 4 years ago | (#30617082) Depends, if you are using doxygen style comments and change the function header then the comment changes...

Re:Does he think comments are pseudocode? (1) noidentity (188756) | more than 4 years ago | (#30617188) No problem, just add some comments to explain the details.

So who is this guy? (5, Insightful) anti-NAT (709310) | more than 4 years ago | (#30616872):So who is this guy? (0) Anonymous Coward | more than 4 years ago | (#30617012) slow news day. new here?

Re:So who is this guy? (0) Anonymous Coward | more than 4 years ago | (#30617120) IOW, what makes him special enough to be Slashdot front page news? IT'S BOOKS 2.0

Re:So who is this guy? (1) oogoliegoogolie (635356) | more than 4 years ago | (#30617196) I have no idea who this guy is either. Can someone fill us in? Compare / contrast.
(1) fahrbot-bot (874524) | more than 4 years ago | (#30616884)" (4, Insightful) lax-goalie (730970) | more than 4 years ago | (#30616920) (5, Funny) caywen (942955) | more than 4 years ago | (#30616932):every line of code should be commented (1) mandelbr0t (1015855) | more than 4 years ago | (#30617124)

Not commenting is a sign of not thinking or caring (3, Insightful) presidenteloco (659168) | more than 4 years ago | (#30616936)... (1) Restil (31903) | more than 4 years ago | (#30616958) for any comments at all. Within a function, if you only have nested loops one deep, there's probably no need to comment each closing bracket. But if you have some horrendous 12 dimensional loop with lots of if statements, it might save a few headaches to know exactly which bracket lines up with which routine. It's just a matter of commenting what is necessary, and avoiding it when it's not. -Restil

Employment prospects (1) paylett (553168) | more than 4 years ago | (#30617038)

Strongly RESTRICT Code Commenting (2, Insightful) omb (759389) | more than 4 years ago | (#30617052)!

I can answer that one! (1) Zero__Kelvin (151819) | more than 4 years ago | (#30617072) The reality is people might know more than you, and you are assuming that you are an uber-programmer when you are not! Maybe the programmer is more qualified than you to write Python (the language the example is in in TFA) and uses pylint. Maybe he doesn't want to sift through 272 warning messages to see which are important and which are superfluous, so he is consistent. Maybe consistency is a good thing in the software domain. Maybe he also wants every member method to be consistently documented, since docstrings are tied in memory to the object and available via help(object). I am encouraged by most of the comments about comments here on Slashdot thus far. Most people seem to get why comments are a good thing. The author doesn't write them off completely of course, but he is not qualified to write this article either.
There is just too much he doesn't yet understand, and he would do well to ask these questions of a qualified software engineer rather than posing them as rhetorical, since in a lot of cases he doesn't know the answers but mistakenly thinks he does, as I have shown. There is a reason why language designers pretty much universally include a comment mechanism, and why that mechanism has been trending toward the docstring / formal approach away from the "willy nilly" techniques of old rather than removing the ability to comment completely. Useful comment: (1) Albert Sandberg (315235) | more than 4 years ago | (#30617108) // Message below is written by an idiot, jsyk This guy is WRONG. (1) sootman (158191) | more than 4 years ago | (#30617110) "documenting something that needs no documentation is universally a bad idea" -- yeah, because everyone in the world has the exact same experience and knowledge, so something that's obvious to one person is obvious to the rest of the world and vice-versa. And nobody ever changes, learns new things, or uses things that are obvious now but will be completely forgotten after five years of not using that particular construct. This guy's opinions are worth less than the comments he eschews. compares to all professions (4, Insightful) DaveGod (703167) | more than 4 years ago | (#30617158) (2, Interesting) BigBadBus (653823) | more than 4 years ago | (#30617190) Knuth (1) SEWilco (27983) | more than 4 years ago | (#30617192) Don't listen to this crap (2) MacGyver2210 (1053110) | more than 4 years ago | (#30617218) I stopped reading at this point, because the author is clearly a terrible programmer and not someone I want to take coding/commenting advice from. If you do comments right, which this guy clearly doesn't, they can save your ass repeatedly and avoid having to go to deeper, harder-to-reach documentation to make it work. 
Clearly not someone with any real programming experience behind this 'article' (although it looks more like a twitter feed).
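One concrete claim in the thread above is worth illustrating: Python docstrings are attached to the object itself and retrievable at runtime via help() or __doc__, which is one reason structured documentation ages better than ad-hoc comments. Here is a minimal sketch; the function and its contents are my own invented example, not code from the thread:

```python
def moving_average(values, window):
    """Return the simple moving averages of `values` over `window` elements.

    The docstring travels with the function object, so readers (and tools)
    can retrieve it at runtime instead of hunting through the source.
    """
    if window <= 0:
        raise ValueError("window must be positive")
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]

# The documentation is attached to the object itself:
print(moving_average.__doc__.splitlines()[0])
print(moving_average([1, 2, 3, 4], 2))  # [1.5, 2.5, 3.5]
```

Calling help(moving_average) in an interactive session prints the same docstring, which is the mechanism the commenter above was referring to.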
Input and Output in Python

In this article, I am going to discuss Input and Output in Python with examples. Please read our previous article where we discussed Operators in Python with examples. At the end of this article, you will understand the following pointers in detail.

- Why should we learn about input and output?
- Input and output
- Converting from string type into other types
- eval() function
- Command line arguments
- IndexError
- len() function
- Multiple programs to understand the above concepts

Why the Input and Output chapter?

In the previous chapters, all the coding examples had values hard coded within the code itself. Let's have a look at the example below, which checks whether a person is a teenager or not.

Example: Hard coded value in a variable

age = 16
if age >= 13 and age <= 19:
    print("Teenage")

Output: Teenage

In the above program, we have hard coded the age value as 16. If we want to check for other ages, then we need to open the file, change the value of age and then execute it again. This is not good practice. In real applications, passing the values to the variables at runtime, i.e. dynamically, is good practice. So it is important for us to know the functions or methods available in Python for taking input during runtime.

INPUT and OUTPUT in Python

A predefined function input() is available in Python to take input from the keyboard during runtime. This function takes a value from the keyboard and returns it as a string. Based on the requirement, we can convert the string to other types.

Example: input() method

name = input("Enter the name: ")
print("You entered name as: ", name)

Output:

Example: Checking the return type of the input() method

name = input("Enter the name: ")
age = input("Enter the age: ")
print("You entered name as: ", name)
print("You entered age as:", age)
print(type(age))

Output:

In the above example, the type of age is string, not integer.
If we want to use it as an integer, then we need to do a type conversion on it and then use it. Let's understand that through our teenager program.

Example: Checking the return type of the input() method

age = input("Enter the age: ")
if int(age) >= 13 and int(age) <= 19:
    print("Teenage")

Output:

Convert to other data types from string

We can convert the string to other data types using some inbuilt functions. We shall discuss all the type conversions in later chapters. As far as this chapter is concerned, it's good to know the conversion functions below.

string to int – int() function
string to float – float() function

Example: Converting string type to int

age = input("Enter your age: ")
print("Your age is: ", age)
print("age type is: ", type(age))
x = int(age)
print("After converting from string to int your age is: ", x)
print("now age type is: ", type(x))

Output:

Example: Converting string type to float

salary = input("Enter your salary: ")
print("Your salary is: ", salary)
print("salary type is: ", type(salary))
x = float(salary)
print("After converting from string to float your salary is: ", x)
print("now salary type is: ", type(x))

Output:

Example: Taking an int value at run time

age = int(input("Enter your age: "))
print("Your age is: ", age)
print("age type is: ", type(age))

Output:

Example: Taking two int values and printing their sum

x = int(input("Enter first number: "))
y = int(input("Enter second number: "))
print("Sum of two values: ", x+y)

Output:

eval() function in python

This is an in-built function available in Python which takes a string as input. The strings which we pass to it should, generally, be expressions. The eval() function takes the expression in the form of a string, evaluates it and returns the result.
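For instance, the behavior of eval() is easy to verify directly. A small self-contained sketch (the particular expressions are my own; note that eval() will execute whatever expression it is given, so it should never be applied to untrusted input):

```python
# eval() parses a string as a Python expression and returns its value.
print(eval("10 + 10"))    # 20
print(eval("10 * 10"))    # 100
print(eval("0 and 10"))   # 0
print(eval("10 / 10"))    # 1.0
print(eval("10 // 10"))   # 1
print(eval("10 >= 10"))   # True

# Caution: never call eval() on untrusted input, since it can execute
# arbitrary expressions.
```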
Examples,

- eval('10 + 10') → 20
- eval('10 * 10') → 100
- eval('10 and 10') → 10
- eval('10 or 10') → 10
- eval('0 and 10') → 0
- eval('10 or 0') → 10
- eval('10 / 10') → 1.0
- eval('10 // 10') → 1
- eval('10 >= 10') → True

Example: eval() function examples

sum = eval(input("Enter expression: "))
print(sum)

Output:

Example: eval() function examples

output = eval(input("Enter expression: "))
print(output)

Output:

Command Line Arguments in Python:

The command which we are using for running the Python file in all the examples in this tutorial is "python filename.py". We execute this command through the command prompt/terminal. The script is invoked with this command and the execution starts. While invoking the script we can pass arguments to it, which are called command line arguments. Arguments are nothing but parameters or values for the variables. The command with the arguments will look like:

python filename.py 10 20

Our command is 'python', and 'filename.py', '10' and '20' are arguments to it. In order to use these arguments in our script/code we should do the following import:

from sys import argv

After including the above import statement in our file, the arguments which we pass from the command are available in an attribute called 'argv'. argv will be a list of elements which can be accessed using indexes.

- argv[0] will always be the filename.py
- argv[1] will be 10 in this case
- argv[2] will be 20 in this case

Example: Command line arguments in python demo11.py

from sys import argv
print(argv[0])
print(argv[1])
print(argv[2])

Command: python demo11.py 30 40

Output:

IndexError in Python:

If we are passing 2 elements as arguments to the script (excluding the default argument filename), then the script has three arguments passed to it. But in the script, if we try to access the 10th argument with argv[10], then we get an IndexError.
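In practice you usually want to guard against this error rather than let the script crash. The following sketch is my own example, not one of the tutorial's numbered demos; the argument values are hypothetical stand-ins for what the shell would pass:

```python
import sys

# Hypothetical command-line arguments, standing in for
# "python demo.py 30 40".
sys.argv = ["demo.py", "30", "40"]

# Option 1: check the length before indexing.
if len(sys.argv) > 10:
    print(sys.argv[10])
else:
    print("argument 10 was not supplied")

# Option 2: catch the IndexError.
try:
    print(sys.argv[10])
except IndexError:
    print("IndexError: no such argument")
```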
We can understand it from demo12.py.

Example: Command line arguments demo12.py

from sys import argv
print(argv[10])

Command: python demo12.py 30 40

Output:

Note: Command line arguments are taken as string elements, which means argv[0], argv[1] etc. return string type. ('argv' is a list as mentioned above and each individual element in it will be a string.) Based on the requirement we can convert from string type to another type.

Example: Command line arguments demo13.py

from sys import argv
first_name = argv[1]
last_name = argv[2]
print("First name is: ", first_name)
print("Last name is: ", last_name)
print("Type of first name is", type(first_name))
print("Type of last name is", type(last_name))

Command: python demo13.py Ram Pothineni

Output:

Example: Command line arguments demo14.py

from sys import argv
item1_cost = argv[1]
item2_cost = argv[2]
print("First item cost is: ", item1_cost)
print("Second item cost is: ", item2_cost)
print("Type of item1_cost is : ", type(item1_cost))
print("Type of item2_cost is : ", type(item2_cost))
total_items_cost = item1_cost + item2_cost
print("Total cost is: ", total_items_cost)

Command: python demo14.py 111 223

Output:

Example: Command line arguments demo15.py

from sys import argv
item1_cost = argv[1]
item2_cost = argv[2]
x = int(item1_cost)
y = int(item2_cost)
print("First item cost is: ", x)
print("Second item cost is: ", y)
print("Type of item1_cost is : ", type(x))
print("Type of item2_cost is : ", type(y))
total_items_cost = x + y
print("Total cost is: ", total_items_cost)

Command: python demo15.py 111 223

Output:

In the demo14.py example, the requirement is to take the cost of the items and return their sum. For adding the two values we have used the '+' operator, which adds the two operands if they are integers and concatenates the two operands if they are strings. Since argv[1] and argv[2] are taken as strings, the output is the concatenation of the two strings '111' and '223', i.e. '111223'.
In demo15.py, we see that argv[1] and argv[2] are converted to integer types and then the '+' operator is applied on them. Hence the output in demo15.py is 334, i.e. 111+223.

len() function in python:

len() is an in-built function available in Python which can be applied on certain data types like lists, tuples, strings etc. to know how many elements are present in them. Since 'argv' is a list, we can apply the len() function on it to know how many arguments have been passed.

Example: Command line arguments with the len() function in python demo16.py

from sys import argv
print("The length of values :", len(argv))

Command: python demo16.py 10 20 30

Output:

Note: By default, space is the separator between two arguments while passing. If we want a space in the argument itself, then we need to pass that argument enclosed in double quotes (not in single quotes).

Example: Command line arguments demo17.py

from sys import argv
print(argv[1])

Command: python demo17.py hello good morning

Output: hello

Example: Command line arguments demo18.py

from sys import argv
print(argv[1])

Command: python demo18.py "hello good morning"

Output: hello good morning

In the next article, I am going to discuss Control Flow Statements in Python. Here, in this article, I try to explain Input and Output in Python. I hope you enjoy this Input and Output in Python article. I would like to have your feedback. Please post your feedback, question, or comments about this article.
Ticker

Ticker class hierarchy

Use the Ticker interface to set up a recurring interrupt; it calls a function repeatedly, at a specified rate. You can create any number of Ticker objects, allowing multiple outstanding interrupts at the same time. The function can be a static function, a member function of a particular object or a Callback object.

Warnings and notes

No blocking code in ISR: avoid any call to wait, infinite while loops or blocking calls in general.

No printf, malloc or new in ISR: avoid any call to bulky library functions. In particular, certain library functions (such as printf, malloc and new) are not re-entrant, and their behavior could be corrupted when called from an ISR.

While an event is attached to a Ticker, deep sleep is blocked to maintain accurate timing. If you don't need microsecond precision, consider using the LowPowerTicker class instead because that does not block deep sleep mode.

Ticker class reference

Ticker hello, world

Try this program to set up a Ticker to repeatedly invert an LED:

#include "mbed.h"

Ticker flipper;
DigitalOut led1(LED1);
DigitalOut led2(LED2);

void flip() {
    led2 = !led2;
}

int main() {
    led2 = 1;
    flipper.attach(&flip, 2.0); // the address of the function to be attached (flip) and the interval (2 seconds)

    // spin in a main loop. flipper will interrupt it to call flip
    while(1) {
        led1 = !led1;
        wait(0.2);
    }
}

Ticker examples

Use this example to attach a member function to a ticker:

#include "mbed.h"

// A class for flip()-ing a DigitalOut
class Flipper {
public:
    Flipper(PinName pin) : _pin(pin) {
        _pin = 0;
    }
    void flip() {
        _pin = !_pin;
    }
private:
    DigitalOut _pin;
};

DigitalOut led1(LED1);
Flipper f(LED2);
Ticker t;

int main() {
    // the address of the object, member function, and interval
    t.attach(callback(&f, &Flipper::flip), 2.0);

    // spin in a main loop. flipper will interrupt it to call flip
    while(1) {
        led1 = !led1;
        wait(0.2);
    }
}
Install and create a custom build of the Openlayers library for an Ember app.

This is an Ember CLI addon for managing a custom build of the Openlayers library. To install the addon, just execute the following in the root of your Ember app:

ember install ember-cli-openlayers-builder

This will install the addon itself and the Openlayers library into your application. The .ol-build config file will also appear in the root directory of your app. After that, just run the app in the regular way (ember serve) and it will also build Openlayers according to the config file.

You can configure the Openlayers builder by modifying the .ol-build config file. In general it extends the default Openlayers build config. It has only one property of its own, extend: true, which defines whether the config should be extended from the default one, or treated as written from scratch without filling in sections from the default config. In most cases you would just want to define your own exports section, providing only the namespaces you use in your app. This makes it possible to minimize the library size. After the next application rebuild the configuration will be applied and Openlayers will be rebuilt.

Openlayers is available in the global namespace, so you can access it everywhere in the app via ol.*.

The addon provides some useful extra features that may simplify debugging. To manually rebuild Openlayers use:

ember g openlayers-build

This will rebuild the library right away. You can also provide a directory in which to store the output file. This can be helpful if you'd like to see the compiled library:

ember g openlayers-build /path/to/output/dir/

If the compilation result is not what you expected, perhaps something went wrong during the merging of the configuration files. In order to see the final config that is used by the Openlayers compiler, just execute the following:

ember g openlayers-build-config

This will dump the merged config file to the console.
If you would prefer to save a config to a file, just provide its path as an argument: ember g openlayers-build-config /path/to/config/file.json
ioeric added a comment. Thanks for the review! ================ Comment at: clangd/index/Index.h:268 + virtual bool + getSymbol(const SymbolID &ID, + llvm::function_ref<void(const Symbol &)> Callback) const = 0; ---------------- sammccall wrote: > sammccall wrote: > > sammccall wrote: > > > Can we make this a bulk operation (take an arrayref<SymbolID> or similar?) > > > > > > There are use cases like augmenting sema-returned results with info from > > > the index where we want a bunch at once. In practice a single bulk > > > operation will be much nicer for an rpc-based index to implement than a > > > single lookup issued many times in parallel. > > > > > > (The callback interface is really nice here, because the underlying RPC > > > can be streaming!) > > For extensibility and uniformity with FuzzyFind, we should consider adding > > a struct around the parameters. > > > > At least one option seems likely to be added here: retrieving the full > > ("detail") symbol vs the basic symbol (particularly for bulk actions). > > Others are less obvious, but could include something like "chase pointers" > > so that if returning a typedef, the target of the typedef would also be > > looked up and returned. > > > `getSymbol` isn't a bad name, but it's a bit hard to talk about without > ambiguity because "get" is so overloaded and everything deals with > "symbols".. (e.g. "this method gets a symbol as a parameter..."). It's also > awkward to use as a noun, which is common with RPCs. > > `lookup` or `fetch` would be specific enough to avoid this. (Dropping > "symbol" from the method name because it's present in the interface name). > WDYT? Makes sense. I wasn't really sure about `getSymbol` and wanted your thought. Going with `lookup`. ================ Comment at: unittests/clangd/IndexTests.cpp:93 +std::string getQualifiedName(const Symbol &Sym) { + return (Sym.Scope + (Sym.Scope.empty() ?
"" : "::") + Sym.Name).str(); +} ---------------- sammccall wrote: > Symbol;Scope is already e.g. "ns::", this shouldn't be needed. This wasn't true in the tests. Fixed. ================ Comment at: unittests/clangd/IndexTests.cpp:112 + std::string Res = ""; + I.getSymbol(ID, [&](const Symbol &Sym) { Res = getQualifiedName(Sym); }); + return Res; ---------------- sammccall wrote: > check the return value This was intended. If no symbol was found, `Res` would be empty by default. No longer relevant in the new revision. Repository: rCTE Clang Tools Extra _______________________________________________ cfe-commits mailing list cfe-commits@lists.llvm.org
Function ecs_ensure_id

Synopsis

#include <include/flecs.h>

FLECS_API void ecs_ensure_id(ecs_world_t *world, ecs_id_t id)

Description

Same as ecs_ensure, but for (component) ids. An id can be an entity or pair, and can contain id flags. This operation ensures that the entity (or entities, for a pair) are alive. When this operation is successful it guarantees that the provided id can be used in operations that accept an id. Since entities in a pair do not encode their generation ids, this operation will not fail when an entity with non-zero generation count already exists in the world. This is different from ecs_ensure, which will fail if attempted with an id that has generation 0 and an entity with a non-zero generation is currently alive.

Parameters

- world – The world.
- id – The id to make alive.

Source

Line 2233 in include/flecs.h.
Suppose Bob hits his shots with probability .55 when he is hot, which happens 13% of the time, and with probability .45 when he is not hot. Suppose Lisa can perfectly detect when he is hot, and when he is not. If Lisa predicts based on her perfect ability to detect when Bob is hot, what correlation would you expect? With that setup, I could only assume the correlation would be low. I did the simulation:

> n <- 10000
> bob_probability <- rep(c(.55,.45),c(.13,.87)*n)
> lisa_guess <- round(bob_probability)
> bob_outcome <- rbinom(n,1,bob_probability)
> cor(lisa_guess, bob_outcome)
[1] 0.06

Of course, in this case I didn't even need to compute lisa_guess as it's 100% correlated with bob_probability. This is a great story, somewhat reminiscent of the famous R-squared = .01 example.

P.S. This happens to be closely related to the measurement error/attenuation bias issues that Miller told me about a couple years ago. And Jordan Ellenberg in comments points to a paper from Kevin Korb and Michael Stillwell, apparently from 2002, entitled "The Story of The Hot Hand: Powerful Myth or Powerless Critique," that discusses related issues in more detail.

The point is counterintuitive (or, at least, counter to the intuitions of Gilovich, Vallone, Tversky, and a few zillion other people, including me before Josh Miller stepped into my office that day a couple years ago) and yet so simple to demonstrate. That's cool. Just to be clear, right here my point is not the small-sample bias of the lagged hot-hand estimate (the now-familiar point that there can be a real hot hand but it could appear as zero using Gilovich et al.'s procedure) but rather the attenuation of the estimate: the less-familiar point that even a large hot hand effect will show up as something tiny when estimated using 0/1 data. As Korb and Stillwell put it, "binomial data are relatively impoverished." This finding (which is mathematically obvious, once you see it, and can be demonstrated in 5 lines of code) is related to other obvious-but-not-so-well-known examples of discrete data being inherently noisy.
One example is the R-squared=.01 problem linked to at the end of the above post, and yet another is the beauty-and-sex-ratio problem, where a researcher published paper after paper of what was essentially pure noise, in part because he did not seem to realize how little information was contained in binary data. Again, none of this was a secret. The problem was sitting in open sight, and people have been writing about this statistical power issue forever. Here, for example, is a footnote from one of Miller and Sanjurjo’s papers: Funny how it took this long for it to become common knowledge. Almost. P.P.S. I just noticed another quote from Korb and Stillwell (2002): Kahneman and Tversky themselves, the intellectual progenitors of the Hot Hand study, denounced the neglect of power in null hypothesis significance testing, as a manifestation of a superstitious belief in the “Law of Small Numbers”. Notwithstanding all of that, Gilovich et al. base their conclusion that the hot hand phenomenon is illusory squarely upon a battery of significance tests, having conducted no power analysis whatsoever! This is perhaps the ultimate illustration of the intellectual grip of the significance test over the practice of experimental psychology. I agree with the general sense of this rant, but I’d add that, at least informally, I think Gilovich et al., and their followers, came to their conclusion not just based on non-rejection of significance tests but also based on the low value of their point estimates. Hence the relevance of the issue discussed in my post above, regarding attenuation of estimates. It’s not just that Gilovich et al. found no statistically significant differences, it’s also that their estimates were biased in a negative direction (that was the key point of Miller and Sanjurjo) and pulled toward zero (the point being made above). Put all that together and it looked to Gilovich et al. like strong evidence for a null, or essentially null, effect. P.P.P.S. 
Miller and Sanjurjo update: A Visible Hand? Betting on the Hot Hand in Gilovich, Vallone, and Tversky (1985).

I did a simulation like this when I was writing about the hot hand, with similar results. Korb and Stillwell did a similar test here:

Jordan: Thanks for the link which was indeed relevant. See P.S. added above.

Andrew, your P.S. is incorrect. Korb & Stillwell have some nice coverage of the statistical power issues, but they do not perform this calculation. The example you discuss in this post was one we made in reference to Gilovich, Vallone and Tversky's betting task, p. 308-309 and Table 6 on p. 310, here. We created a version of this example, and re-analyzed GVT's betting data on p. 24-25 of our paper (here, current version: November 15, 2016). Korb & Stillwell do not correlate predictions of outcomes with outcomes, or run a hidden Markov chain with a predictor.

Andrew, thanks for the update on the P.S. One last thing: the power issues which Korb & Stillwell cover are different from our point here. Our calculation is about how to evaluate beliefs/prediction/betting data, i.e. it is about the hot hand fallacy, not the hot hand effect. Of course our little example is related to the measurement error story you discuss, and we actually yoked this example to one of our simulation studies of measurement error (and power) from the appendix of our 2014 "Cold Shower" paper, where we used a hidden Markov model data generating process. (The first discussion of measurement error and the hot hand, using autocorrelation as the example, was in Dan Stone's 2012 AmStat paper.)

Update: we have a new short paper describing this issue in more detail, with a more complete set of simulations to show how the correlation depends on how big the hot hand is, its frequency, and the predictor's ability.
In particular, we write: The reason why this correlation measure is expected to lead to a surprisingly low underestimate of prediction ability is closely related to Stone (2012)'s work on measurement error and the hot hand. And we have a footnote: In particular, Stone (2012) showed that the serial correlation in hits is expected to be far lower than the serial correlation in the player's hit probability, as a hit is only a noisy measure of a player's underlying probability of success. In this case we have shown that the correlation between "bet hit" and "hit" is expected to be far lower than the correlation between "bet hit" and the player's hit probability, for the same reason.

Thanks, Jordan. We were aware of Korb and Stillwell (and your book), but somehow missed this.

Jordan: Just looked at Korb & Stillwell quickly and it looks like they are making the power point (not to be confused with MS), and the additional point that binomial data is typically underpowered. I do not see them correlating predictions of outcomes with outcomes, or running a hidden Markov chain with predictions.

This is an obviously true observation. But how valuable is it to dissect 30-year-old statistical errors, when there is a mountain of more recent evidence suggesting the hot hand effect is too small to have any practical effect or usefully inform strategic decisions in sports? In the end, we're still left with a vast gap between the popular perception of the hot hand (large) and its reality (small). Perhaps "fallacy" is too strong a word, but belief in the hot hand is certainly a significant cognitive error. Put it this way: a sports decision-maker would make far, far better decisions if they dismissed the hot hand as non-existent than if they accepted the popular belief in its size and importance. I fear that this suddenly fashionable meme, "there really is a hot hand after all!", will have the net effect of reducing rather than enhancing public understanding.
Guy: You write, "There is a mountain of more recent evidence suggesting the hot hand effect is too small to have any practical effect or usefully inform strategic decisions in sports . . . a sports decision-maker would make far, far better decisions if they dismissed the hot hand as non-existent than if they accepted the popular belief in its size and importance." I'm not quite sure it makes sense to speak of "the popular belief in its size and importance" given that these beliefs vary; indeed one popular belief is that there is no hot hand at all! I agree with your larger point that what is relevant is the magnitude of the phenomenon and how it works, rather than its mere existence or nonexistence. The point of the above post is that you're in for all sorts of trouble if you try to estimate this magnitude using sequences of 1's and 0's.

Naive question: So I guess I don't understand the "hot hand" question in the first place. Obviously, no one is arguing that a basketball player is a random number generator, right? So, obviously, some days I play better, other days I don't. If we declare the hot-hand theory disproven, does that mean we can never predict the outcome of a player's next shot with anything more than strictly random accuracy? What's a compact, canonical statement of the hot-hand hypothesis?

Rahul: Korb and Stillwell write, "It is not entirely clear what the Hot Hand phenomenon is generally supposed to be, nor just what GVT intend us to understand by it." Roughly, I think the claim made by Gilovich et al. and their followers was not that players are random number generators, but that they are statistically indistinguishable from random number generators, so that belief in any evidence for the hot hand was a fallacy.

So why do we spend so much time analyzing a problem that's not even well defined?
My feeling is, we keep going in circles: the effect is tiny to start with, and every time one analyst makes some progress the other side shifts the goalposts ever so slightly. And we keep arguing endlessly. Isn't this a fool's errand?

Rahul: I agree that if the goal here were to determine a winner or loser of the hot hand debate, this would be largely a waste of time. But that's not my goal. There's no winner or loser here. The "game" is to better understand how to use statistical analysis to learn things about the world, and the study of these sorts of simple yet real examples can be, for me and others, a good way to gain intuition and develop new principles. God is in every leaf of every tree.

If one must study the frivolous, at least study the *well-defined* frivolous. :)

It is. The same argument for its being a fool's errand could be made for other (possible) effects such as ESP and Power Pose.

There are several versions of the hot hand hypothesis. The strongest version is that you don't actually have a different p from day to day: you have one underlying p, and your varying performance from day to day is just chance variation around that underlying p. You are correct, though: I don't know anybody who believes the strong version. The weaker version is simply that p(shot made|previous shot made) > p(shot made|previous shot missed). Fancier versions can adjust for lots of other independent variables: p(shot made|previous shot made, X=X1) > p(shot made|previous shot made, X=X2), where X is some vector of independent variables at the time of the shot whose effects are previous-shot-independent.

Thanks. The strong version seems axiomatically stupid: isn't that like saying injuries etc. just don't exist? About the weak version: the first formulation seems nice, p(shot made|previous shot made) > p(shot made|previous shot missed). This ought to be eminently testable. The part that makes this "hot hands" debate farcical is the adjustment part.
Obviously there cannot be universal objective agreement on a canonical X vector. When the effect is small to start with, isn't any "conclusion" of the hot-hands problem merely a specific statement of what you think, subjectively, is the "right" X vector?

I love your formalization: p(shot made|previous shot made) > p(shot made|previous shot missed). So what's the empirical answer to this question, unadjusted for anything? Anyone know?

I'm sure there's a lot more data now, but the original Gilovich paper has the original data in Table 1: p(shot made|previous shot missed) = .54, p(shot made|previous shot made) = .52. This data is for 9 players in 1980-81. A variant of the hot hand hypothesis is that p(shot made|n previous shots made) > p(shot made|previous shot missed). It is this variant that Gilovich tests, and it is this calculation that Miller et al. show to be biased.

Thanks! The other big issue is going to be external validity, e.g. does this finding generalize to 1990-91? Et cetera. Who knows!? Frankly, I think it's a silly problem to study. When the system you are studying is itself so highly variable, it is folly to try to answer questions depending on such small differences as 0.54 vs 0.52.

Rahul: Grrrr…. No, the probabilities are not 0.54 vs. 0.52. They can vary by much more than that! The point is that conditioning on the previous shot is an extremely noisy way to measure anything. The result is to make the apparent differences look small and inconsequential, even if the underlying differences are huge.

Andrew: OK. What's your preferred statement of the problem? Sure, P(make|miss) and P(make|make) are really noisy. But the hot hand would have to include that at a minimum, wouldn't it? P(make|miss), P(make|hit), P(make|2 hits), P(make|3 hits) would be an increasing series under the hot hand hypothesis, and if you can't even pass step one (which has more data than any other conditional in the series), what chance do you have later?
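Part of the reason "step one" is so unreliable is a selection bias in the conditional proportion itself. A quick simulation (a sketch of the bias Miller and Sanjurjo describe, with 100-shot sequences and a fair-coin shooter as illustrative assumptions) shows that the average within-sequence proportion of hits following three straight hits sits well below 0.5 even when there is no hot hand at all:

```python
import numpy as np

rng = np.random.default_rng(1)
n_seq, n_shots = 20_000, 100

props = []
for _ in range(n_seq):
    shots = rng.random(n_shots) < 0.5          # a fair-coin "shooter", no hot hand
    idx = [i for i in range(3, n_shots)
           if shots[i - 3] and shots[i - 2] and shots[i - 1]]
    if idx:                                    # shots preceded by 3 straight hits
        props.append(shots[idx].mean())

print(round(float(np.mean(props)), 2))         # noticeably below 0.5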
If nothing else, wouldn't the set of players with hot hands (assuming there were any such things) be MUCH more likely to have P(hit|make) > P(hit|miss)? Obviously figuring out whether or not there are any such players is the essence of the exercise, of course, but wouldn't you be really suspicious of spurious fitting if you found that the hot hand only starts after you'd hit 4 in a row and that the fifth shot was really really likely, but that the first make was negatively correlated with the second?

Jonathan: Regarding your statement, "you can't even pass step one," please look in the above post and in the link in the P.P.S. to get a sense of why it does not make sense to take this apparent failure as evidence that there's no hot hand, or even as evidence that any hot hand phenomenon is small. What we've learned from looking at this problem is that even a large hot hand phenomenon can fail to show up when studied in this crude manner.

OK… I read the Korb and Stillwell paper and frankly, the result is completely inappropriate to the serial correlation analysis. What they test is a case where, in a sequence of 10 shots, the probability goes up only after the fifth shot and stays there. OF COURSE that will be an incredibly weak test of whether making the previous shot increases the chance of making the next one: 9/10ths of the time it doesn't! The fact that the serial correlation test gets nearly twice the "significant" results as the null suggests that this test is really quite good! I'm going to make a simulation that I think will show this all much better….

Rahul: There are different levels of discussion. If you want to stick with the literature, the consensus view was the "cognitive illusion" view. The original paper concluded that shots were as-if randomly generated (as Andrew said), and this was the accepted consensus, e.g. here, and this is how papers cited this result, e.g. the first two paragraphs here. Researchers were committing a fallacy here.
For your compact canonical statement, if we again want to tie our hands to this literature, well, the original paper discussed the hot hand (and streak shooting) in terms of patterns, rather than in terms of process. In that paper the hot hand is when the length and frequency of streaks are greater than one would expect by chance (see here, p. 296-297), or the probability of success increases after recent success. If you look at how lay players and coaches discuss it, they use the words "zone", "rhythm", and "flow", so clearly they are talking about the probability of success increasing, for whatever reason.

But isn't this vulnerable to the same sort of "p-hacking" that Andrew criticizes? I.e., first we look at the previous shot. Not much to see, so we then switch to screening n previous shots. Or tweak the streak definition. Having done this, we can try to control for some vector X. Try other variations of X. Next comes the huge degree of freedom offered by which year and which players to analyze. Et cetera. Sure, you'll discover some zone or rhythm somewhere. Now is this a phenomenon or an artifact?

Rahul: on p-hacking, there are a few ways to know that it is not:
1. The 3-back measure is the one all previous studies use, and what fans define as a "streak," so it isn't chosen out of thin air (it's significant with 2-back and 4-back, but too sparse with 5+ back).
2. The 3-back measure works out-of-sample on all the data we collected: a. the 3-point shooting contest, b. our own controlled shooting study, and c. a little-known study from the late 70s.
3. One can control for the multiple comparisons issues involved with using other measures (length of longest streak, frequency of streaks) by constructing composite measures.

Agreed that there are many different beliefs about the existence/size of the hot hand. My sense, just from talking to sports fans and listening to almost any game broadcast, is that many (perhaps even most) fans believe in a strong hot hand effect.
To be more concrete, I think they believe that a 50% NBA shooter becomes something like a 60% or 70% shooter when "hot," and that a .280 hitter becomes a .350 or better hitter when hot. And my sense (but I offer this tentatively, as I don't know this literature well) is that academic research on *belief* in the hot hand is consistent with my intuition. But if the research shows a much more limited public belief in hot hand effects, I will happily revise my view.

Guy: We were thinking more about the beliefs of players and coaches, i.e. the decision makers. Right or wrong, we were less concerned about the "public understanding" you mention. Fans say all sorts of things; it's not even clear what they mean. For example, people don't know the difference between a 60% chance of rain and an 80% chance of rain: in either case, if it doesn't rain, the weather forecaster is wrong. For another example, look how people were interpreting betting odds in the previous election.

Guy: If you define the hot hand as the statistical estimate of the increase in field goal percentage after recent success for the *average* player in an unbalanced panel with partial controls, then, yes, if someone were to believe this effect to be large, they would be making a significant cognitive error. But in order to test whether people's beliefs are wrong, you have to first be clear on what their beliefs pertain to. Further, in all controlled (and semi-controlled) tests, the increase in field goal percentage after a recent streak of success is meaningfully large: 5-13 percentage points, depending on the study (with the highest estimate in the original study). If one defines the hot hand as the change in a player's probability of success when in the hot state, or more realistically, with continuous states, as the range over which a player's probability of success varies (controlling for difficulty), the "mountain of more recent evidence" can't say much (I assume we are referring to the same mountain).
We should have a little humility about what we can measure here. Where the data cannot speak, should we not defer to the practitioners?

What you bring up in your final two sentences may be on point, though. If players and coaches had to choose between two blanket dumb heuristics, (a) always believe in the hot hand, or (b) never believe in the hot hand, then the latter may be the safer bet, depending on the strength of the typical reaction. This ignores, of course, the possibility that the coach or player has more granular information than the zeros and ones we are looking at, and can respond more judiciously. We agree, "the hot hand exists" vs. "the hot hand doesn't exist" is a terrible way to think of it; the binary formulation is what leads to blanket heuristic thinking.

Josh: While I do believe that ignoring the hot hand would be more accurate than accepting the typical fan's beliefs, those obviously aren't the only choices. So the more interesting question (to me) is whether the hot hand is ever large enough to be of any practical significance in sports decision-making. For example, should a team pitch differently to a "hot" hitter (or use a hot hitter as a pinch hitter when he would not otherwise be used)? Or should a team give more shots to a "hot" NBA shooter, and should their opponent assign a superior defensive player to guard that "hot" shooter? To me, this is what it would mean for the hot hand to be "real" (while acknowledging that weaker hot hand effects might exist). I have yet to see any compelling evidence for a hot hand that meets this test. If an NBA player did in fact improve his shooting accuracy by as much as 13 percentage points for a period of time, that would easily meet my test of "sports significance." Or it would, IF this hotness could be detected in real time. I find it hard to believe that such large short-term changes in true ability actually occur.
And it's even harder to imagine how such talent changes could be measured with sufficient confidence to act on the information in real time.

Guy: You say: "I have yet to see any compelling evidence for a hot hand that meets this test."
-For game data, those are tough questions. We know of no evidence for or against using game data. But then again, players and coaches make all sorts of decisions in the course of the game that cannot be backed up with data-driven evidence. Shouldn't we defer to them until we have the data to address it? (Assuming they aren't using blanket one-size-fits-all heuristics.) Side note: we do have evidence that bettors in GVT's study predict better than they would by guessing randomly (and a dumb hot hand heuristic would also do better).

You say: "I find it hard to believe that such large short-term changes in true ability actually occur."
-Well, 13pp was an average FG% effect in GVT's Cornell study, so I'd bet some players *sometimes* get much more than that for their probability of success, in controlled settings. Also, remember we are talking about probability here, so you aren't going to pick this up with FG% based on recent shot outcomes, which is all anyone ever measures. This means you can't use existing data to inform your beliefs on the magnitude. The thing exists. Is it modest in all players? Huge in some players, tiny in most players? Well, it could be 40pp big in some players and you would never see it in the data given the way it is currently analyzed.

You say: "And it's even harder to imagine how such talent changes could be measured with sufficient confidence to act on the information in real time."
-Proprioception?

Where did the .13 and .87 come from? The result depends on the probability that Bob has a hot hand. If he has a hot hand half the time, the correlation peaks at around 0.1, and it declines symmetrically around 0.5. By the way, I found that 10,000 replicates give very noisy estimates.
You really want to run this with 1,000,000 reps or even more. None of this, of course, disputes the qualitative point that even if Lisa always knows exactly when Bob has a hot hand, the correlation between her prediction and his outcome will be pretty low.

Clyde: It looks like I was assuming that the player was hot 13% of the time, but I don't at all remember where that 13% came from!

We just picked 13% because that is the fraction of shots that would be preceded by a streak of 3 successes, for a Bernoulli p=.5 shooter. In the paper, we used 15% of the time, with the idea that it should be relatively rare.

Joshua, Clyde: in any case, 13%, 15% or even 50% doesn't drastically affect the main point. Even with p_hot = 0.5, a simulation still predicts an extremely low correlation:

```r
set.seed(512)
n <- 10000
bob_probability <- rep(c(.55, .45), c(.5, .5) * n)
lisa_guess <- round(bob_probability)
bob_outcome <- rbinom(n, 1, bob_probability)
cor(lisa_guess, bob_outcome)
[1] 0.09700044
```

Nice observation, Andrea. The reason for this is interesting, and it comes right out of what we do in our paper. Our example, which Andrew quoted (it appears on p. 24-25 of the present version: November 15, 2016), answers essentially the following question: what if there were zero measurement error? Lisa has no measurement error. Now an implication of what Andrew noted about correlation is that corr(Bob's state, hit) = corr(Lisa's prediction, hit). Note that if we set up a least squares estimate of hit = a + b*[Bob's state], then b = cov(Bob's state, hit)/var(Bob's state), while corr(Bob's state, hit) = cov(Bob's state, hit)/(sd(Bob's state)*sd(hit)). If Bob's state is hot 50% of the time, sd(Bob's state) is expected to be close to sd(hit), because the variance of a Bernoulli r.v. is p(1-p), and therefore b is expected to be close to corr(Bob's state, hit). Note that because these variables are binary, b = proportion(hit | Bob's state = h) - proportion(hit | Bob's state = n), and E[b] = ph - pn = 0.10, given the setup.
So the closer sd(Bob's state) is to sd(hit), the closer corr(Bob's state, hit) is expected to be to E[b] = ph - pn = 0.10. So the issue is that a correlation of .10 seems small, but correlation is hard to interpret because it is a dimensionless constant. In this context of binary R.V.s, though, correlation can be related to probabilities, i.e. a difference in probabilities. A difference in probability of 10 percentage points is a lot for a basketball shot.

Too bad I can't edit. Two clarifications/corrections:
1. Let hot = 1 if Bob's state is h, and 0 otherwise. The equation should be hit = a + b*hot, instead of hit = a + b*[Bob's state]. So sd(Bob's state) can be thought of as sd(hot).
2. Correlation is dimensionless; clearly it is not constant.

Why would we want to use the correlation coefficient to assess how binary predictions compare to binary outcomes?

Anon: Information is information. In this case, it doesn't really matter how you make the summary; the point is that there's not much information there.

I agree. When reading this I wondered how it was something I never noticed before, since I like knowing "gotchas" like this. Then I realized it is because I wouldn't calculate a correlation in this situation (or if I ever did, it was quickly dismissed as unhelpful).

+1 It stumped me.

Anon: for why we talked about these correlations, see p. 310 of the original paper, here. Another reason: if predictions of success are made almost as often as success outcomes (it doesn't have to be that close), the correlation coefficient is close to an estimate of Pr(success|predict success) - Pr(success|predict failure).
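That last identity is easy to verify numerically. A small sketch (reusing the 0.55/0.45 probabilities from the example above, with the predictor "hot" half the time) confirms that for binary variables the correlation coefficient lines up with the difference in conditional proportions:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000

hot = rng.random(n) < 0.5                   # predictor says "hot" half the time
hit = rng.random(n) < np.where(hot, 0.55, 0.45)

r = np.corrcoef(hot.astype(float), hit.astype(float))[0, 1]
diff = hit[hot].mean() - hit[~hot].mean()   # Pr(hit|predict hot) - Pr(hit|predict not)
print(round(r, 2), round(diff, 2))
```

Both quantities land near 0.10, matching E[b] = ph - pn above: when the prediction rate is close to the success rate, the "small" correlation is just the difference in conditional hit probabilities in disguise.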
Here is the simulation translated to Python in case someone finds it useful:

```python
import numpy as np
import scipy
import scipy.stats

n = 10000
h_n_prob = np.array([.55, .45])
h_n_count = np.array([.13, .87]) * n
bob_probability = np.concatenate(
    [np.full(int(h_n_count[i]), h_n_prob[i]) for i in [0, 1]])
lisa_guess = bob_probability.round()
bob_outcome = [np.random.binomial(1, p) for p in bob_probability]
scipy.stats.pearsonr(lisa_guess, bob_outcome)[0]
```

+1 Go fellow Pythonista!

As a big simulation fan, I like the fact that your first reaction was to get R to run it 10,000 times rather than start messing about with algebra.

One of the things I find interesting here is how this plays out in other areas, where the difficulty of getting evidence against something causes that thing to persist forever. And in these other cases, it's clearly a lot more consequential. For example:
1) Acupuncture making people feel better based on the accurate placement of needles
2) Tamiflu reducing duration and severity of flu
3) Statins protecting against heart disease
4) Mammograms reducing breast cancer mortality
We've made a bit of progress on some of these, perhaps. But the point is, you have a noisy problem where even if you can predict an underlying issue perfectly, the observed outcomes provide not much information about that fact… and you combine this with a default assumption along the lines of "X works" or "X doesn't work" and it leads us to decades of confusion, wasted resources, and in the medical case potentially unneeded suffering.

For amusement: I was watching Barcelona yesterday and Messi got the ball, put it through the 1st defender's legs, then tapped it to the left as he hopped around the next defender, then tapped it to the right as he hopped to get around the next defender, then cut it back to the goal around a 4th defender, put a shot on the goalie that couldn't be held, and Suarez slammed the rebound home.
My point: these little slices are clearly hot hand moments. Lionel gets into a zone of intelligent, instinctive athletic reaction which only a few players can reach, and he does this relatively often, but it's impossible to say when it will happen, which is one reason why we watch: when will Messi or Neymar or Suarez do something so powerfully beautiful? Or maybe they won't at all today.

Next point: we look at averages and totals, and those include within them moments of good and bad play, and maybe we go back and remember the high points and some low points (Ralph Branca), but those tend to be visible only in retrospect, and our expectations matter a lot: we expect Alabama will crush a Div 2 opponent in football, or that Peyton Manning would kill a team that blitzed all the time. In more complex situations, though we can record completion/incompletion, the context in which that occurs would, as noted, require a host of models matched to each situation, and that has additional problems. As in: does Steph shoot better in the 4th quarter when the game is within 5 points? But then does it matter if the other team has a guard who can defend the perimeter, or whether the defenders played the night before, and so on? Life is very complicated.

Vinnie Johnson was called the Microwave because he'd heat up fast, but watching the games I saw a guy who was brought in as an offensive replacement with the team expectation he'd shoot, and many times he would do that, and some of those times he was really good. But other times he'd come in not as an offensive replacement and he wouldn't shoot, or he'd shoot and miss and maybe not shoot three in a row. And it was obvious at times, as a Pistons fan, that the coaches would put Vinnie in to shoot sometimes and would run plays for him to shoot, making it look like he was on fire if he hit the first 2 or 3 shots. Sensibly, they'd put him in because they thought those plays would work, so maybe that would reflect the coach's hot hand!
The Celtics would intentionally run plays for Robert Parrish early because, as they've described in print, they felt he'd put more effort into the game if he shot early. That made it look to fans like the Chief was more of an offensive player, but really it was that they ran those few plays for him early and, crucially, we'd remember that he hit those few early jump shots in some games. They'd then run plays for him later in the game when those plays were the best option, which, given basketball scoring, might depend on who was defending, who wasn't on the floor for the Celtics, etc. I don't know how anyone could separate actual hot hands from this kind of complexity given simple it-went-in-or-it-didn't stats.

Transient effects are really interesting. Would Pete Reiser have been a great player if he hadn't crashed into outfield walls? Would Lady Catherine de Bourgh have been a great proficient if she had indeed learned to play? Is Messi better than Ronaldo? In most cases, you can't even argue that totals matter: Babe Ruth played in a different era, was a pitcher for years, etc. Pelé never played in a European league, and I don't think anyone would argue that Josef Bican is actually the greatest soccer player even though he tops the list of goals. Even if we model for competition, the arguments come down to "he hit the ball so incredibly far" or "he did the most spectacular things with a ball".

+1… million

I find this truly confusing… but yet somehow it compels me to try. If I understand it all correctly, a simulation of successes following another success suggests that runs of actual baskets that match or exceed a player's average are less likely to be observed unless something 'special', the hot hand, is happening. The comments by Guy and Josh above were really useful, thank you. But I struggle to understand how the same actor, Bob, in any real-world sense can have two probabilities of shooting a basket.
My guess is that these are the assumed hot-hand and play-as-normal probabilities, which as spectators we observe as a combined total probability. But then I wonder how two separate states exist in the same player… I get stuck on the practicality. So I assume all the cleverness around this is correct, and am left wondering how one identifies a hot-hand state in an actual game. And if this can be done, how does one predict the likelihood of a hot-hand run in a game/season/career *retrospectively*? The example in my mind is the Australian cricketer Donald Bradman, whose stats are just unusually high compared to anyone else. I also found this paper (from a course I did on Sport Analytics) on longest runs interesting (and baffling). It seems to me that this is the same thing, but without a hot-hand factor operating. Or not?

Llewelyn: The model we believe is that the probability of success varies continuously from shot to shot, and that there are times when this probability is higher (when a player is "in the zone") and times when it is lower. The model with two discrete probabilities is a ridiculous oversimplification that we use just to make the point about the estimate being super-noisy. A simulation using a more realistic model would show the same thing.

>>> probability of success varies continuously from shot to shot <<<
This sounds intuitively so obvious that I'd be amazed to see data to the contrary. What's the alternative hypothesis? That a human player might be an ultra-consistent random number generator with no patterns, and no ups and downs?

Rahul: There is no "alternative hypothesis." I don't think that way. The point is to estimate the level of variation and estimate what predicts it. The point of the above post is that such estimation is difficult with 0/1 data, and naive estimates such as those employed by Gilovich, Vallone, and Tversky in their celebrated paper are so biased and so noisy as to not be interpretable in the usual ways.
In my opinion it'd be much better if we didn't frame this as "evidence for (or against) the hot hand" but rather started thinking in terms of "How much does a player's performance vary from day to day, and how well can we predict performance?"

With regard to the bias found in Miller's paper: how does this extend to outcomes that are not binary? For example, if I want to know the expected value of a die roll following a roll of 6, will this be biased?

Matt: The answer's right in front of you, on your computer. Do a simulation in R or Python and find out!

Fair enough! So doing 100,000 reps of a sequence of N=100 of a random variable X that takes on 1, 2, 3, 4, 5, 6 with prob 1/6, I get the following: E[X_t|X_t-1=1]=3.52; E[X_t|X_t-1=2]=3.515; E[X_t|X_t-1=3]=3.503; E[X_t|X_t-1=4]=3.492; E[X_t|X_t-1=5]=3.484; E[X_t|X_t-1=6]=3.473. Same mechanism obviously, but still it's incredible that that is true. I'm going to apply this hot-hand stuff to golf data, which is why the non-binary outcomes are relevant.

Yeah, fair enough! With n=100, rep=100,000, X taking on 1, 2, 3, 4, 5, 6 w/ prob 1/6, you get E[X_t|X_t-1=6]=3.47, and it then steadily increases up to E[X_t|X_t-1=1]=3.53. Cool!
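The die numbers above are easy to reproduce. This sketch averages, within each simulated sequence, the rolls that immediately follow a 6; the mean of those per-sequence averages comes out below the unconditional mean of 3.5, for the same selection-bias reason as in the binary case:

```python
import numpy as np

rng = np.random.default_rng(3)
reps, n = 20_000, 100

means = []
for _ in range(reps):
    rolls = rng.integers(1, 7, n)           # one sequence of fair-die rolls, 1..6
    after6 = rolls[1:][rolls[:-1] == 6]     # rolls immediately following a 6
    if after6.size:
        means.append(after6.mean())

print(round(float(np.mean(means)), 2))      # below the unconditional mean of 3.5
```

Note the bias comes from averaging within finite sequences first and then across sequences, which is exactly how the conditional estimates above were computed; pooling all post-6 rolls across sequences into one grand average would give 3.5.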
>>>>> "David" == David Gilbert <address@hidden> writes:

David> Thanks, I missed that somehow. By the way, I'm not using emacs, and the
David> original source has some tabs (which I'm sure I read should be avoided
David> in Classpath), so how far should I indent each line here? Or since you
David> didn't mention it, should I assume that the way I did it is OK?

Yeah, the tabs/spaces thing continues to bug folks who don't use Emacs.
The way it is supposed to work is that the continuation lines of the
expression line up with the expression's start:

    return (x
            + y
            + z);

And in case a mailer somewhere trashes that, each "+" lines up under the "x".

Tom
Binary Search in C++

What is binary search in C++?

Binary search is another searching algorithm in C++, also known as the half-interval search algorithm. It is an efficient and fast searching algorithm. The only condition required is that the elements in the list must be in sorted order. It works by repeatedly dividing in half the portion of the list that could contain the item, until the possible locations have been narrowed down to just one.

Working of binary search in C++

- The algorithm requires the list to be in sorted order, either ascending or descending.
- Divide the list into two halves.
- If the element is at the center, stop searching.
- Else, if the element is smaller than the middle element, continue the search in the first half.
- And if the element is greater than the middle element, continue the search in the second half.
- Repeat steps 3, 4, and 5 until the element is found.

Steps to implement binary search in C++

- In the given list, first find the middle of the list using the formula M = (L+R)/2, where M is the index of the middle element, L is the index of the leftmost element, and R is the index of the rightmost element.
- If the element to be searched is at the middle index, terminate the search, as the element is found.
- Else, check whether the element is smaller or greater than the middle element.
- If it is smaller than the middle element, continue the search in the first half.
- And if it is greater than the middle element, continue the search in the second half.
- Repeat all the previous steps until the element is found.
Algorithm for Binary Search in C++

- while(low <= high): mid = (low + high)/2
- if(a[mid] < search_element) low = mid + 1
- else if(a[mid] > search_element) high = mid - 1
- If found, return the index
- Else, return -1

C++ Program for Binary Search

```cpp
#include <iostream> // Header file
using namespace std;

int binarySearch(int[], int, int, int);

int main() // Main function
{
    int arr[10] = {19, 26, 37, 53, 56, 61, 77, 82, 87, 91};
    int search_element, location = -1;
    cout << "Enter the element which you want to search: ";
    cin >> search_element;
    location = binarySearch(arr, 0, 9, search_element);
    if (location != -1)
    {
        cout << "Search element found at " << location << " location"; // Printing the result
    }
    else
    {
        cout << "Search element not present";
    }
    return 0;
}

int binarySearch(int a[], int left, int right, int search_element)
{
    if (right >= left)
    {
        int middle = (left + right) / 2;
        if (a[middle] == search_element) // Element is present at the middle
        {
            return middle + 1;
        }
        else if (a[middle] < search_element) // Element is in the greater half
        {
            return binarySearch(a, middle + 1, right, search_element);
        }
        else // Element is in the smaller half
        {
            return binarySearch(a, left, middle - 1, search_element);
        }
    }
    return -1;
}
```

Output:

```
Enter the element which you want to search: 53
Search element found at 4 location
```

Time Complexity for Binary Search

- Best: O(1)
- Average: O(log n)
- Worst: O(log n)
- Space complexity: O(1)
- Average comparisons: log(N+1)
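For comparison, the same half-interval logic can be sketched iteratively in a few lines of Python. This is only an illustration of the algorithm, not part of the original program, and it returns a 0-based index rather than the 1-based location the C++ version prints:

```python
def binary_search(a, x):
    left, right = 0, len(a) - 1
    while left <= right:
        mid = (left + right) // 2
        if a[mid] == x:
            return mid            # 0-based index of the match
        elif a[mid] < x:
            left = mid + 1        # keep searching the upper half
        else:
            right = mid - 1       # keep searching the lower half
    return -1                     # element not present

arr = [19, 26, 37, 53, 56, 61, 77, 82, 87, 91]
print(binary_search(arr, 53))     # → 3
print(binary_search(arr, 40))     # → -1
```

The iterative form makes the O(1) space cost explicit: only the two boundary indices are updated, with no recursion stack.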
https://prepinsta.com/cpp-program/binary-search-program/
What exactly is a unicode string? What's the difference between a regular string and a unicode string? What is utf-8? I'm trying to learn Python right now, and I keep hearing this buzzword. What does the code below do?

i18n Strings (Unicode)

> ustring = u'A unicode \u018e string \xf1'
> ustring
u'A unicode \u018e string \xf1'   ## (ustring from above contains a unicode string)

> s = ustring.encode('utf-8')
> s
'A unicode \xc6\x8e string \xc3\xb1'   ## bytes of utf-8 encoding

> t = unicode(s, 'utf-8')   ## Convert bytes back to a unicode string
> t == ustring              ## It's the same as the original, yay!
True

import codecs
f = codecs.open('foo.txt', 'rU', 'utf-8')
for line in f:
    # here line is a *unicode* string

This answer is about Python 2. In Python 3, str is a Unicode string. Python's str type is a collection of 8-bit characters. The Latin alphabet can be represented using these 8-bit characters, but symbols such as ±, ♠, Ω and ℑ cannot. Unicode is a standard for working with a wide range of characters. Each symbol has a codepoint (a number), and these codepoints can be encoded (converted to a sequence of bytes) using a variety of encodings. UTF-8 is one such encoding. The low codepoints are encoded using a single byte, and higher codepoints are encoded as sequences of bytes. Python's unicode type is a collection of codepoints. The line ustring = u'A unicode \u018e string \xf1' creates a Unicode string with 20 characters. When the Python interpreter displays the value of ustring, it escapes two of the characters (Ǝ and ñ) because they are not in the standard printable range. The line s = ustring.encode('utf-8') encodes the Unicode string using UTF-8. This converts each codepoint to the appropriate byte or sequence of bytes. The result is a collection of bytes, which is returned as a str. The size of s is 22 bytes, because two of the characters have high codepoints and are encoded as a sequence of two bytes rather than a single byte.
When the Python interpreter displays the value of s, it escapes four bytes that are not in the printable range (\xc6, \x8e, \xc3, and \xb1). The two pairs of bytes are not treated as single characters like before because s is of type str, not unicode. The line t = unicode(s, 'utf-8') does the opposite of encode(). It reconstructs the original codepoints by looking at the bytes of s and parsing byte sequences. The result is a Unicode string. The call to codecs.open() specifies utf-8 as the encoding, which tells Python to interpret the content of the file (which is a collection of bytes) as a Unicode string that has been encoded using UTF-8.
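In Python 3 the same round trip looks like this (str is always Unicode, and bytes replaces the old byte-oriented str); a minimal sketch:

```python
# Python 3: str holds codepoints, bytes holds encoded data.
ustring = 'A unicode \u018e string \xf1'  # 20 codepoints

encoded = ustring.encode('utf-8')         # bytes: 22 bytes, since two of the
                                          # codepoints need two bytes each
decoded = encoded.decode('utf-8')         # back to the original str

print(len(ustring), len(encoded), decoded == ustring)  # 20 22 True
```

Note that in Python 3 there is no `unicode()` builtin; decoding is done with the `bytes.decode` method instead.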
https://codedump.io/share/sU9Dx9PSHEv5/1/what-is-a-unicode-string
C++ Seasoning - Date: September 4, 2013 from 11:00AM to 12:15PM - Day 1 - 002 - Speakers: Sean Parent - 85,149 Views - 35 Comments

And, indeed, we _need_ Sean to write a book! +1 for the book.

Great presentation, have to watch it a couple times more to grasp the last bits of it, can't wait! Definitely he must write down a book with use cases on applying that wisdom.

Great talk! Looking forward to watching it again once it is online. When will the video be put online? yes

Great 3 suggestions and the vivid examples, +1 for the book, just can not wait :D

+1 for the book. The examples were well thought out and consumable by less-experienced C++ developers like me.

+100 for the book. And it would be great to see actual (fresh) ASL code.

@dzyashu - I'm working on updating ASL to C++11, cleaning out a lot of overlap with C++11 and newer Boost, and I'll be moving the license shortly to the Boost license to make it easier to incorporate into Boost. Work is currently happening on github at github.com/stlab in the legacy repo (plan is to migrate pieces out as work completes). Progress is slow, but it is moving.

Can we have the slides uploaded here?

@SeanParent - Cool! Can't wait to appreciate. By the way, I understand that it takes a huge amount of effort and time to write a book, so maybe you or Adobe has (or would create) some kind of public blog for C++ developers - that would be a great thing as well.

Slides for all the sessions would be nice. It's much easier to take notes that way.

Amazing talk! The "no raw pointers" inheritance example code is really interesting. Any chance we could have the source code for both the shared and unique version to study? Thanks.

In the final Q&A part, did anyone get the answer about copying a big object and then moving it?
Sean is saying something like: for big objects, there is an extra move that the compiler cannot remove if user defined? I didn't get this point; has anyone an explanation?

Illuminating! Can we please get the slides/pdf? The presentation has to be read and read again. No question, if Sean writes a book I'll purchase it asap!

Which book(s) do you recommend to learn the standard library algorithms and their many uses?

This was by far one of my most favorite talks of the conference. Seeing a "mess of code" (we've all written code like that) transformed into its final state was a real treat. It was my first time watching Sean speak - he's a great oral teacher. Thank you!

@Vandamme: The comment near the end was about passing sink arguments by value. The person commenting noted that passing by value, as opposed to passing by rvalue ref and const lvalue ref, may impose an additional move operation. My reply was that yes, passing by rvalue ref can be a win, but it is a combinatoric problem, since you can have multiple sink arguments to a function (common for constructors). The commenter then noted that you can instead pass by universal reference and use enable_if<> to limit to the desired types to avoid the combinatoric problem. This works, but it is complex. IMO, passing by value is sufficient in nearly all cases; if move (which should be a constant time operation proportional to the local area of your object, typically small) shows up on a profile in a critical section then by all means, optimize. Here is what the three cases would look like: The last case is probably a bit wrong - but without spending time with the compiler I'd have a difficult time getting it right. Hopefully that makes it clear why I prefer just pass by value.

My favorite talk this time. Perhaps because I work on a code base where curly braces are often nested 10 times. After hearing about raw loops I looked at tutorial examples of the boost::gil documentation.
Because nothing says "loops" like image, and the speaker is from Adobe. :) It looks like one-dimensional convolution is std::transform_accumulate (which does not exist in the standard library, and neither does an accumulating output iterator). Or is there such a thing in the standard lib? Two-dimensional is a bit trickier: you need a view of a two-dimensional array which provides a one-dimensional iterator... And you must do it on every pixel, so you need a view iterator. Here is the challenge I have for myself: apply a two-dimensional convolution to a two-dimensional array without nested raw loops while maximizing standard library usage. Anyone?

I posted an updated version of the slides from this talk here:

@Sean: Thanks a lot for your explanation above and the slides, everything is clear now for me!

@SeanParent: The code samples illustrating the three ways of dealing with the permutation problem are very useful and clearly demonstrate the readability advantages of passing by value. There is just one error there: the snippet with enable_if should std::forward its parameters, not std::move.

@mstone: Thanks - I edited the comment.

Anyone else having trouble using the 'gather' function from the slides in vs2012? 'not1' refuses to accept the lambda: [](int v) -> bool { return (v % 2) == 0; } Have I forgotten how to use C++ or is it another Visual Studio bug? One of those bad days...

@depths: There is an optional slide I didn't show that discusses the issue. It is on page 209 in this slide deck <>. That page also shows an implementation using a lambda instead of not1() to do the predicate negation. I'm hoping that C++14 fixes not1(). In case you have trouble viewing the slide - here is the code:

@SeanParent: Question 1: I'm confused. I've gotten several suggestions, each contradicting the previous one. For best performance should I pass by value or by universal reference? Is it a case-by-case basis? Even an extra move could impact performance.
Question 2: How would you write column-based sorting that obeys the previous sort? It sounds like subrange sorting.

Sorted by first name:
John Smith, New York
John Doe, Ohio
John Stone, Alaska
Micheal Smith, New York
Micheal Doe, Georgia
Micheal Stone, Alaska

Now subrange sorted by state:
John Stone, Alaska
John Smith, New York
John Doe, Ohio
Micheal Stone, Alaska
Micheal Doe, Georgia
Micheal Smith, New York

@jerry: Answer 1: For best performance pass by universal reference or supply all permutations of r-value/l-value. But profile first - it usually will do no better than pass by value and may end up slower since you have more code polluting your icache. Either approach adds size and complexity to your code - you might be better off investing in other optimizations.

Answer 2: Use std::stable_sort() to obey the previous ordering(s). For example, let's say the user clicked on the "state" column - you would sort by state as: Then the user clicked on the "first" column - you would sort by first as: The result would be as in your subrange sorted example - but the idea extends to any number of columns.

Great talk! In your bad_cow example, code like: if (0 == --object_m->count_m) delete object_m; also has a data race, correct? In between the check and the delete, if unlucky, somebody on another thread could make a copy of that bad_cow and bump up the ref count, thus you will end up deleting the object while someone else thinks it is still valid. Or does this happen so rarely in practice that code like this is usually ok to write?

Ugh, I meant race condition in my previous comment. Actually no, it should be safe. If the ref count did get to 0, nothing else can grab a reference to it and bump the refcount. Apologies!

@Sean: First, thanks for answering questions on here! Very appreciated. You mention that there's a language defect that requires you to write a move assignment operator. Can you elaborate on what exact defect you were referring to?
Why would a "unifying assignment"[1] operator not work (where you take the parameter by value to achieve copy and swap for lvalues and move construction into the temporary for rvalues)? (or am I misunderstanding something here?) Thanks!

[1]:

@Chase: I answered your question in the comment section for my other talk:

@Sean: Sorry I missed that; thanks for the explanation! (I, too, want your book!)

I love to learn new things... and what you explained was amazing. Please go on and teach us how to make things as simple as they can be. Thanks indeed for the talk, I'm eager to read your book. Amaaaaaziiiing!!!!
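The std::stable_sort approach from Answer 2 above can be sketched as follows; the Row struct and helper function are my own illustration, not code from the talk:

```cpp
#include <algorithm>
#include <string>
#include <vector>

struct Row { std::string first, last, state; };

// First sort by the previously clicked column ("state"), then
// stable_sort by the newly clicked column ("first"): among rows with
// equal first names, the earlier by-state ordering is preserved.
std::vector<Row> sortByStateThenFirst(std::vector<Row> rows) {
    std::sort(rows.begin(), rows.end(),
              [](const Row& a, const Row& b) { return a.state < b.state; });
    std::stable_sort(rows.begin(), rows.end(),
                     [](const Row& a, const Row& b) { return a.first < b.first; });
    return rows;
}
```

Only the most recent sort needs to be stable for this to work, which is why the idea extends to any number of columns: each new click is a stable sort on top of whatever ordering is already there.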
http://channel9.msdn.com/Events/GoingNative/2013/Cpp-Seasoning
#include <mount_point.h>

The mount_point class is used to represent a mount point parsed from a source file. Definition at line 28 of file mount_point.h.

The destructor. It is not virtual; thou shalt not derive from this class. Definition at line 24 of file mount_point.cc.

The constructor. Definition at line 29 of file mount_point.cc.

The copy constructor. Definition at line 39 of file mount_point.cc.

The default constructor. Do not use.

The get_mount_point method may be used to obtain the position within the file system at which to mount a file system. Definition at line 68 of file mount_point.h.

The get_source_location method may be used to obtain the location of the mount point within the text of the source mount map. Definition at line 75 of file mount_point.h.

The is_absolute method is used to determine whether this mount point is relative (does not start with '/') or absolute (starts with '/'). Definition at line 68 of file mount_point.cc.

The addition operator may be used to glue two mount points together, as is sometimes required for complex automap rows. Definition at line 59 of file mount_point.cc.

The assignment operator. Definition at line 47 of file mount_point.cc.

The locn instance variable is used to remember the location of the mount point within the text of the source mount map. Definition at line 101 of file mount_point.h.

The path instance variable is used to remember the position within the file system at which to mount a file system. Definition at line 95 of file mount_point.h.
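To illustrate the interface described above, here is a toy sketch; it is not the real nis-util class, the source-location member is omitted, and the exact gluing behavior of operator+ is a guess based on the description:

```cpp
#include <string>

// Toy model of the documented mount_point interface.
class mount_point_sketch {
    std::string path;  // position within the file system
public:
    explicit mount_point_sketch(std::string p) : path(std::move(p)) {}

    // get_mount_point: the position within the file system.
    const std::string& get_mount_point() const { return path; }

    // is_absolute: does the mount point start with '/'?
    bool is_absolute() const { return !path.empty() && path[0] == '/'; }

    // The addition operator glues two mount points together.
    mount_point_sketch operator+(const mount_point_sketch& rhs) const {
        return mount_point_sketch(path + "/" + rhs.path);
    }
};
```

A quick use: `mount_point_sketch("/export") + mount_point_sketch("home")` would yield a mount point whose path is "/export/home".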
http://nis-util.sourceforge.net/doxdoc/classmount__point.html
In many cases, the protocol only needs to connect to the server once, and the code just wants to get a connected instance of the protocol. In those cases twisted.internet.endpoints provides the appropriate API, and in particular connectProtocol, which takes a protocol instance rather than a factory.

from twisted.internet import reactor
from twisted.internet.protocol import Protocol
from twisted.internet.endpoints import TCP4ClientEndpoint, connectProtocol

class Greeter(Protocol):
    def sendMessage(self, msg):
        self.transport.write("MESSAGE %s\n" % msg)

def gotProtocol(p):
    p.sendMessage("Hello")
    reactor.callLater(1, p.sendMessage, "This is sent in a second")
    reactor.callLater(2, p.transport.loseConnection)

point = TCP4ClientEndpoint(reactor, "localhost", 1234)
d = connectProtocol(point, Greeter())
d.addCallback(gotProtocol)
reactor.run()

Regardless of the type of client endpoint, the way to set up a new connection is simply to pass it to connectProtocol along with a protocol instance. For more information on the different ways you can make outgoing connections to different types of endpoints, as well as parsing strings into endpoints, see the documentation for the endpoints API. You may come across code using ClientCreator, an older API which is not as flexible as the endpoint API; rather than calling connect on an endpoint, such code constructs a ClientCreator and calls one of its connect methods, such as connectTCP. Another lower-level style connects a factory directly through the reactor:

from twisted.internet import reactor
reactor.connectTCP(host, port, EchoClientFactory())
reactor.run()

Note that clientConnectionFailed is called when a connection could not be established, and that clientConnectionLost is called when a connection was made and then disconnected. IReactorTCP.connectTCP provides support for IPv4 and IPv6 TCP clients. The host argument it accepts can be either a hostname or an IP address literal.
In the case of a hostname, the reactor will automatically resolve the name to an IP address before attempting the connection. This means that for a hostname with multiple address records, reconnection attempts may not always go to the same server (see below). It also means that there is name resolution overhead for each connection attempt. If you are creating many short-lived connections (typically around hundreds or thousands per second) then you may want to resolve the hostname to an address first and then pass the address to connectTCP instead.

Often, the connection of a client will be lost unintentionally due to network problems. One way to reconnect after a disconnection is to call connector.connect() from the factory's clientConnectionLost method when the connection is lost.

The clients so far have been fairly simple. A more complicated example comes with Twisted Words in the doc/words/examples directory.

# Copyright (c) Twisted Matrix Laboratories.
# See LICENSE for details.

"""
An example IRC log bot - logs a channel's events to a file.

If someone says the bot's name in the channel followed by a ':', e.g.

    <foo> logbot: hello!

the bot will reply:

    <logbot> foo: I am a log bot

Run this script with two arguments, the channel name the bot should
connect to, and the file to log to, e.g.:

    $ python ircLogBot.py test test.log

will log channel #test to the file 'test.log'.

To run the script:

    $ python ircLogBot.py <channel> <file>
"""

from __future__ import print_function

It does the same thing the example above does, using the protocol attribute of the factory to create the protocol instance. In the example above, the factory could be rewritten to look like this:

class LogBotFactory(protocol.ClientFactory):
    protocol = LogBot

    def __init__(self, channel, filename):
        self.channel = channel
        self.filename = filename
https://twistedmatrix.com/documents/current/core/howto/clients.html
C++ Programming/Classes

public
This label indicates that any members within the 'public' section can be accessed freely anywhere a declared object is in scope.

private
Members within the 'private' section are accessible only from within the class that defines them (and its friends).

protected
The protected label has a special meaning for inheritance: protected members are accessible in the class that defines them and in classes that inherit from that base class, or friends of it. In the section on inheritance we will see more about it.

Inheritance (Derivation)

As seen earlier when introducing the programming paradigms, inheritance is a property that describes a relationship between two (or more) types, or classes, of objects, in which one is said to be derived from another.

- Base Class: A base class is a class that is created with the intention of deriving other classes from it.
- Child Class: A child class is a class that was derived from another, which will now be the parent class to it.
- Parent Class: A parent class is the closest class that we derived from to create the one we are referencing as the child class.

As an example, suppose you are creating a game, something using different cars, and you need a specific type of car for the policemen and another type for the player(s). Both car types share similar properties. The major difference (in this example case) would be that the policemen's cars have sirens on top and the players' cars will not.
One way of getting the cars for the policemen and the player ready is to create separate classes for the policemen's car and for the player's car, like this:

class PlayerCar {
private:
    int color;
public:
    void driveAtFullSpeed(int mph){
        // code for moving the car ahead
    }
};

class PoliceCar {
private:
    int color;
    bool sirenOn;  // identifies whether the siren is on or not
    bool inAction; // identifies whether the police is in action (following the player) or not
public:
    bool isInAction(){
        return this->inAction;
    }
    void driveAtFullSpeed(int mph){
        // code for moving the car ahead
    }
};

and then creating separate objects for the two cars like this:

PlayerCar player1;
PoliceCar policemen1;

So far, so good, except for one thing that you can easily notice: there are certain parts of code that are very similar (if not exactly the same) in the above two classes. In essence, you have to type in the same code at two different locations! And when you update your code to include methods (functions) for handBrake() and pressHorn(), you'll have to do that in both the classes above. Therefore, to escape this frustrating (and confusing) task of writing the same code at multiple locations in a single project, you use Inheritance.

Now that you know what kind of problems Inheritance solves in C++, let us examine how to implement Inheritance in our programs. As its name suggests, Inheritance lets us create new classes which automatically have all the code from existing classes. It means that if there is a class called MyClass, a new class with the name MyNewClass can be created which will have all the code present inside the MyClass class. The following code segment shows it all:

class MyClass {
protected:
    int age;
public:
    void sayAge(){
        this->age = 20;
        cout << age;
    }
};

class MyNewClass : public MyClass {
};

int main() {
    MyNewClass *a = new MyNewClass();
    a->sayAge();
    return 0;
}

As you can see, using the colon ':' we can inherit a new class out of an existing one. It's that simple!
All the code inside the MyClass class is now available to the MyNewClass class. And if you are intelligent enough, you can already see the advantages it provides. If you are like me (i.e. not too intelligent), you can see the following code segment to know what I mean:

class Car {
protected:
    int color;
    int currentSpeed;
    int maxSpeed;
public:
    void applyHandBrake(){
        this->currentSpeed = 0;
    }
    void pressHorn(){
        cout << "Teeeeeeeeeeeeent"; // funny noise for a horn
    }
    void driveAtFullSpeed(int mph){
        // code for moving the car ahead
    }
};

class PlayerCar : public Car {
};

class PoliceCar : public Car {
private:
    bool sirenOn;  // identifies whether the siren is on or not
    bool inAction; // identifies whether the police is in action (following the player) or not
public:
    bool isInAction(){
        return this->inAction;
    }
};

In the code above, the two newly created classes PlayerCar and PoliceCar have been inherited from the Car class. Therefore, all the methods and properties (variables) from the Car class are available to the newly created classes for the player's car and the policemen's car. Technically speaking, the Car class in this case is our "Base Class", since this is the class which the other two classes are based on (or inherit from).

Just one more thing to note here is the keyword protected instead of the usual private keyword. That's no big deal: we use protected when we want to make sure that the variables we define in our base class remain accessible in the classes that inherit from that base class. If you use private in the class definition of the Car class, those variables are still inherited by the derived classes, but the derived classes will not be able to access them directly.

There are three types of class inheritance: public, private and protected. We use the keyword public to implement public inheritance.
The classes that inherit with the keyword public from a base class inherit all the public members as public members; the protected data is inherited as protected data, and the private data is inherited but cannot be accessed directly by the derived class. The following example shows the class Circle that inherits "publicly" from the base class Form:

class Form {
private:
    double area;
public:
    int color;
    double getArea(){
        return this->area;
    }
    void setArea(double area){
        this->area = area;
    }
};

class Circle : public Form {
public:
    double getRatio() {
        double a;
        a = getArea();
        return 2 * sqrt(a / 3.14); // recover the diameter from the area
    }
    void setRatio(double diameter) {
        setArea( pow(diameter * 0.5, 2) * 3.14 );
    }
    bool isDark() {
        return (color > 10);
    }
};

The new class Circle inherits the attribute area from the base class Form (the attribute area is implicitly an attribute of the class Circle), but it cannot access it directly. It does so through the functions getArea and setArea (which are public in the base class and remain public in the derived class). The color attribute, however, is inherited as a public attribute, and the class can access it directly.

The following table indicates how the attributes are inherited in the three different types of inheritance:

Base class member | public inheritance | protected inheritance | private inheritance
public            | public             | protected             | private
protected         | protected          | protected             | private
private           | not accessible     | not accessible        | not accessible

As the table above shows, protected members are inherited as protected members in public inheritance. Therefore, we should use the protected label whenever we want to declare a method inaccessible outside the class without losing access to it in derived classes. However, losing accessibility can be useful sometimes, because we are encapsulating details in the base class. Let us imagine that we have a class with a very complex method "m" that invokes many auxiliary methods declared as private in the class. If we derive a class from it, we should not bother about those methods because they are inaccessible in the derived class.
If a different programmer is in charge of the design of the derived class, allowing access to those methods could be a cause of errors and confusion. So it is a good idea to avoid the protected label whenever we can achieve the same design with the private label.

Now one more additional "syntax trick". If the base / parent class has a constructor which requires parameters, we are in trouble, you may think. Calling constructors directly is forbidden, but there is a special syntax for this purpose: when you define the constructor of the derived class, you call the parent constructor in its initializer list, like this:

ChildClass::ChildClass(int a, int b) : ParentClass(a, b) {
    //Child constructor here
}

Multiple inheritance

Multiple inheritance allows the construction of classes that inherit from more than one type or class. This contrasts with single inheritance, where a class will only inherit from one type or class. Multiple inheritance can cause some confusing situations, and is much more complex than single inheritance, so there is some debate over whether or not its benefits outweigh its risks. Multiple inheritance has been a touchy issue for many years, with opponents pointing to its increased complexity and ambiguity in situations such as the "diamond problem". Many modern OOP languages do not allow multiple inheritance. The declared order of derivation is relevant for determining the order of default initialization by constructors and destructor cleanup.

class One {
    // class internals
};
class Two {
    // class internals
};
class MultipleInheritance : public One, public Two {
    // class internals
};

Data members

static data member

Subsumption property

Singleton class

A Singleton class is a class that can only be instantiated once (similar to the use of static variables or functions).
It is one of the possible implementations of a creational pattern, which is fully covered in the Design Patterns Section of the book.
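The parent-constructor syntax described earlier can be exercised with a small, self-contained example; the class names are placeholders matching the chapter's snippet:

```cpp
class ParentClass {
    int sum;
public:
    ParentClass(int a, int b) : sum(a + b) {}
    int getSum() const { return sum; }
};

class ChildClass : public ParentClass {
public:
    // The parent constructor is invoked through the initializer list;
    // it cannot be called like an ordinary function from the body.
    ChildClass(int a, int b) : ParentClass(a, b) {}
};
```

Constructing `ChildClass c(2, 3);` first runs ParentClass(2, 3), so the base subobject is fully initialized before the child constructor body executes.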
https://en.m.wikibooks.org/wiki/C%2B%2B_Programming/Classes
I created a background image bitmap for a view, and now the view is being stretched to the size of the background image. Is this normal?

<?xml version="1.0" encoding="utf-8"?>
<bitmap xmlns:

Here's how I apply it to my view:

v.setBackgroundResource(R.drawable.backgroundgreen);

For instance, if the image is 500px in height and the view is 200px in height (with its height set to wrap_content), then after setting the image as background my view becomes 500px in height.

According to me, the problem you are facing is not really a problem; it is how Android layouts are designed. You can set the height and width with 3 default constant values:

FILL_PARENT: Special value for the height or width requested by a View. FILL_PARENT means that the View wants to be as big as its parent, minus the parent's padding if any. This value is deprecated starting in API Level 8 and replaced by MATCH_PARENT.

MATCH_PARENT: Special value for the height or width requested by a View. MATCH_PARENT means that the View wants to be as big as its parent, minus the parent's padding if any. Introduced in API Level 8.

WRAP_CONTENT: Special value for the height or width requested by a View. WRAP_CONTENT means that the View wants to be just large enough to fit its own internal content, taking its own padding into account.

Now, when you set the View's height/width to WRAP_CONTENT, you are allowing the view to take just enough size to show the view's content. The background image is also part of the View's content, hence your view will be shown at the size of the image. That's not a problem; that's how it works. Okay, but in your situation that is an issue, because you have a background to show and the view should not be stretched for it. I can suggest a few ways: First and very obvious: make correctly sized images and keep them in different drawable folders. Or specify the size of the view not using constants, but in dp.
If it becomes necessary, make different layout XML files for different sizes and keep them in layout folders. A very useful thing for laying out your design is layout weight.

Answer: I have faced this same problem. If the background image is bigger than the view, the view's size will change to match the image's size.

Solution:
- Put the view inside a RelativeLayout.
- Remove the background image.
- Add an ImageView before the View inside the RelativeLayout.
- Set the src of the ImageView to your background image.

<RelativeLayout . . . >
    <ImageView android:
    <YourView android:id="@+id/yourViewId" . .. ... />
</RelativeLayout>

This can all be done in code, of course.

Answer: I suggest creating a wrapper layout and putting the background image there. I'm using it that way and it fits very nicely. See the example below.

<ScrollView xmlns:
    <!-- your layouts and components go here -->
</ScrollView>

… Social Coding @ AspiroTV

Answer: It's working for me.

< ?xml

@Override
protected void onCreate(Bundle savedInstanceState) {
    // TODO Auto-generated method stub
    super.onCreate(savedInstanceState);
    setContentView(R.layout.prueba);
    ((LinearLayout)findViewById(R.id.LinearLayoutTest)).setBackgroundResource(R.drawable.greenrepeat);
}

In your code, what is v? Does it have its params set to fill_parent?

Answer: The reason is quite simple. You should look at the View::getSuggestedMinimumWidth/Height methods.

protected int getSuggestedMinimumWidth() {
    return (mBackground == null) ? mMinWidth : max(mMinWidth, mBackground.getMinimumWidth());
}

protected int getSuggestedMinimumHeight() {
    return (mBackground == null) ? mMinHeight : max(mMinHeight, mBackground.getMinimumHeight());
}

Seeing that, you know why the background makes a view bigger, especially when you assign a BitmapDrawable to it. And the simple solution is to wrap that Drawable (eg.
BitmapDrawable), and have it return 0 from getMinimumHeight() and getMinimumWidth(), and better yet, override getIntrinsicHeight() and getIntrinsicWidth() to return -1.

support-v7 has a DrawableWrapper which delegates calls to another drawable when necessary. You can extend that one and override the methods discussed above. And if you don't use support-v7 (wow, you are awesome), copying that class into your project is also fine.

Answer: Don't use android:tileMode="repeat". Is your green drawable bigger or smaller than your view? Could you add more details?

Answer: One good solution that works perfectly in my case is extending the View and overriding onMeasure(). Here are the steps to do it:

- Create your own class and extend the View you want to use; here, for example, I will use Button.
- Override the method onMeasure() and insert the code at the bottom. This will set the background resource after the first measure has been done. For the second measure event, it will use the already measured parameters.

Example code for a custom view which extends Button (change Button to the View you would like to extend):

public class MyButton extends Button {
    boolean backGroundSet = false;

    public MyButton(Context context) {
        super(context);
    }

    public MyButton(Context context, AttributeSet attrs) {
        super(context, attrs);
    }

    public MyButton(Context context, AttributeSet attrs, int defStyleAttr) {
        super(context, attrs, defStyleAttr);
    }

    @Override
    protected void onMeasure(int widthMeasureSpec, int heightMeasureSpec) {
        if (backGroundSet) {
            setMeasuredDimension(getMeasuredWidth(), getMeasuredHeight());
            return;
        }
        super.onMeasure(widthMeasureSpec, heightMeasureSpec);
        backGroundSet = true;
        setBackgroundResource(R.drawable.button_back_selector);
    }
}

The only things to change here are the type of view you want to extend and, in the onMeasure() method, the background resource you want to use for the view. After that, just use this view in your layout XML or add it programmatically.

Tags: android, image, view
https://exceptionshub.com/android-setting-a-background-image-to-a-view-stretches-my-view.html
NAME
vga_imageblt - copy a rectangular pixmap from system memory to video memory

SYNOPSIS
#include <vga.h>

void vga_imageblt(void *srcaddr, int destaddr, int w, int h, int pitch);

DESCRIPTION
Write a rectangular pixmap from system memory to video memory. destaddr is an offset into video memory (up to 2M). The pitch is the logical width of the screen. Height h is in pixels; width w is in BYTES! It fills the given box with the data in the memory area *srcaddr. The memory buffer must contain the pixels in the same representation as used in the video memory, starting at the top left corner, from left to right, and then, line by line, from top to bottom, without any gaps or interline spaces.
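The copy semantics described above (h rows of w bytes each, placed pitch bytes apart in the destination) can be modelled on plain memory buffers. This is an illustrative re-implementation of the layout, not the svgalib function itself:

```c
#include <stddef.h>
#include <string.h>

/* Toy model of the blit: copy an h-row pixmap, w bytes wide, from src
 * into a framebuffer whose rows are pitch bytes apart, starting at
 * byte offset destaddr. */
void toy_imageblt(const unsigned char *src, unsigned char *fb,
                  int destaddr, int w, int h, int pitch)
{
    for (int row = 0; row < h; ++row)
        memcpy(fb + destaddr + (size_t)row * pitch,
               src + (size_t)row * w,
               (size_t)w);
}
```

Note how the source is packed (rows w bytes apart) while the destination rows are pitch bytes apart, which is exactly why pitch (the logical screen width) must be passed separately from w.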
http://manpages.ubuntu.com/manpages/precise/man3/vga_imageblt.3.html
The System.Data.Common namespace provides generic classes for working with data. These classes are intended to give developers a way to write ADO.NET code that will work against all .NET Framework data providers. Additional resources on this topic:
- Provides an overview of the factory design pattern and programming interface.
- Demonstrates how to retrieve information about the .NET Framework data providers installed on the local computer.
- Demonstrates how to work with factories to access data sources across multiple providers.
- Introduces features that are new in ADO.NET.
- Provides an introduction to the design and components of ADO.NET.
- Describes secure coding practices when using ADO.NET.
- Describes how to create and use a DataSet, a typed DataSet, a DataTable, and a DataView.
http://msdn.microsoft.com/en-us/library/t9f29wbk(VS.80).aspx
The general form of an immediate function is:

(function(){ //...})();

This defines a function as the instructions between the curly brackets {} and then executes it immediately by ending it with the usual (). In this particular case we have two event handling functions defined, where we are invited to put our startup code, and a final call to a custom Application object's start method. What is the point of doing things this way? The first is that defining variables using:

var myVariable;

anywhere within the immediate function makes them local to the function. That is, as soon as the outer function finishes the variables are destroyed. This is the big advantage of this approach, in that you don't pollute the global namespace, but now the only namespace you have is local to the function. Suppose you really want to create a variable that is accessible after the immediate function has completed? At this point you might be thinking of creating a global variable. In most JavaScript implementations, if you forget to use the var in front of a variable name it is created as a global, but the "use strict" directive turns this into an error. So unless you remove the use strict statement you can't actually define a global variable, and removing the use strict isn't a good idea. WinJS also solves the problem of the single local namespace by introducing some facilities to add global objects and properties to the global namespace. We will come back to exactly how this works in a later chapter, but all you have to do is use the WinJS.Namespace object's define method. What really matters is that you realize that any variables you declare within the immediate function are not global and are destroyed before the main part of your JavaScript code ever gets to run. Of course there are exceptions to this rule. Any function or variable that you assign to a property of an existing global object will not vanish after the immediate function completes. This is how the event handlers manage to survive.
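The scoping behaviour described above is easy to demonstrate in plain JavaScript. In this sketch the MyApp object is just a stand-in for what WinJS.Namespace.define produces; it is not WinJS code:

```javascript
"use strict";

// A stand-in for a namespace object that survives the immediate function.
var MyApp = {};

(function () {
    // Local to the immediate function: gone as soon as it returns.
    var secret = 42;

    // Explicitly exported: survives because it hangs off MyApp,
    // and the closure keeps `secret` alive for it.
    MyApp.getSecret = function () { return secret; };
})();

console.log(MyApp.getSecret());   // 42
console.log(typeof secret);       // "undefined" - not in scope out here
```

This is exactly why the template's event handlers survive: they are assigned to properties of an object that outlives the immediate function.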
Some JavaScript programmers look at the structure of a WinJS application and think that it's unnecessarily complicated, but I'm sure you will see that it makes good sense to structure your programs in this way. As an example of getting an app running, let's place a button on the screen and hook up an event handler to change the button's text to the usual greeting when it is clicked. This is a deliberately simple example. If you want something more complicated you can always take a look at the official documentation's example which, as always, is more about impressing you with how powerful the system is rather than getting you started. First we need a button, and we might as well use an HTML5 button for the job - you can use an HTML input tag if you want in more or less the same way:

<body>
  <button id="Button1">Click Me</button>
</body>

Put this code in place of the "Content Goes Here" line in the default.html file. There is also the small matter of making sure that the button has actually been created before this code runs, but for an HTML object created by tags there is no problem. Later we will have to be more careful. If you run the code you will see the button appear, and when you click it the message will replace the Click Me caption. Notice that the button's positioning follows the usual HTML layout rules - CSS float, tables etc. You can define styles that are "local" to the application in the default.css file. The WinJS framework has its own style sheet, which in the case of this project is ui-dark.css. There are some additional layout controls, but these are not HTML5-standard components but part of the WinJS extensions. They work well, but if you use them your app isn't going to be portable to a standard web browser - but then your JavaScript is already non-standard. Also notice the use of the addEventListener method to attach the event handler.
This is the preferred way of doing the job in HTML5, and it is a mystery why the template instead assigns handlers to the more traditional WinJS.Application event properties.
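The handler wiring the article describes (change the caption on click) appears to have been lost from this copy of the text. Here is a minimal reconstruction; the function name wireGreeting, the document parameter, and the exact greeting string are my own choices, not the book's:

```javascript
// Look the button up and attach a click handler that swaps its caption,
// as the text describes. Passing in a document-like object keeps the
// sketch testable; in default.js you would call wireGreeting(document)
// once the DOM is ready.
function wireGreeting(doc) {
    var button = doc.getElementById("Button1");
    button.addEventListener("click", function () {
        button.innerText = "Hello World";   // the "usual greeting"
    }, false);
    return button;
}

// Quick check with a stub standing in for the DOM:
var stub = {
    innerText: "Click Me",
    addEventListener: function (type, fn) { this.onclick = fn; }
};
var btn = wireGreeting({ getElementById: function () { return stub; } });
btn.onclick();                    // simulate the click
console.log(btn.innerText);       // "Hello World"
```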
http://i-programmer.info/ebooks/creating-javascripthtml5-metro-applications/4045-getting-started-with-metro-javascript.html?start=1
There are two main approaches in XML parsing: SAX and DOM. The SAX specification defines an event-based approach where the implemented parsers scan through XML data and use call-back handlers whenever certain parts of the document have been reached. On the other hand, the DOM specification defines a tree-based approach to navigating an XML document. In a previous tutorial we saw how to parse XML documents using SAX. A DOM object is relatively resource intensive and perhaps not suitable for use in a mobile environment. A SAX parser is quite a bit more lightweight and uses a smaller memory footprint. SAX is a push parsing API, but the approach it uses is somewhat "broken" in the sense that, rather than being called by the parsing application, the SAXParser uses a message handler with "call backs". An alternative to that is using a relatively new practice, the "pull parsing" approach. In short, the main difference with this approach is that the user code is in control and can pull more data when it is ready to process it. You can find an excellent article on processing XML with the XML Pull Parser, as well as some XML pull parsing patterns. The Android SDK includes support for XML pull parsing (which, surprisingly, has been there since the Level 1 API) via the XML Pull package. The main class used is XmlPullParser, with the Javadoc page including a simple example of how to use the parser. In this tutorial I am going to show you how to add pull parsing capabilities to your Android application and how to implement a more sophisticated parser than the one provided by the API docs. If you are a regular JavaCodeGeeks reader, you probably know that I have started a tutorial series where I am building a full application from scratch. In its third part ("Android Full App, Part 3: Parsing the XML response"), I use an XML-based external API in order to perform movie searches.
A sample XML response is the following: a movie search for "Transformers" and (year) "2007". In that tutorial I presented the SAX-based approach, but now we are going to boost things up by using Android's XML pull parser. First, let's create a new Android project inside Eclipse. I am calling it "AndroidXmlPullParserProject". Here is a screenshot of the configuration used: The first step in using the XML Pull API is to obtain a new instance of the XmlPullParserFactory class. This class is used to create implementations of the XML pull parser defined in the XMLPULL V1 API. We will disable the namespace awareness of the factory since it is not required by the application's needs. Note that this will also improve the parsing speed. Next, we create a new XmlPullParser by invoking the newPullParser factory method. An input has to be provided to our parser, and that is accomplished via the setInput method, which requires an InputStream and an encoding as arguments. We provide an input stream obtained by a URL connection (since our XML document is an internet resource), but we do not provide an input encoding (null is just fine). XML pull parsing is event-based, and in order to parse the whole document the trick is to create a loop inside which we serially get all the parsing events until we reach the END_DOCUMENT event. As a showcase example, the code will just print log statements when the following events are encountered:
- START_TAG: An XML start tag was read.
- TEXT: Text content was read.
- END_TAG: An XML end tag was read.
- START_DOCUMENT: Parser is at the beginning of the document.
- END_DOCUMENT: Logical end of the XML document.
Here is the source code for our first simple implementation:

package com.javacodegeeks.android.xml.pull;

import java.io.InputStream;
import java.net.URL;
import java.net.URLConnection;

import org.xmlpull.v1.XmlPullParser;
import org.xmlpull.v1.XmlPullParserFactory;

import android.app.Activity;
import android.os.Bundle;
import android.util.Log;

public class XmlPullParserActivity extends Activity {

    private static final String xmlUrl = "";

    private final String TAG = getClass().getSimpleName();

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);
        try {
            parseFromUrl();
        } catch (Exception e) {
            Log.e(TAG, "Error while parsing", e);
        }
    }

    private void parseFromUrl() throws Exception {
        XmlPullParserFactory factory = XmlPullParserFactory.newInstance();
        factory.setNamespaceAware(false);
        XmlPullParser xpp = factory.newPullParser();
        URL url = new URL(xmlUrl);
        URLConnection ucon = url.openConnection();
        InputStream is = ucon.getInputStream();
        xpp.setInput(is, null);
        int eventType = xpp.getEventType();
        while (eventType != XmlPullParser.END_DOCUMENT) {
            if (eventType == XmlPullParser.START_DOCUMENT) {
                Log.d(TAG, "Start document");
            } else if (eventType == XmlPullParser.END_DOCUMENT) {
                Log.d(TAG, "End document");
            } else if (eventType == XmlPullParser.START_TAG) {
                Log.d(TAG, "Start tag " + xpp.getName());
            } else if (eventType == XmlPullParser.END_TAG) {
                Log.d(TAG, "End tag " + xpp.getName());
            } else if (eventType == XmlPullParser.TEXT) {
                Log.d(TAG, "Text " + xpp.getText());
            }
            eventType = xpp.next();
        }
    }
}

Include the INTERNET permission in your Android manifest file and launch the project. Go to the DDMS view of Eclipse and create a new filter using the class name "XmlPullParserActivity" as shown in the following image: You should then find the various log messages in the LogCat view: Notice that no special parsing has occurred. We just got notified when the parser found a new tag, reached the document end etc.
However, since we are sure that we have the basic infrastructure ready, we can step it up a bit. First, take a look at the sample XML (provided by the TMDb API): It is your typical XML document with nested elements etc. The data that are interesting to us are those inside the "movies" element. We will create a Movie class and map each child element to the corresponding class field. Moreover, we will also create an Image class using the same approach. Note that a Movie can have zero or more Images. Thus, the two domain model classes are:

package com.javacodegeeks.android.xml.pull.model;

import java.util.ArrayList;

public class Movie {
    public String score;
    public String popularity;
    public boolean translated;
    public boolean adult;
    public String language;
    public String originalName;
    public String name;
    public String type;
    public String id;
    public String imdbId;
    public String url;
    public String votes;
    public String rating;
    public String certification;
    public String overview;
    public String released;
    public String version;
    public String lastModifiedAt;
    public ArrayList<Image> imagesList;
}

package com.javacodegeeks.android.xml.pull.model;

public class Image {
    public String type;
    public String url;
    public String size;
    public int width;
    public int height;
}

We are now ready to start the parsing. We first create the factory and the pull parser the same way as before. Note that the document does not directly start with the "movies" element, but there are a few elements that we wish to skip. That is accomplished by using the methods nextTag (for START_TAG and END_TAG events) and nextText (for TEXT events). Now we are ready to proceed with the interesting parsing. We are going to use a "recursion-like" approach. The "movies" element contains a number of "movie" elements, where a "movie" element contains a number of "image" elements. Thus, we "drill down" from the parent elements to the child ones using a dedicated method for the parsing of each element.
From one method to another, we pass the XmlPullParser instance as an argument, since there is a single parser implementing the parsing. The result of each method is an instance of the model class and finally a list of movies. In order to check the name of the current element we use the getName method, and in order to retrieve the enclosed text we use the nextText method. For attributes, we use the getAttributeValue method, where the first argument is the namespace (null in our case) and the second is the attribute name. Enough talking, let's see how all these are translated to code:

package com.javacodegeeks.android.xml.pull;

import java.io.IOException;
import java.io.InputStream;
import java.net.URL;
import java.net.URLConnection;
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

import org.xmlpull.v1.XmlPullParser;
import org.xmlpull.v1.XmlPullParserException;
import org.xmlpull.v1.XmlPullParserFactory;

import android.app.Activity;
import android.os.Bundle;
import android.util.Log;

import com.javacodegeeks.android.xml.pull.model.Image;
import com.javacodegeeks.android.xml.pull.model.Movie;

public class XmlPullParserActivity extends Activity {

    private static final String xmlUrl = "";

    private final String TAG = getClass().getSimpleName();

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);
        try {
            List<Movie> movies = parseFromUrl();
            for (Movie movie : movies) {
                Log.d(TAG, "Movie:" + movie);
            }
        } catch (Exception e) {
            Log.e(TAG, "Error while parsing", e);
        }
    }

    private List<Movie> parseFromUrl() throws XmlPullParserException, IOException {
        List<Movie> moviesList = null;
        XmlPullParserFactory factory = XmlPullParserFactory.newInstance();
        factory.setNamespaceAware(false);
        XmlPullParser parser = factory.newPullParser();
        URL url = new URL(xmlUrl);
        URLConnection ucon = url.openConnection();
        InputStream is = ucon.getInputStream();
        parser.setInput(is, null);
        parser.nextTag();
        parser.nextTag();
        parser.nextTag();
        parser.nextTag();
        parser.nextText();
        parser.nextTag();
        moviesList = parseMovies(parser);
        return moviesList;
    }

    private List<Movie> parseMovies(XmlPullParser parser) throws XmlPullParserException, IOException {
        List<Movie> moviesList = new LinkedList<Movie>();
        Log.d(TAG, "parseMovies tag " + parser.getName());
        while (parser.nextTag() == XmlPullParser.START_TAG) {
            Log.d(TAG, "parsing movie");
            Movie movie = parseMovie(parser);
            moviesList.add(movie);
        }
        return moviesList;
    }

    private Movie parseMovie(XmlPullParser parser) throws XmlPullParserException, IOException {
        Movie movie = new Movie();
        Log.d(TAG, "parseMovie tag " + parser.getName());
        while (parser.nextTag() == XmlPullParser.START_TAG) {
            if (parser.getName().equals("name")) {
                movie.name = parser.nextText();
            } else if (parser.getName().equals("score")) {
                movie.score = parser.nextText();
            } else if (parser.getName().equals("images")) {
                Image image = parseImage(parser);
                movie.imagesList = new ArrayList<Image>();
                movie.imagesList.add(image);
            } else if (parser.getName().equals("version")) {
                movie.version = parser.nextText();
            } else {
                parser.nextText();
            }
        }
        return movie;
    }

    private Image parseImage(XmlPullParser parser) throws XmlPullParserException, IOException {
        Image image = new Image();
        Log.d(TAG, "parseImage tag " + parser.getName());
        while (parser.nextTag() == XmlPullParser.START_TAG) {
            if (parser.getName().equals("image")) {
                image.type = parser.getAttributeValue(null, "type");
                image.url = parser.getAttributeValue(null, "url");
                image.size = parser.getAttributeValue(null, "size");
                image.width = Integer.parseInt(parser.getAttributeValue(null, "width"));
                image.height = Integer.parseInt(parser.getAttributeValue(null, "height"));
            }
            parser.next();
        }
        return image;
    }
}

The code is pretty straightforward; just remember that we are using a "drill-down" approach in order to parse the deeper elements (Movies -> Movie -> Images).
Please note that in the Movie parsing method we included only some of the fields, for brevity reasons. Also, do not forget to invoke the parser.nextText() method in order to allow the parser to move on and fetch the next tag (else you will get some nasty exceptions, since the current event will not be of type START_TAG). Run the project's configuration again and check that LogCat contains the correct debugging statements: That's it! XML pull parsing capabilities straight in your Android application. You can download the Eclipse project created for this article here.

Comment: Please, I need your main XML file; I don't know how to see my result without the content of your main file. Thanks!
http://www.javacodegeeks.com/2010/11/boost-android-xml-parsing-xml-pull.html
From: ig25@fg70.rz.uni-karlsruhe.de (Thomas Koenig)
Newsgroups: comp.graphics.apps.gnuplot,comp.answers,news.answers
Subject: comp.graphics.apps.gnuplot FAQ (Frequent Answered Questions)
Followup-To: comp.graphics.apps.gnuplot
Date: 25 Jun 1997 15:25:39 +0200
Organization: University of Karlsruhe, Germany
Approved: news-answers-request@MIT.Edu
Summary: This is the FAQ (Frequently Answered Questions) list of the comp.graphics.apps.gnuplot newsgroup, which discusses the gnuplot program for plotting 2D and 3D graphs.
Keywords: computer graphics, gnuplot

comp.graphics.apps.gnuplot FAQ (Frequent Answered Questions)

This is the FAQ (Frequently Answered Questions) list of the comp.graphics.apps.gnuplot newsgroup, which discusses the gnuplot program for plotting 2D and 3D graphs. Most of the information in this document came from public discussion on comp.graphics.apps.gnuplot; quotations are believed to be in the public domain.

Here's a list of the questions. If you are looking for the answer to a specific question, look for the string Qx.x: at the beginning of a line, with x.x being the question number. Sections in this FAQ are:

* 0. Meta-Questions
* 1. General Information
* 2. Setting it up
* 3. Working with it
* 4. Wanted features
* 5. Miscellaneous
* 6. Making life easier
* 7. Known problems
* 8.
Credits

Questions:

Section 0: Meta-Questions
* Q0.1: Where do I get this document?
* Q0.2: Where do I send comments about this document?

Section 1: General Information
* Q1.1: What is gnuplot?
* Q1.2: How did it come about and why is it called gnuplot?
* Q1.3: Does gnuplot have anything to do with the FSF and the GNU project?
* Q1.4: What does gnuplot offer?
* Q1.5: Is gnuplot suitable for batch processing?
* Q1.6: Can I run gnuplot on my computer?

Section 2: Setting it up
* Q2.1: What is the current version of gnuplot?
* Q2.2: Where can I get gnuplot?
* Q2.3: How do I get gnuplot to compile on my system?
* Q2.4: What documentation is there, and how do I get it?

Section 3: Working with it
* Q3.1: How do I get help?
* Q3.2: How do I print out my graphs?
* Q3.3: How do I include my graphs in <word processor>?
* Q3.4: How do I post-process a gnuplot graph?
* Q3.5: How do I change symbol size, line thickness and the like?
* Q3.6: How do I generate plots in GIF format?

Section 4: Wanted features
* Q4.0: What's new in gnuplot 3.6?
* Q4.1: Does gnuplot have hidden line removal?
* Q4.2: Does gnuplot support bar-charts/histograms/boxes?
* Q4.3: Does gnuplot support multiple y-axes on a single plot?
* Q4.4: Can I put multiple plots on a single page?
* Q4.5: Can I put both data files and commands into a single file?
* Q4.6: Can I put Greek letters and super/subscripts into my labels?
* Q4.7: Can I do 1:1 scaling of axes?
* Q4.8: Can I put tic marks for x and y axes into 3d plots?
* Q4.9: Does gnuplot support a driver for <graphics format>?
* Q4.10: Can I put different text sizes into my plots?
* Q4.11: How do I modify gnuplot?
* Q4.12: How do I skip data points?

Section 5: Miscellaneous
* Q5.1: I've found a bug, what do I do?
* Q5.2: Can I use gnuplot routines for my own programs?
* Q5.3: What extensions have people made to gnuplot? Where can I get them?
* Q5.4: Can I do heavy-duty data processing with gnuplot?
* Q5.5: I have ported gnuplot to another system, or patched it. What do I do?
* Q5.6: I want to help in developing gnuplot 3.6. What can I do?

Section 6: Making life easier
* Q6.1: How do I plot two functions in non-overlapping regions?
* Q6.2: How do I run my data through a filter before plotting?
* Q6.3: How do I make it easier to use gnuplot with LaTeX?
* Q6.4: How do I save and restore my settings?
* Q6.5: How do I plot lines (not grids) using splot?
* Q6.6: How do I plot a function f(x,y) which is bounded by other functions in the x-y plane?
* Q6.7: How do I get rid of <feature in a plot>?
* Q6.8: How do I call gnuplot from my own programs?

Section 7: Known Problems
* Q7.1: Gnuplot is not plotting any points under X11! How come?
* Q7.2: My isoline data generated by a Fortran program is not handled correctly. What can I do?
* Q7.3: Why does gnuplot ignore my very small numbers?
* Q7.4: Gnuplot is plotting nothing when run via gnuplot <filename>! What can I do?
* Q7.5: My formulas are giving me nonsense results! What's going on?
* Q7.6: My Linux gnuplot complains about a missing gnuplot_x11. What is wrong?
* Q7.7: set output 'filename' isn't outputting everything it should!

Section 8: Credits

Section 0: Meta-Questions

Q0.1: Where do I get this document?

This document is posted about once every two weeks to the newsgroups comp.graphics.apps.gnuplot, comp.answers and news.answers. Like many other FAQ's, its newest (plaintext) version is available via anonymous ftp. If you have access to the WWW, you can get the newest version of this document there as well.

Q0.2: Where do I send comments about this document?

Send comments, suggestions etc. via e-mail to Thomas Koenig, Thomas.Koenig@ciw.uni-karlsruhe.de or ig25@dkauni2.bitnet.

Section 1: General Information

Q1.1: What is gnuplot?

Q1.2: How did it come about and why is it called gnuplot?

Q1.3: Does gnuplot have anything to do with the FSF and the GNU project?

Gnuplot is neither written nor maintained by the FSF. It is not covered by the General Public License, either.
However, the FSF has decided to distribute gnuplot as part of the GNU system, because it is useful, redistributable software.

Q1.4: What does gnuplot offer?

+ plotting of two-dimensional functions and data points in many different styles (points, lines, error bars)
+ plotting of three-dimensional data points and surfaces in many different styles (contour plot, mesh)
+ support for complex arithmetic
+ self-defined functions
+ support for a large number of operating systems, graphics file formats and devices
+ extensive on-line help
+ labels for title, axes, data points
+ command line editing and history on most platforms

Q1.5: Is gnuplot suitable for batch processing?

Yes. You can read in files from the command line, or you can redirect your standard input to read from a file. Both data and command files can be generated automatically, from data acquisition programs or whatever else you use.

Q1.6: Can I run gnuplot on my computer?

Gnuplot is available for a number of platforms. These are: Unix (X11 and NeXTSTEP), VAX/VMS, OS/2, MS-DOS, Amiga, MS-Windows, OS-9/68k, Atari ST and the Macintosh. Modifications for NEC PC-9801 are said to exist (where?).

Section 2: Setting it up

Q2.1: What is the current version of gnuplot?

The current version of gnuplot is 3.5, which is a bugfix release over 3.4. Version 3.6 is in beta status. Please note that this is still unstable, and may not compile correctly on your system.

Q2.2: Where can I get gnuplot?

All of the later addresses refer to ftp sites. Please note that it is preferable for you to use the symbolic name, rather than the IP address given in brackets, because that address is much more subject to change. The official distribution site for the gnuplot source is [129.170.16.4, soon to be 129.170.8.11]; the file is called /pub/gnuplot/gnuplot3.5.tar.Z. Official mirrors of that distribution are (for Australia) [130.194.11.18] and (for Europe) irisa.irisa.fr [131.254.254.2].
You can also get it from your friendly neighbourhood comp.sources.misc archive. MS-DOS and MS-Windows binaries are available from

+ oak.oakland.edu (North America) [141.210.10.117] as /Simtel/msdos/plot/gpt35*.zip,
+ garbo.uwasa.fi (Europe) [193.166.120.5] as /pc/plot/gpt35*.zip and
+ archie.au (Australia) [139.130.4.6] as micros/pc/oak/plot/gpt35*.zip.

The files are: gpt35doc.zip, gpt35exe.zip, gpt35src.zip and gpt35win.zip. There is a special MS-DOS version for 386 or better processors; it is available from the official gnuplot sites as DOS34.zip. OS/2 2.x binaries are at ftp-os2.nmsu.edu [128.123.35.151], in /os2/2.x/unix/gnuplt35.zip. Amiga sources and binaries are available from [128.252.135.4] as /pub/aminet/util/gnu/gnuplot-3.5.lha; there are numerous mirrors of this distribution, for example, oes.orst.edu. The NeXTSTEP front end can be found as Gnuplot1.2_bin.tar.Z. A version for OS-9/68K can be found at cabrales.cs.wisc.edu [128.105.36.20] as /pub/OSK/GRAPHICS/gnuplot32x.tar.Z; it includes both X-Windows and non-X-Windows versions. There is a version for the Macintosh which includes binaries for 68000-based Macs with and without FPU and native support for PowerMacs. Versions for the Atari ST and TT, which include some GEM windowing support, are available as gplt35st.zip and gplt35tt.zip. They work best under MiNT. Executable files, plus documentation in Japanese, exist for the X680x0.

People without ftp access can use an ftp-mail server; send a message saying 'help' to bitftp@pucc.bitnet (for BITNET only). For a uuencoded copy of the gnuplot sources (compressed tar file), send this as the body of a message to the ftpmail server:

open
cd pub/gnuplot
mode binary
get gnuplot3.5.tar.Z
quit

If you have some problem, you might need to stick reply-to <your-email-address-here> before all the above. You can also obtain a beta release of gnuplot 3.6.

Q2.3: How do I get gnuplot to compile on my system?
As you would any other installation. Read the files README and README.Install, edit the Makefile according to taste, and run make or whatever is suitable for your operating system. If you get a complaint about a missing file libplot.a or something similar when building gnuplot for X11, remove -DUNIXPLOT from the TERMFLAGS= line, remove -lplot from the DTBS= line and run again. If you are making X11 on a Sun, type 'make x11_sun'. For compiling gnuplot under Irix 5.2 and Irix 5.3, there is a patch in the file lvs.zip in the contrib directory.

Q2.4: What documentation is there, and how do I get it?

The documentation is included in the source distribution. Look at the docs subdirectory, where you'll find

+ a Unix man page, which says how to start gnuplot
+ a help file, which also can be printed as a manual
+ a tutorial on using gnuplot with LaTeX
+ a quick reference summary sheet for TeX only

PostScript copies of the documentation can be ftp'd, in pub/gnuplot, as manual.ps.Z and tutorial.ps.Z. Andy Liaw and Dick Crawford have written a 16-page user's guide. It is available as gptug.tex (also get example.tex from the same directory), gptug.dvi or gptug.ps. At the same site, there's a two-page instruction sheet for the enhpost PostScript driver (see Q4.6) as enhpost.guide.ps and a short guide to gnuplot PostScript files, as gp-ps.doc. A Chinese translation of the gnuplot manual can be found as gnuplot.ps.gz. There is a WWW homepage for gnuplot, which includes the reference manual and a demo. There are two more Chinese documents about gnuplot: a 72-page User's Guide and a 28-page Touring Guide. Both documents are in PostScript format and gzipped.

Section 3: Working with it

Q3.1: How do I get help?

Send your question to the gatewayed mailing list info-gnuplot@dartmouth.edu. If you pose a question there, it is considered good form to solicit e-mail replies and post a summary.

Q3.2: How do I print out my graphs?

Q3.3: How do I include my graphs in <word processor>?
Basically, you save your plot to a file in a format your word processor can understand (using "set term" and "set output", see above), and then you read in the plot from your word processor. There are many mirrors. Check archie (see Q2.2) for an archive site near you.

Q3.4: How do I post-process a gnuplot graph?

This depends on the terminal type you use. You can use the terminal type fig (you may need to recompile gnuplot to enable this terminal type, by putting #define FIG into <term.h>), and use the xfig drawing program to edit the plot afterwards. For PostScript output, you may be able to use the pstotgif script (which calls GhostScript) to convert PostScript into the format of the tgif drawing program. Tgif is also able to save in PostScript format. Both tgif and xfig can be obtained from the X Window contrib distribution (see Q3.3). Another possibility for modifying PostScript output appears to be IslandDraw, a commercial drawing program for UNIX workstations. For Windows, there is another alternative, PageDraw. It can post-process AI (Adobe Illustrator) files, and has a converter from PostScript to AI, and can be downloaded.

Q3.5: How do I change symbol size, line thickness and the like?

Again, this depends on the terminal type. For PostScript, you can edit the generated PostScript file. An overview of what means what in the PostScript files gnuplot generates can be found as gs-ps.doc. A general introduction to PostScript can be found as 11-92.ps.Z.

Q3.6: How do I generate plots in GIF format?

In gnuplot version 3.5, use the pbm terminal and use the PBMPLUS package or other utilities to convert the resulting bitmap (see Q3.3 for how to get the PBMPLUS package). In 3.6, there will be a gif terminal.

Section 4: Wanted features

Q4.0: What's new in gnuplot 3.6?

Here's the WhatsNew file of the current beta release, patchlevel 213.

What's new in 3.6?
Still to do
-----------
- Many terminals to be converted to new layout
- someone has contributed a new hidden-line-removal
- '-' as load filename / command-line param
- key for splot with contour is ugly
- timeseries stuff has got broken (well, non-portable code)
- terminals are no longer allowed to do their own scaling
- Plus quite a few contributed patches that I haven't yet installed (sorry)

in 194
- multiplot for splot

in 188
- os9 port
- set xrange [] reverse writeback
- allow mix of co-ordinate systems within an arrow/label posn
- initial multiplot support - doesn't yet check that terminal is capable, but there is a flags field added to the terminal entry to tell gnuplot about this. Also, suspend() / resume() entry points which are to be called between plots of a multiplot.

in 178
- arbitrary length/number of columns in datafile
- accept double/quad-precision fortran numbers (1.23{dDqQ}4) - but not in scanf format string
- undefined fit parameters start at 1 rather than 1e-30 - more chance of convergence / less chance of a unitary matrix
- WIN32 / Win-NT support
- table output can be read back in for data splot - hence gnuplot can be used to dgrid a datafile and write it out
- set missing 'string' - nominate a token as standing for missing values in datafile - not yet added to documentation
- updates to time-series stuff (so it doesn't break at 2000) - except it has become horribly non-portable :-(
- split graph3d.c into util3d.c and hidden3d.c

in 166
- set bar <size> - a number rather than just small or large
- allow different linetypes for grid at major and minor tics
- a few more set no* commands for consistency
- initial go at implementing tic mirrors and axes for splot - no ztic axis yet (or no zzeroaxis) - tics on axes are not hidden by surface
- attempt to make sin(x) behave as expected when set angle degrees - gives answers if x is complex, but I don't know if they are correct - acos(cos(x)) seems to give x, so at least it's consistent - fix a bug which made acos(cos({0,1})) undefined
- new grass.trm

in release 162/164
- set size [{no}square] x,y - tries to plot with aspect ratio 1 - seems to work great for postscript - please check with your favourite driver - uses relative sizes of tics to determine required size
- posn for key, labels and arrows can be in one of 4 co-ordinate systems: first_axes (default), second_axes (for plot..second), graph (0,0 -> 1,1 = plotting area), screen (0,0 -> 1,1 = whole screen) - arrows needn't have endpoints in same co-ords. See help set label
- via is now a required keyword for fit - fit f(x) 'file' ... via { 'file' | a,b,... } - this is to avoid confusing 'file' with 'using-format-string'
- win32 and 16-bit dos fixes - I can compile with tc++, but get an overlay error at runtime
- new set of documentation programs (I haven't tried them)
- various tweaks to makefile
- changes to pslatex - substitute .ps at _last_ . in filename - accept font size of enclosing document as an option - don't forget to close aux file

in release 151
- linux security patch
- can specify font for labels, etc (postscript only? - I haven't tried this)
- can specify linetype to draw grid / zeroaxes / arrows
- emx terminal driver
- first attempt at pipes for VMS and vector style - needs more work
- l/b/r/t-margin in place of xmargin - more control over size of margins
- incompatible changes to polar mode: t is now the dummy variable, so x is width of plot as expected; tics are not automatically on axes - set {xy}tics axis nomirror; grid is not automatically polar - set grid x [mx] polar [angle]; no numbers on grid - they were always in degrees
- second axes - x2 and y2 are an independent pair of axes, but they inherit ranges from x and y if no second data - there can be problems with this, actually - if x2tics are not shown, x2range is not autoextended to whole number of tics, so same data might not have same range - set x2tics/y2tics/x2label/y2label - set [no]log x2 / y2 - plot [first,] f(x), 'file', ..., second, g(x), ... - can specify grid at any/all of x,y,x2,y2 - see electron.dem
- set border <mask> - 12 bit binary number selects 12 sides of cube around splot
- can specify grid z, to get a grid on back wall of splot
- set mxtics [<interval>|default] | set nomxtics - set mxtics gives auto for logscale, fixed for linear
- binary, index and every keywords to datafiles - every also works with binary files
- can use '-' as datafile for inline data (ends at line with e)
- can use '' to mean reuse previous file
- splot and fit now use datafile module - FIT_SKIP no longer supported - use fit f(x) 'file' every n
- can limit fit range using fit [variable=min:max] f(variable) ...
- set ticscale <major> [<minor>]
- surface is clipped with no hidden line removal - still to do contour and hidden-line surface
- set {x|y|x2|y2} [axis|border] [no]mirror - can put tics on border or axes - mirror controls mirroring of tics on opposite axis - no longer coupled to set tics out setting
- No longer need to specify parametric mode for 3-column data files
- ranges automatically extended to whole number of tic intervals
- doesn't always manage to drop vertical from surface to corner of base - workaround is either specify range or use set border

patchlevel 140
--------------
I've probably missed a lot of features since I'm so used to them. Plus I never bothered with 3.5 so some of these may have been there. Some of these may have made it into the documentation. Here goes:
- fit f(x) 'file' via ...
- read and plot time data (timedat.dem)
- set key [top|bottom|under] [left|right|out] [reverse] [box [<linetype>]]
- set key title 'text'
- Processing of escape sequences in "strings" but not 'strings' - TeX users in particular advised to use ''
- Multiline labels, etc, using "first\nsecond"
- enhpost driver
- call command (load with parameters)
- x error bars. splines. boxes. [some may have been in 3.5]
- pipes for amiga
- the using patch: plot 'file' using spec:spec:... - spec is either column number or (expression in $1, $2, ...)
- new pslatex driver with postscript to aux file
- set pointsize <scale factor>
- on some terminals, doubles in plot...using format string - %lf
- unlimited input line length and expression (action) table
- minor tic-marks (like logscale but also for linear) - also set grid [mx|my]

That's all I can think of for the moment...

Q4.1: Does gnuplot have hidden line removal?

Version 3.5 supports hidden line removal on all platforms except MS-DOS; use the command

set hidden3d

If someone can solve the 64K DGROUP memory problem, gnuplot would support hidden line removal on MS-DOS as well. Version 3.2 supports limited hidden line removal.

Q4.2: Does gnuplot support bar-charts/histograms/boxes?

As of version 3.4, it does; use the style "with boxes" for bar charts. To get filled boxes, you can try a modification by Steve Cumming, available via ftp as box.tar.

Q4.3: Does gnuplot support multiple y-axes on a single plot?

Yes, with two unofficial mods, multiplot.shar and borders.shar.
Also, 3.6 supports this capability.

Q4.4: Can I put multiple plots on a single page?

Yes, with the multiplot.shar mod, or if you are running gnuplot 3.6. If you are using PostScript output, check out mpage.

Q4.5: Can I put both data files and commands into a single file?

This feature is in gnuplot 3.6.

Q4.6: Can I put Greek letters and super/subscripts into my labels?

You might try using the LaTeX terminal type and putting text like \alpha_{3} into it. David Denholm has written a PostScript terminal which allows for super- and subscripts, such as a^x or {/Symbol a}. Ftp to sotona.phys.soton.ac.uk [152.78.192.42] and get enhpost.trm, written by David Denholm and Matt Heffron. To install it, follow the instructions at the top of the file, then recompile. enhpost is also included in gnuplot 3.6.

Q4.7: Can I do 1:1 scaling of axes?

Not easily in 3.5; in 3.6, you can use "set size square".

Q4.8: Can I put tic marks for x and y axes into 3d plots?

In version 3.5, you can; use the "with boxes" option.

Q4.9: Does gnuplot support a driver for <graphics format>?

Q4.10: Can I put different text sizes into my plots?

If you use PostScript output, you can use Dave Denholm's and Matt Heffron's updated PostScript driver, /sotona.phys.soton.ac.uk:/enhpost.trm (see also Q4.6). Else, use 3.6.

Q4.11: How do I modify gnuplot, and apply 'patches'?

Q4.12: How do I skip data points?

By specifying ? as a data value, as in

1 2
2 3
3 ?
4 5

Q4.13: How do I plot every nth point?

You can apply the patch point_skip from the contrib section (see Q5.3), or, assuming you have awk installed on your system, you can use the following line:

gnuplot> plot "< awk '{if(NR%5==0)print}' file.dat"

which plots every 5th line, and

gnuplot> plot "< awk '$0 !~ /^#/ {if(NR%40==0)print $1, $4}' file.dat"

plots every 40th line while skipping commented lines.

Section 5: Miscellaneous

Q5.1: I've found a bug, what do I do?
First, try to see whether it actually is a bug, or whether it is a feature which may be turned off by some obscure set command. Next, see whether you have an old version of gnuplot; if you do, chances are the bug has been fixed in a newer release. When reporting a bug, include the version of gnuplot, terminal driver, operating system, an exact description of the bug, and input which can reproduce the bug. Also, any context diffs should be referenced against the latest official version of gnuplot if at all possible.

Q5.2: Can I use gnuplot routines for my own programs?

Yes. John Campbell <jdc@nauvax.ucc.nau.edu> has written gplotlib, a version of gnuplot as C subroutines callable from a C program. This is available as gplotlib.tar.Z in the directory /pub/gplotlib.tar.Z. This library has been updated to be compatible with version 3.5.

Q5.3: What extensions have people made to gnuplot? Where can I get them?

The extensions archive contains the following files:

Point Skips and Data Filtering

- Point skip: Instead of just having two params following the style param, there are now 4:
  1: line_type
  2: point_type
  3: point_skip - gives the number of data samples per plotted point
  4: point_offs - gives the sample number on which to plot the first point
  Thus points are plotted only for the samples n satisfying n = point_skip*i + point_offs for some non-negative integer i.
  From: pixar!sun!prony.Colorado.EDU!clarkmp@ucbvax.berkeley.edu (Michael Clark)

- Point skip with awk: With UNIX,
  gnuplot> plot "< awk '{if(NR%5==0)print $0}' file.dat"
  From: James Darrell McCauley, mccauley@ecn.purdue.edu

- New Xlib mods. From: gregg hanna (gregor@kafka.saic.com)

Vectors and Arrows

- Program to convert lines to vectors: turns line segments into line segments with a half-arrow at the head; by uncommenting two lines, the arrowhead will be a triangle. Optional arguments: size angle, where size is a fraction of each vector's magnitude and angle is in degrees. All data is taken from standard input, and output goes to standard output. Typical invocation:
  arrow 0.2 15 <vector.lin >vector.heads
  From: andrew@jarthur.claremont.edu (Andrew M. Ross)

- Vect2gp, an awk script to make a gnuplot command script to draw a vector field map. From: hiro@ice3.ori.u-tokyo.ac.jp (Yasu-Hiro YAMAZAKI)

- GNUPLOT to SIPP: a "far from perfect" converter that takes gnuplot table output and splits it into polygons, then calls sipp to render it. You can get sipp from isy.liu.se:/pub/sipp or ask archie. From: chammer@POST.uni-bielefeld.de (Carsten Hammer)

Histograms and Pie Charts

- Histogram C program: a filter that calculates a histogram from a sequence of numbers and prints the output in such a format that gnuplot can plot the histogram by the command sequence
  !histogram < datain > tmp; plot "tmp" with impulses
  From: mustafa@seas.smu.edu (Mustafa Kocaturk)

- HG is an automatic histogram generator: it reads a column of data from an input file and emits a [log] histogram. ks does ks or chi^2 tests on a set of input arrays; you need the "Numerical Recipes in C" library somewhere on your system to link this one. From: Steve Cumming, stevec@geog.ubc.ca

- Piechart C program: formats data for display as a piechart. From: mccauley@ecn.purdue.edu (James Darrell McCauley)

Interprocess Communications

- Notes on Windows Hooks. From: Maurice Castro, maurice@bruce.cs.monash.edu.au

- Named Pipes Example. From: dtaber@deathstar.risc.rockwell.com (Don Taber)

- PipeLib: sets up to 20 programs going (like gnuplot), then allows you to send to them as if the program were typing on the command line. A brief set of docs is included after the source code, in LaTeX format. There is no facility to watch the output of a program. From: ssclift@neumann.uwaterloo.ca (Simon Clift)

- Popen example from lsqrfit: a function which sends a command to gnuplot; gnuplot will execute the command just as if you typed it at the gnuplot command line. This example is adapted from a least-squares fitting program located in pub/os2/2_x/unix/lsqrft14.zip; complete source is included. From: michael@krypton.mit.edu (Michael Courtney)

Multiple logical plots on a single page

- Gawk script for multiple encapsulated PostScript plots on a page: slightly more flexible than mpage, because it changes the aspect ratio of the plots; mpage according to the documentation only allows 1, 2, 4, or 8 plots on a page. This script works for Unix with encapsulated PostScript (eps) output. It should work with gawk or nawk, although it has only been tested with gawk. (Gawk is GNU's version of awk and is available from prep.ai.mit.edu.) You just specify how many rows and columns of plots you want and it does the rest. For example,
  gnuplot_eps rows=3 cols=2 *.eps | lpr
  will print all eps files in your current directory with 6 on a page. Also, see the comments in the file. From: holt@goethe.cns.caltech.edu (Gary Holt)

- Sed script for multiple encapsulated PostScript plots on a page: for when you have multiple PostScript files, each containing a single plot. From: wgchoe@scoupe.postech.ac.kr (Choe Won Gyu)

- Massive patch which adds multiplot capability to all devices and a lot more. It is offered in this form because the original multiplot.pat did not patch correctly into gnuplot version 3.5. This mod also adds border options, financial plots, multiple line titles and other sundry items. Use at your own risk. Look at the top of makefile.r for a more complete list of changes. From: Alex Woo, woo@playfair.stanford.edu

- lvs.zip contains miscellaneous modifications, which include:
  + Label positioning using either plot or device-relative coordinates
  + Portability to Irix-5.2 and Irix-5.3
  + The "thru" keyword has been extended to include "thrux" for the x coordinate
  + Capability to read an ordinary Fortran-style unformatted file
  + A Perl script for better handling of eps
  + Modifications to docs/doc2info to generate "next", "prev", and "up" data for each node
  + Changes in the documentation to reflect the above

Miscellaneous Mods

- Congp3d3 is a preprocessor to draw contour plots on irregular regions. From: mrb2@nrc.gov (Margaret Rose Byrne)
- Sockpipe is a socket-based pipe needed for the Stardent OS. From: Mike Hallesy, Stardent Computer Product Support, hal@stardent.com
- Time Series is a patch to add multiline titles and labels, time-series x and y data and tic marks, automatic resizing of plots and much more. From: Hans Olav Eggestad, olav@jordforsk.nlh.no

Other Operating Systems

- Macintosh port of version 3.2. From: Noboru Yamamoto, sun!kekvax.kek.jp!YAMAMOTO@pixar.com
- Macintosh port of version 3.5. From: laval@londres.cma.fr (Philippe LAVAL)
- OS-9 port of version 3.2

Q5.4: Can I do heavy-duty data processing with gnuplot?

[132.206.9.13], and from the main Linux server, tsx-11.mit.edu [18.172.1.2]. A fitting package that goes together with gnuplot is called gnufit and is available from the official gnuplot sites, as the files gnufit12.info, gnufit12.tar.gz (source) and gft12dos.zip (MS-DOS). It has been merged into gnuplot 3.6. Michael Courtney has written a program called lsqrft, which uses the Levenberg-Marquardt algorithm for fitting data to a function. It is available as /pub/os2/2_x/unix/lsqrft13.zip; sources, which should compile on Unix, and executables for MS-DOS and OS/2 2.x are included. There is an interface to the OS/2 presentation manager.
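As an aside, the simplest case such fitting tools handle — a straight-line fit y = a + b*x — has a closed-form least-squares solution, sketched here in Python for illustration (the helper name linear_fit is my own, not from any of the packages above):

```python
# Minimal linear least-squares fit, y = a + b*x, using the closed-form
# normal equations. This is the simplest instance of what gnufit/lsqrft
# solve iteratively for general nonlinear models.

def linear_fit(xs, ys):
    n = len(xs)
    sx = sum(xs)
    sy = sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope
    a = (sy - b * sx) / n                          # intercept
    return a, b

if __name__ == "__main__":
    # Data lying exactly on y = 1 + 2x
    xs = [0.0, 1.0, 2.0, 3.0]
    ys = [1.0, 3.0, 5.0, 7.0]
    a, b = linear_fit(xs, ys)
    print(a, b)
```

The fitted line could then be overlaid on the data from within gnuplot, e.g. plot "file.dat", a + b*x.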
You might also want to look at the applications developed by the Software Tools Group (STG) at the National Center for Supercomputing Applications. Ftp to [141.142.20.50] and get the file README.BROCHURE for more information. You can also try pgperl, an integration of the PGPLOT plotting package with Perl 5. Octave is available via anonymous ftp from bevo.che.wisc.edu in the directory /pub/octave.

Q5.5: I have ported gnuplot to another system, or patched it. What do I do?

If your patch is small, mail it to bug-gnuplot@dartmouth.edu; otherwise, upload your modifications to the archive. Please drop a note to David.Kotz@dartmouth.edu, the maintainer of the gnuplot subdirectory there, plus a note to bug-gnuplot@dartmouth.edu.

Q5.6: I want to help in developing gnuplot 3.6. What can I do?

Join the gnuplot beta test mailing list by sending a mail containing the line

subscribe info-gnuplot-beta

in the body (not the subject) of the mail to Majordomo@Dartmouth.EDU.

Section 6: Making life easier

Q6.1: How do I plot two functions in non-overlapping regions?

Use a parametric plot. An example:

set parametric
a = 1
b = 3
c = 2
d = 4
x1(t) = a+(b-a)*t
x2(t) = c+(d-c)*t
f1(x) = sin(x)
f2(x) = x**2/8
plot [t=0:1] x1(t),f1(x1(t)) title "f1", x2(t),f2(x2(t)) title "f2"

Q6.2: How do I run my data through a filter before plotting?

If your system supports the popen() function, as Unix does, you should be able to run the output through another process, for example a short awk program, such as

gnuplot> plot "< awk ' { print $1, $3/$2 } ' file.in"

Unfortunately, in 3.2, there is a rather short limitation on the maximum argument length, so your command line may be truncated (usually, this will mean that awk cannot find the filename). Also, you may need to escape the $ characters in your awk programs. As of version 3.4, gnuplot includes the thru keyword for the plot command, for running data files through a gnuplot-defined function.
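On systems without awk, the same kind of preprocessing can be done in any scripting language and the result written to a temporary file for plotting. A rough Python equivalent of the awk one-liner above (the function name filter_columns is illustrative, not part of gnuplot):

```python
# Equivalent of: awk '{ print $1, $3/$2 }' file.in
# Takes whitespace-separated columns and returns (col1, col3/col2) pairs,
# which can be written to a file and plotted with: plot "file.out"

def filter_columns(lines):
    out = []
    for line in lines:
        parts = line.split()
        # Skip short lines and comment lines
        if len(parts) < 3 or line.lstrip().startswith("#"):
            continue
        x, y1, y2 = (float(p) for p in parts[:3])
        out.append((x, y2 / y1))
    return out

if __name__ == "__main__":
    sample = ["1 2 4", "2 4 12", "# comment", "3 5 20"]
    for x, y in filter_columns(sample):
        print(x, y)
```

This sidesteps both the command-line length limit and the $-escaping issue mentioned above, at the cost of an extra temporary file.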
You can also get divhack.patch from sotona.phys.soton.ac.uk [152.78.192.42] via anonymous ftp. It allows expressions of the kind

gnuplot> plot "datafile" using A:B:C

where A, B, C, ... are now either a column number, as usual, or an arbitrary expression enclosed in ()'s, using $1, $2, etc. to access the data columns.

Q6.3: How do I make it easier to use gnuplot with LaTeX?

There is a set of LaTeX macros and shell scripts that are meant to make your life easier when using gnuplot with LaTeX. This package can be found on [129.170.16.54, soon to be 129.170.8.11].

Q6.4: How do I save and restore my settings?

Use the "save" and "load" commands for this; see "help save" and "help load" for details.

Q6.5: How do I plot lines (not grids) using splot?

Q6.6: How do I plot a function f(x,y) which is bounded by other functions in the x-y plane?

An example:

f(x,y) = x**2 + y**2
x(u) = 3*u
yu(x) = x**2
yl(x) = -x**2
set parametric
set cont
splot [0:1] [0:1] u, yl(x(u))+(yu(x(u))-yl(x(u)))*v, \
      f(x(u), (yu(x(u))-yl(x(u)))*v)

Q6.7: How do I get rid of <feature in a plot>?

Usually, there is a set command to do this; do a

gnuplot> ?set no

for a short overview.

Q6.8: How do I call gnuplot from my own programs?

Here's code which works for a UNIX system, using (efficient) named pipes.
#include <sys/types.h>
#include <sys/stat.h>
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include <unistd.h>

#define PANIC(a) do { \
        perror(a); \
        if (temp_name) unlink(temp_name); \
        exit(1); \
    } while (0)

int main()
{
    FILE *command, *data;
    char *temp_name = NULL;
    double a, b;
    int i;

    if ((temp_name = tmpnam((char *) 0)) == 0)
        PANIC("tmpnam failed");
    if (mkfifo(temp_name, S_IRUSR | S_IWUSR) != 0)
        PANIC("mkfifo failed");

    command = popen("gnuplot", "w");
    fprintf(command, "plot \"%s\" with lines\n", temp_name);
    fflush(command);

    data = fopen(temp_name, "w");
    for (i = 0; i < 20; i++) {
        a = i / 10.0;
        b = sin(a);
        fprintf(data, "%f %f\n", a, b);
    }
    fclose(data);

    fprintf(stderr, "press enter to continue...");
    fflush(stderr);
    getchar();

    fprintf(command, "plot \"%s\" with lines\n", temp_name);
    fflush(command);
    data = fopen(temp_name, "w");
    for (i = 0; i < 20; i++) {
        a = i / 10.0;
        b = cos(a);
        fprintf(data, "%f %f\n", a, b);
    }
    fclose(data);

    fprintf(stderr, "press enter to continue...");
    fflush(stderr);
    getchar();

    pclose(command);
    unlink(temp_name);
    return 0;
}

Here's code for OS/2, again using named pipes; I'm unable to check this out myself. This code is care of fearick@physci.uct.ac.za (Roger Fearick).

#include <stdio.h>
#define INCL_DOS
#define INCL_DOSPROCESS
#define INCL_DOSNMPIPES
#include <os2.h>

main()
{
    HPIPE hpipe;
    FILE *hfile, *hgnu;

    /* create a named pipe. Use NP_WAIT so that DosConnect...
       blocks until client (gnuplot) opens, and client reads are
       blocked until data is available */
    DosCreateNPipe("\\pipe\\gtemp", &hpipe, NP_ACCESS_OUTBOUND,
                   NP_WAIT | NP_TYPE_BYTE | 1, 256, 256, -1);
    hfile = fdopen(hpipe, "w");              /* use stream i/o */

    /* start gnuplot; use unbuffered writes so we don't need to
       flush buffer after a command */
    hgnu = popen("gnuplot", "w");
    setvbuf(hgnu, NULL, _IONBF, 0);

    /* plot a set of data */
    fprintf(hgnu, "plot '/pipe/gtemp'\n");   /* issue plot command */
    DosConnectNPipe(hpipe);                  /* wait until 'file' opened */
    fprintf(hfile, "1 1\n");                 /* write data to 'file' */
    fprintf(hfile, "2 2\n");
    fprintf(hfile, "3 3\n");
    fprintf(hfile, "4 4\n");
    fflush(hfile);                           /* flush buffer forces read */
    DosSleep(500);                           /* allow gnuplot to catch up */
    DosDisConnectNPipe(hpipe);               /* disconnect this session */
    fprintf(hgnu, "pause -1\n");             /* admire plot */

    /* plot another set of data */
    fprintf(hgnu, "plot '/pipe/gtemp'\n");
    DosConnectNPipe(hpipe);
    fprintf(hfile, "1 4\n");
    fprintf(hfile, "2 3\n");
    fprintf(hfile, "3 2\n");
    fprintf(hfile, "4 1\n");
    fflush(hfile);
    DosSleep(500);
    DosDisConnectNPipe(hpipe);
    fprintf(hgnu, "pause -1\n");

    DosClose(hpipe);
    pclose(hgnu);
}

The above code works for gnuplot 3.5. In gnuplot 3.6, this can be greatly simplified, since data can be fed inline, as in

plot '-' w l
1 1
2 3
3 4
e

Section 7: Known problems

Q7.1: Gnuplot is not plotting any points under X11! How come?

Very probably, you still are using an old version of gnuplot_x11. Remove that, then do a full installation. On VMS, you need to make several symbols:

$ gnuplot_x11 :== $disk:[directory]gnuplot_x11
$ gnuplot :== $disk:[directory]gnuplot.exe
$ def/job GNUPLOT$HELP disk:[directory]gnuplot.hlb

Then run gnuplot from your command line, and use

gnuplot> set term x11

Q7.2: My isoline data generated by a Fortran program is not handled correctly. What can I do?
One known cause for this is the use of list-directed output (as in WRITE(10,*)) for generating blank lines. Fortran uses ASA carriage control characters, and for list-directed output this results in a space being output before the newline. Gnuplot does not like this. The solution is to generate blank lines using formatted output, as in WRITE(10,'()'). If you use carriage return files in VMS Fortran, you may have to open the file with OPEN(...,CARRIAGECONTROL='DTST') or convert it using the DECUS utility ATTRIB.EXE:

VMS> ATTRIB/RATTRIB=IMPDTED FOR010.DAT

Q7.3: Why does gnuplot ignore my very small numbers?

See

gnuplot> help set zero

Q7.4: Gnuplot is plotting nothing when run via gnuplot <filename>! What can I do?

Put a pause -1 after the plot command in the file.

Q7.5: My formulas are giving me nonsense results! What's going on?

Q7.6: My Linux gnuplot complains about a missing gnuplot_x11. What is wrong?

The binary gnuplot distribution from sunsite.unc.edu and its mirrors in Linux/apps/math/gplotbin.tgz is missing one executable that is necessary to access the x11 terminal. Please install gnuplot from another Linux distribution, e.g. Slackware.

Q7.7: set output 'filename' isn't outputting everything it should!

You need to flush the output with a closing 'set output'.

Section 8: Credits

Axel Eble and Jutta Zimmermann helped with the conversion to HTML.

Thomas Koenig, ig25@rz.uni-karlsruhe.de, 1994-03-28
--
Thomas Koenig, Thomas.Koenig@ciw.uni-karlsruhe.de, ig25@dkauni2.bitnet.
The joy of engineering is to find a straight line on a double logarithmic diagram.
http://www.faqs.org/faqs/graphics/gnuplot-faq/
Visualize Raw data

import os.path as op
import numpy as np

import mne

data_path = op.join(mne.datasets.sample.data_path(), 'MEG', 'sample')
raw = mne.io.read_raw_fif(op.join(data_path, 'sample_audvis_raw.fif'),
                          preload=True)
raw.set_eeg_reference('average', projection=True)  # set EEG average reference

Out:

Current compensation grade : 0
Reading 0 ... 166799 = 0.000 ... 277.714 secs...
Adding average EEG reference projection.
1 projection items deactivated
Average reference projection was added, but has not been applied yet. Use the apply_proj method to apply it.

The visualization module (mne.viz) contains all the plotting functions that work in combination with MNE data structures. Usually the easiest way to use them is to call a method of the data container. All of the plotting method names start with plot. If you're using the IPython console, you can just write raw.plot and ask the interpreter for suggestions with the tab key.

To visually inspect your raw data, you can use the python equivalent of mne_browse_raw.

raw.plot(block=True, lowpass=40)

The channels are color coded by channel type. Generally MEG channels are colored in different shades of blue, whereas EEG channels are black. The scrollbar on the right side of the browser window also tells us that two of the channels are marked as bad. Bad channels are color coded gray. By clicking the lines or channel names on the left, you can mark or unmark a bad channel interactively. You can use +/- keys to adjust the scale (also = works for magnifying the data). Note that the initial scaling factors can be set with the parameter scalings. If you don't know the scaling factors for your channels, you can set them automatically by passing scalings='auto'. With pageup/pagedown and home/end keys you can adjust the amount of data viewed at once.

Drawing annotations

You can enter annotation mode by pressing the 'a' key. In annotation mode you can mark segments of data (and modify existing annotations) with the left mouse button.
You can use the description of any existing annotation or create a new description by typing when the annotation dialog is active. Notice that a description starting with the keyword 'bad' means that the segment will be discarded when epoching the data. Existing annotations can be deleted with the right mouse button. Annotation mode is exited by pressing 'a' again or closing the annotation window. See also mne.Annotations and Marking bad raw segments with annotations. To see all the interactive features, hit the ? key or click help in the lower left corner of the browser window.

Warning: Annotations are modified in-place immediately at run-time. Deleted annotations cannot be retrieved after deletion.

The channels are sorted by channel type by default. You can use the group_by parameter of raw.plot to group the channels in a different way. group_by='selection' uses the same channel groups as MNE-C's mne_browse_raw (see Selection). The selections are defined in mne-python/mne/data/mne_analyze.sel and by modifying the channels there, you can define your own selection groups. Notice that this also affects the selections returned by mne.read_selection(). By default the selections only work for Neuromag data, but group_by='position' tries to mimic this behavior for any data with sensor positions available. The channels are grouped by sensor positions into 8 evenly sized regions. Notice that for this to work effectively, all the data channels in the channel array must be present. The order parameter allows you to customize the order and select a subset of channels for plotting (picks). Here we use the butterfly mode and group the channels by position. To toggle between regular and butterfly modes, press the 'b' key when the plotter window is active. Notice that group_by also affects the channel groupings in butterfly mode.

raw.plot(butterfly=True, group_by='position')
events = mne.read_events(op.join(data_path, 'sample_audvis_raw-eve.fif'))
event_id = {'A/L': 1, 'A/R': 2, 'V/L': 3, 'V/R': 4, 'S': 5, 'B': 32}
raw.plot(butterfly=True, events=events, event_id=event_id)

We can check where the channels reside with plot_sensors. Notice that this method (along with many other MNE plotting functions) is callable using any MNE data container where the channel information is available.

raw.plot_sensors(kind='3d', ch_type='mag', ch_groups='position')

projs = mne.read_proj(op.join(data_path, 'sample_audvis_eog-proj.fif'))
raw.add_proj(projs)
raw.plot_projs_topomap()

Out:

Read a total of 6 projection items:
    EOG-planar-998--0.200-0.200-PCA-01 (1 x 203)  idle
    EOG-planar-998--0.200-0.200-PCA-02 (1 x 203)  idle
    EOG-axial-998--0.200-0.200-PCA-01 (1 x 102)  idle
    EOG-axial-998--0.200-0.200-PCA-02 (1 x 102)  idle
    EOG-eeg-998--0.200-0.200-PCA-01 (1 x 59)  idle
    EOG-eeg-998--0.200-0.200-PCA-02 (1 x 59)  idle
6 projection items deactivated

Note that the projections in raw.info['projs'] can be visualized using raw.plot_projs_topomap or by calling proj.plot_topomap; more examples can be found in Read and visualize projections (SSP and other).

projs[0].plot_topomap()

raw.plot()

Out:

Effective window size : 3.410 (s)
Effective window size : 3.410 (s)
Effective window size : 3.410 (s)

layout = mne.channels.read_layout('Vectorview-mag')
layout.plot()
raw.plot_psd_topo(tmax=30., fmin=5., fmax=60., n_fft=1024, layout=layout)

Out:

Effective window size : 1.705 (s)

Total running time of the script: ( 0 minutes 23.092 seconds)

Estimated memory usage: 1501 MB
https://mne.tools/0.18/auto_tutorials/raw/plot_visualize_raw.html
First of all, thanks to Mediatek and Instructables for sending me one LinkIt ONE. It's really a great board and I've had a lot of fun playing with it. Now the project!!! The main idea is to make this little toy dinosaur watch over your chocolates.

Step 1: Components Needed

For this project we'll need the following components. There are some explanations about what they do and some links with more information.

- Mediatek LinkIt ONE. The heart of the project. It will manage all the other components. It will wait for the change of state of the photoresistor and activate the motor inside the dinosaur through the L298. Here you can read how to set up your LinkIt ONE, or you can follow the steps from another Instructable.
- Keyes photoresistor. It will detect whether the chocolate bar is in place or not by detecting the amount of light.
- Keyes L298 motor driver. It allows us to start, stop and change the direction of the motor. A little battery to power the dinosaur is also needed.
- A speaker and a power bank to power it.
- And finally a chocolate bar, or something big enough to sit on top of the photoresistor and cover it.

Step 2: Make the Connections

First, build your dinosaur solar toy and remove the solar panel. After that you can connect the toy to a little battery and you will see how it moves. Later I will explain how to use the L298n motor driver to control it. In fact, the connections are very easy. For the dinosaur growl sound I just downloaded a .wav file, set the little switch on the LinkIt ONE to MS (Mass Storage) instead of UART, and copied it to the drive that appears on Windows when you plug the board in. That's it. Of course, don't forget to connect the speakers and their battery :P Connect the photoresistor to the A0 analog input as in the LinkIt ONE Light Sensor tutorial.
Connect the IN1 and IN2 from the L298n motor drive to the D8 and D9 from the LinkIt ONE and the ENA to the 5v. I don't know if it's a really good idea, because if you see the documentation of the L298n the ENA (Enable pin) allows you to control the speed of the motor with a PWM (Pulse With Modulation) signal, but works better than connecting directly to D3. Done !!! Not big deal if you have already been playing with these things. Next. Let's code. Step 3: The Code This code is very simple. Just three steps. First a few setup. #include < LAudio.h > // These pins will control the L298 int IN1=8; int IN2=9; //int ENA=3; // It seems there's not supply power enough so I connect directly to 5V and it works better. void setup() { LAudio.begin(); pinMode(IN1,OUTPUT); pinMode(IN2,OUTPUT); } //Then, in the main loop, we are reading the value of the photoresistor constantly and waiting for "light". //When it comes then play the sound and start the dinosaur movement. void loop() { int sensorValue = analogRead(A0); if (sensorValue<300) { LAudio.playFile( storageFlash,(char*)"dinosaur.wav"); LAudio.setVolume(6); //analogWrite(ENA, 200); // motor speed digitalWrite(IN1,LOW); // rotate forward digitalWrite(IN2,HIGH); delay(2000); digitalWrite(IN1,HIGH); // rotate reverse digitalWrite(IN2,LOW); delay(2000); }else // Finally stop the dinosaur if there is no "light" in the sensor. { digitalWrite(IN1,LOW); // stop motor digitalWrite(IN2,LOW); delay(2000); } } // Just upload to the LinkIt ONE and It's done !!! Whats next? Maybe connect it to the wifi and take the control of the dinosaur. There are some instructables here that sure will help. Discussions 4 years ago I love it. This is one of the best Linkitone projects that I have seen.
https://www.instructables.com/id/Chocolate-bar-dinosaur-watcher/
Last week saw London's OSGi Community Event, held in conjunction with JAX London. The conference presentations covered a wide range of environments, from Java EE migrations and cloud computing down to embedded devices and Android.

Enterprise Java on OSGi

There has been a big upswing in using OSGi on the server side in recent years; not only is OSGi used in the implementation of most Java EE platforms, but the application model is seeping through to run on OSGi runtimes directly. One of the progenitors of this is the release of the Enterprise Specification, which maps some common Java EE services to OSGi services (JTA, JPA, JNDI etc.). This has resulted in commonality between platform providers, with Apache Aries and Eclipse Gemini (used by Eclipse Virgo, formerly Spring DM Server) providing a migration path for those developing Java EE applications. A number of tutorials and presentations were aimed squarely at this audience, such as Ian Robinson's Apache Aries talk and Glyn Normington's Eclipse Virgo update. There were also real-life stories about migrating towards OSGi runtimes, with some advice for those wishing to move. First was Katya Todorova from SAP, who discussed migrating the NetWeaver platform to OSGi. The initial approach – try to wrap everything and hide OSGi from the higher levels – didn't work out, largely because they were trying to crowbar existing ways of working into the way OSGi is organised. The second attempt was much more successful: use OSGi as it is intended, and then provide OSGi services to interact with the components. (A long-held belief, expressed by Peter Kriens, is that OSGi's μServices are the most important part of the specification, more so than the modularity that most focus on.) Gerd Kachel talked about moving an application from JBoss to OSGi.
His observation was that a well-structured project (with multiple components) was often easier to migrate to OSGi than a "big ball of mud", because the work of splitting apart the modules has already been done. Once you move to OSGi, however, this modularity is enforced at both compile time and run time. In Gerd's case, his application was already segregated using MBean services, which lent itself to a simple mapping to OSGi services: attributes become service properties, methods become methods on the service interface, and dependencies become related services. Both Katya and Gerd summarised their experiences of moving to OSGi as follows:

- It takes around 1 month for a developer to become proficient with OSGi.
- Given an existing module, it can be converted into an OSGi service in around 1 day.
- If you fight the OSGi model, you will lose. If you work with it, you will succeed.
- If your application is well structured, there's an easier mapping to bundles than if it's not.
- If you are using MBeans, there's a simple mapping strategy to OSGi services.
- If you use classloaders in your application already, migrating to OSGi and fighting its classloaders can be problematic.
- If you use dynamic class loading (Class.forName), those uses often need to be remediated, but they can map to OSGi services.
- If you are using an IoC model already (Guice, Spring etc.), migrating is usually a lot easier.

At the other end

At the other end of the OSGi spectrum lie the embedded and micro devices. Several presentations focussed on OSGi in industrial or embedded applications, such as Bernhard Dorminger's Experiences with OSGi in Industrial Applications, Dimitar Valtchev's OSGi for Home Automation Systems and Takefumi Yamazaki's i-House experiments on OSGi. These use smaller embedded systems, such as ProSyst's mBS runtime. The mobile space was covered with a couple of talks on OSGi and Android by Andy Piper, and OSGi ME by Andre Bottaro.
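Gerd's MBean-to-service mapping can be illustrated in plain Java. This is a sketch only: the interface, class and property names are invented, and the actual OSGi registration call (BundleContext.registerService) is shown as a comment because it needs an OSGi framework to run.

```java
import java.util.Dictionary;
import java.util.Hashtable;

// Hypothetical service interface: the former MBean's operations become
// methods on a plain Java interface.
interface FeedService {
    String fetch(String id);
}

// The former MBean implementation becomes an ordinary POJO.
class FeedServiceImpl implements FeedService {
    public String fetch(String id) {
        return "feed:" + id;
    }
}

public class MBeanMappingSketch {
    public static void main(String[] args) {
        // The former MBean attributes become OSGi service properties.
        Dictionary<String, Object> props = new Hashtable<>();
        props.put("feed.refresh.seconds", 60);

        FeedService service = new FeedServiceImpl();
        // Inside a real bundle activator this would be:
        // context.registerService(FeedService.class, service, props);
        System.out.println(service.fetch("42"));
        System.out.println(props.get("feed.refresh.seconds"));
    }
}
```

The point of the mapping is that nothing in the service itself depends on JMX or OSGi; only the registration glue changes.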
These were also focussed on low-powered devices, but more formally on the mobile space. Further discussions, prompted also by the Apache Harmony project's modularity, led to informal proposals for a number of profiles smaller than J2SE-1.5, where it would be possible to embellish lesser profiles (like Foundation-1.0) with packages from newer runtimes (like java.beans). Since OSGi implicitly doesn't import packages from the java.* namespace, this can't be done easily with Import-Package – but it may be possible with the newer OSGi 4.3 Require-Capability header.

Keynotes and Updates

Jim Colson presented The Long and Winding Road, which gave an historical account of the evolution of OSGi, from its inception in 1999 as JSR 8, through the adoption of OSGi in Eclipse 3.0, and on to its current state in the enterprise. Whilst those involved with Eclipse and OSGi in those early days found it an interesting walk down memory lane, it shows how far OSGi adoption has come in the last five years. Now, every Java EE vendor bases their runtime on OSGi (well, except for JBoss, who are writing their own OSGi runtime). Jim also invited Tim Ellison from the Apache Harmony project to present the state of play. With 99% of the Java 5 API complete, and 96% of the Java 6 API complete (and 100% uncertified), the Harmony runtime is able to run most applications, including Eclipse-based RCP applications and web servers. Importantly, unlike the all-in-one Oracle JDK, the Harmony xDK can be pared down to a minimum of 10MB of class libraries and VM. These are packaged into several groups, known as Harmony Select, Harmony Core, Harmony More and Harmony Out. Only Harmony Out includes any UI code (and even that is unnecessary for SWT's use), with ever-growing sets of dependencies.
James Governor gave an energetic keynote on the significance of OSGi (the general atmosphere of which is reflected in his blog post Java the Unipolar Moment), whilst Peter Kriens gave a talk on the future of OSGi, covering some of the points raised in a previous bundle.update. Further information on specific talks will be posted subsequently.
https://www.infoq.com/news/2010/10/osgice
Test.QuickCheck.Parallel

Description

A parallel batch driver for running QuickCheck on threaded or SMP systems. See the Example.hs file for a complete overview.

Synopsis

module Test.QuickCheck

pRun :: Int -> Int -> [Test] -> IO ()

Run a list of QuickCheck properties in parallel chunks, using n Haskell threads (first argument), and test to a depth of d (second argument). Compile your application with '-threaded' and run with the SMP runtime's '-N4' (or however many OS threads you want to donate) for best results.

    import Test.QuickCheck.Parallel

    main = do
        n <- getArgs >>= readIO . head
        pRun n 1000 [ ("sort1", pDet prop_sort1) ]

This will run n threads over the property list, to a depth of 1000.

pDet :: Testable a => a -> Int -> IO String

Wrap a property, and run it on a deterministic set of data.
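Putting the pieces together, a complete driver might look like the following sketch. The property is invented for illustration (the docs only name prop_sort1), and it assumes the pqc package providing Test.QuickCheck.Parallel is installed:

```haskell
import System.Environment (getArgs)
import Data.List (sort)
import Test.QuickCheck.Parallel

-- An example property: sorting is idempotent.
prop_sort1 :: [Int] -> Bool
prop_sort1 xs = sort (sort xs) == sort xs

main :: IO ()
main = do
    n <- getArgs >>= readIO . head             -- number of Haskell threads
    pRun n 1000 [ ("sort1", pDet prop_sort1) ]
```

Build with `ghc -threaded` and run with, for example, `./example 4 +RTS -N4` to donate four OS threads to the run.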
http://hackage.haskell.org/package/pqc-0.2/docs/Test-QuickCheck-Parallel.html
Apollo-Angular 1.2 - using GraphQL in your apps just got a whole lot easier! Check what's new in Apollo Angular and how to get the full benefits of using Angular + GraphQL + TypeScript combined, thanks to GraphQL Code Generator. We are very excited to announce a new version of Apollo Angular that dramatically improves and simplifies the usage of GraphQL with Angular. This version also adds production- and scale-related features that our large enterprise and production users had been asking for.

TL;DR
- Code generation for Apollo Angular
- Query, Mutation, Subscription as Angular services
- Apollo Angular Boost
- Testing tools

Introducing Query, Mutation and Subscription as Angular services

Through almost two years of using GraphQL in Angular we gained a lot of experience and learned how people use the library. With the current API, having both query and watchQuery methods confused a lot of developers. For people who have used Apollo for a long time the difference is obvious, but we often get asked about it, and many newcomers are surprised. So we decided to add a new approach to working with GraphQL in Angular.

    import { Injectable } from '@angular/core';
    import { Query } from 'apollo-angular';
    import gql from 'graphql-tag';

    @Injectable({...})
    export class FeedGQL extends Query {
      document = gql`
        query Feed {
          posts {
            id
            title
          }
        }
      `;
    }

There are now three new, simpler APIs: Query, Mutation and Subscription. Each of them allows you to define the shape of a result and its variables. The only thing you need to do is to set the document property. That's it: you then use it as a regular Angular service.

    import { Component } from '@angular/core';
    import { FeedGQL } from './feed-gql.ts';

    @Component({ ... })
    export class FeedComponent {
      constructor(feedGQL: FeedGQL) {
        feedGQL.watch()
          .valueChanges
          .subscribe(result => {
            // ...
          });
      }
    }

In our opinion the new API is more intuitive, and documents now feel like first-class citizens.
But it also opens up the doors for something wayyyy cooler!

Taking it to the next level

As an Angular developer, you already understand how much power TypeScript adds to your development — the Angular community took those capabilities to the next level with code generation, through things like schematics. The GraphQL community also took the concept of static typing into new places — over the API, and managing data automatically at runtime with the query language. While using GraphQL, TypeScript and Angular, and maintaining apollo-angular over the past two years, we kept thinking about how to bring these technologies closer together to create something more powerful than the sum of its parts.

GraphQL Code Generator for Apollo Angular

We are pleased to announce a new set of tools that takes the GraphQL schema from the server and the query from the Angular component, and generates everything in the middle for you! Just by consuming a static GraphQL schema and defining the data you need and its structure in a GraphQL query, there is no need for you to write any TypeScript! You already defined it, so why write it again? We will generate a strongly typed Angular service for every defined query, mutation or subscription, ready to use in your components!

How it works

You create a .graphql file with a document that you want to use in a component:

    query Feed {
      posts {
        id
        title
      }
    }

Next, you run the GraphQL Code Generator's Angular Apollo plugin to generate types and Angular services. Then you simply import and use the result as a regular Angular service:

    import { FeedGQL } from './generated';

GraphQL Code Generator takes the query's name, PascalCases it and adds a GQL suffix to it. For example, "myFeed" becomes "MyFeedGQL".
See it here in action and play with it. To play with the code generator, try cloning this repository.

Using Angular, TypeScript and GraphQL in a coordinated way gives us a new level of simplicity and power in our developer experience:

- Less code to write — no need to create a network call, no need to write TypeScript typings, no need to create a dedicated Angular service.
- Tree-shakable thanks to Angular 6, and even more so thanks to GraphQL.

But we've only just talked about one new feature in apollo-angular. There is more:

- Testing utilities. There were a lot of questions about testing Apollo components, so we decided to finally release something with an API similar to the one Angular's HttpClient uses. Sergey Fetiskin wrote an article about it.
- Apollo Angular Boost. It's hard for newcomers to get started with Apollo Angular. Inspired by Apollo Boost, we decided to create an Angular version of it. Here's an interactive example.
- Create Apollo at the DI level. There is now an extra way to create Apollo Client. Instead of using Apollo.create inside a constructor, you can provide settings at the Dependency Injection level. Read the "Using Dependency Injection" chapter in the docs.
- GraphQL subscriptions outside NgZone. Apollo.subscribe now accepts a second argument in which you can enable running the subscription's callback outside NgZone.
- Automatic Persisted Queries for Angular. It's now possible to use APQ with Angular's HttpClient; just install this package.

- Query and Mutation as a service on StackBlitz and GitHub
- Query and Mutation — step-by-step tutorial
- Example: Apollo Angular Boost on StackBlitz
- Apollo Angular repository
- Documentation
https://the-guild.dev/blog/apollo-angular-12
Introduction to Constructor and Destructor in Java

The following article provides a detailed outline of constructors and destructors in Java. Every programming language has the concepts of a constructor and a destructor. Java is an object-oriented programming language, so if you know object-oriented concepts it will be easier to understand them clearly. A constructor is something that initializes objects, and a destructor destroys that initialization. Java has automatic garbage collection, which uses the mark-and-sweep algorithm.

What are Constructors and Destructors in Java?

A constructor initializes an object, which means memory is allocated for it. A constructor is nothing but automatic initialization of the object: whenever the program creates an object, the constructor gets called automatically. You don't need to call this method explicitly. A destructor frees the memory that was allocated during initialization. Generally, in Java, we don't need to call a destructor explicitly, because Java has automatic garbage collection.

Why do we Need Constructors and Destructors in Java?

Constructors and destructors are mostly used to handle memory allocation and de-allocation efficiently. They play a very important role in any programming language: initializing an object, and destroying it after use to free up memory.

How Constructors and Destructors Work in Java

A constructor is just a method in Java that has the same name as the class. The constructor method does not have any return type. Look at the following example for more clarity:

    class Employee {
        Employee() {
        }
    }

If you look at the above example, we have not given any return type like int or void to the method that has the same name as the class. It is mainly used to initialize the object. The constructor gets invoked when we create an object of the class.
It will be clearer with the following code snippet.

How to Create Constructors and Destructors in Java

Look at the following example:

    class Employee {
        Employee() {
            // This is a constructor. It has the same name as the class.
            System.out.println("This is the default constructor");
        }
    }

Types of Constructor

There are two types of constructor, depending on the parameters we pass:

- Default constructor
- Parameterized constructor

With this, we are also going to see constructor overloading.

1. Default Constructor

This type of constructor takes no parameters.

Example:

    class Abc {
        Abc() {
            System.out.println("This is an example of a default constructor.");
        }
    }

2. Parameterized Constructor

As the name suggests, a parameterized constructor takes some parameters (arguments) at the time of initializing the object.

Example:

    class Square {
        int width, height;

        Square(int a, int b) {
            width = a;
            height = b;
        }

        int area() {
            return width * height;
        }
    }

    class Cal {
        public static void main(String[] args) {
            Square s1 = new Square(10, 20);
            int area_of_square = s1.area();
            System.out.println("The area of square is: " + area_of_square);
        }
    }

Output:

    The area of square is: 200

Now it is time to talk about constructor overloading in Java. This means having multiple constructors with different parameters, so that each constructor can do a different task. Sometimes, as per the requirement, we need to initialize objects in different ways.
Example:

    public class Abc {
        String name;
        int quantity;
        int price;

        Abc(String n1, int q1, int p1) {
            name = n1;
            quantity = q1;
            price = p1;
        }

        Abc(String n2, int p2) {
            name = n2;
            price = p2;
            quantity = price / 10;
        }

        void display() {
            System.out.println("Product name: " + name);
            System.out.println("Product quantity is: " + quantity);
            System.out.println("Product price is: " + price);
        }

        public static void main(String[] args) {
            Abc product1;
            product1 = new Abc("Dates", 500, 50);
            product1.display();
            product1 = new Abc("cashu", 800);
            product1.display();
        }
    }

Output:

    Product name: Dates
    Product quantity is: 500
    Product price is: 50
    Product name: cashu
    Product quantity is: 80
    Product price is: 800

Try out the above program and it will be clear what exactly happens with constructor overloading.

Destructor

Before we start talking about destructors, let me tell you that there is no destructor in Java; destructors belong to the C++ programming language. Java instead has a feature called automatic garbage collection, which frees dynamically allocated memory when it is no longer in use. This concept is very important, and you can explore more about garbage collection in Java.

- Java handles memory de-allocation automatically through garbage collection.
- There is no need to explicitly use destructors as in C++.
- For allocating memory in Java we do not have a malloc function as in C programming.
- Memory allocation is instead done by the new operator in Java.
- The new keyword allocates memory space for an object on the heap.
- The end user need not worry about this, as memory allocation is handled by the runtime. When the program is done with an object, the memory it used can be reused for other work; utilizing memory efficiently in this way is the job of garbage collection in Java.

Let's talk about the destructor's stand-in then.
As we know, there is no destructor in Java; instead it has the finalize() method. The following are some key points to note.

The finalize() Method

- The finalize method works like a destructor, the opposite of a constructor, as we saw earlier.
- Generally, the finalize method is used to perform clean-up before an object is removed.
- To use this method, we have to explicitly override it in Java.
- The finalize method is invoked by the garbage collector before an object's memory is reclaimed.
- Even after the memory for objects is freed, native resources such as file handles or fonts may still be held; the finalize() method gives a chance to release them.

Conclusion

Constructors and destructors (garbage collection in Java) are very important concepts to understand in any programming language, as this is where you can really see how memory is managed in the background.

Recommended Articles

This is a guide to Constructor and Destructor in Java. Here we discussed the introduction to constructors and destructors, why we need them, and how they work in Java, along with examples.
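Returning to the finalize() discussion above: since finalize() gives no guarantee about when (or whether) it runs, modern Java code usually prefers deterministic cleanup via AutoCloseable and try-with-resources. This is an illustrative sketch added here for comparison; the class and messages are invented:

```java
// Sketch: deterministic cleanup with try-with-resources instead of finalize().
class Resource implements AutoCloseable {
    Resource() {
        // Constructor: acquire the resource.
        System.out.println("constructor: resource acquired");
    }

    @Override
    public void close() {
        // Runs at a known point, unlike finalize().
        System.out.println("close: resource released");
    }
}

public class CleanupDemo {
    public static void main(String[] args) {
        try (Resource r = new Resource()) {
            System.out.println("using resource");
        } // close() is called here, deterministically, even on exceptions
    }
}
```

The constructor still does the initialization, but the release point is explicit in the code rather than left to the garbage collector.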
https://www.educba.com/constructor-and-destructor-in-java/?source=leftnav
This is a project I've actually had "lying around" for some time (at least in conceptual form), but time constraints prevented me from fully exploring it until now. What sparked my interest was reading about the IPC channel that's new in .NET 2.0 for Remoting, and how much faster it is than the HTTP or TCP channels -- it's based on Named Pipes. What are Named Pipes? Named Pipes are an inter-process communication (IPC) mechanism for exchanging information between two applications, possibly running on different computers in a network. Named Pipes are supported by a number of network operating systems, including NetWare and LAN Manager, and of course, Windows. If it sounds vaguely familiar, remind yourself that SQL Server uses Named Pipes as one of its primary transport mechanisms besides TCP. Why? Because it's a low-level operating system protocol, it's extremely fast, and it's reliable. Unfortunately, there has never been a Framework Class Library .NET implementation from Microsoft, even though there are .NET Framework unmanaged API wrappers for a lot of other things. Why, I have no idea; even the new .NET 2.0 implementation of pipes seems to be irretrievably bound to the Remoting infrastructure (although, to be fair, I haven't yet had the chance to really look into it). At any rate, these are well-defined APIs, and there are several nice sample projects on how to "wrap" them with the PInvoke layer in C#. If you are "C#-challenged" and determined to do this in VB.NET, I can't help you, but you are certainly welcome to try. The best implementation (and the one I've used here, since it required the least amount of changing and enhancing) is the one by Ivan Latunov here. Now the first question a developer would want to ask is this: What do I need Named Pipes for? The answer is, "If you want to do IPC (inter-process) communication efficiently."
IPC is when more than one process, either on the same computer or different computers on the network, need to be able to talk to each other. There are a number of ways we can do this in the .NET Platform, each of which has its advantages and drawbacks: 1) WebServices: A Central WebService can provide easy access to almost any type of .NET Program (even ASP.NET) - however, everything needs to go over the wire as XML and it depends (usually) on HTTP. 2) Remoting: Remoting offers a way to incorporate Inter-Process Communication and has a lot of flexibility, however it also involves some complex MarshalByRefObject semantics and a lot of serialization. 3) TCP / UDP Sockets: You can implement a socket server to handle the inter-process communication. This can be reasonably fast, but it typically involves some pretty sophisticated programming. (For an example, see here). 4) Using a Database: This might be equated to the SQL Server ASP.NET Session option. Persistence is a feature, but speed is not. The implementation I've cobbled together here is based on Named Pipes. It works, and it works well. Between processes on a single machine, you can expect to be able to store or retrieve some 10,000 managed objects in as little as 5 seconds. Between machines on a network, that could expand to as much as 40 seconds or more, depending on the size of the objects being cached and network latency. However, I've run this with a Named Pipe Pool of up to 40 Pipes with no problems. Everything is controlled by simple entries in the configuration files, and it doesn't use much memory compared to other solutions. Moreover, I've "componentized" the pieces of the solution into separate projects in such a way that it will be very easy for developers who want to "take it up a notch" to customize the solution to the needs of their enterprise. Finally, I've taken some effort to ensure that the usage model from a user-class standpoint is easy and does not involve a lot of code. 
Specifically, the semantics of the NPCache (Named Pipes Cache) class are very similar to the Indexer semantics you find in many collection classes:

USAGE:

Assembly references for a client: ClientHandler, Payload

Required using statements:

using ClientHandler;
using PAB.IPC;

Private cache declaration in the class that uses the cache:

private NPCache npCache = NPCache.Instance;

CACHE ACTIONS:

STORE:    npCache[key] = object;             Example: npCache["testDs"] = myDataset;
RETRIEVE: Type value = (Type)npCache[key];   Example: DataSet ds = (DataSet)npCache["testDs"];

The operation of the NPCache Service is really not complex at all. When the service is started, it spins up a Hashtable of Named Pipe connection objects that can wait for an incoming connection and become activated. One could call these "pipe listeners" in much the same way that a TCP or UDP socket server might spin up "socket listeners". On the client side, we instantiate a Singleton instance of the NPCache class, which is essentially a cache forwarding proxy:

private static NPCache npCache = NPCache.Instance;

This is synchronized so that all operations, either read or write, are threadsafe. In the middle, we have a Payload class that is used as a serializable container for whatever object we want to remote over pipes to the server's Hashtable "cache" repository:

using System;

namespace PAB.IPC
{
    public enum Action
    {
        Store,     // insert into Hashtable
        Retrieve,  // retrieve from Hashtable
        Status,    // this is a Status (returned) Payload
        Delete     // delete from Hashtable
    }

    /// <summary>
    /// Payload is a serializable transport vehicle class
    /// </summary>
    [Serializable]
    public class Payload
    {
        public string Key;
        public Object Body;
        public Action Action;

        public Payload(string key, object body, Action action)
        {
            this.Key = key;
            this.Body = body;
            this.Action = action;
        }
    }
}

As can be seen, this class holds the object, its Hashtable key, and an Action enum that tells the sending or the receiving end what the purpose of the container is.
To store an object into the cache via pipes, we can use familiar Indexer semantics like so:

npCache[key] = ObjectInstanceToStore;

The NPCache class has two methods for storing and retrieving; here is the store method:

private void SetPayload(string key, Payload value)
{
    try
    {
        ClientHandler.Client c = new ClientHandler.Client();
        Payload outP = new Payload(key, value.Body, Action.Store);
        Payload recvP = c.Send(pipeName, serverName, outP);
        if (recvP.Body.ToString() != "OK")
            recvP.Body = "ERROR: Key Not Found";
    }
    catch
    {
        throw;
    }
}

The ClientHandler class serializes the Payload, instantiates a new pipe connection, and sends (and then receives back) the Payload:

public Payload Send(string pipeName, string serverName, Payload message)
{
    IInterProcessConnection clientConnection = null;
    Payload retP = null;
    try
    {
        clientConnection = new ClientPipeConnection(pipeName, serverName);
        clientConnection.Connect();
        byte[] bytesToSend = PAB.IPC.Utilities.PayloadToBytes(message);
        clientConnection.WriteBytes(bytesToSend);
        byte[] returnBytes = clientConnection.ReadBytes();
        retP = PAB.IPC.Utilities.BytesToPayload(returnBytes);
        clientConnection.Close();
    }
    catch (Exception ex)
    {
        retP = new Payload(message.Key, ex.Message, Action.Status);
    }
    finally
    {
        clientConnection.Dispose();
    }
    return retP;
}

A return instance of the Payload class, which can be a container for either a result object or just the "status" of the operation, is sent back in exactly the reverse of the Send operation. Note that if an exception is thrown, its Message property is populated into the return Payload body so it can be inspected by the caller. There's more to it, of course, but that's how it all works, in a nutshell. In your particular operations, it may turn out that you really don't want a Hashtable. Perhaps you have an operation where various clients need to get the "most recent X" items, similar to the SQL statement "SELECT TOP 10 X FROM Y".
In that case, you would simply replace the Hashtable with a Queue class, and your Payload object, instead of having a "Key", might have a "HowMany" field. You would simply read this information on the server side, Dequeue the correct number of objects from the Queue, package them up into an array, and Payload them back over the pipe to the requestor. So, it's a very flexible arrangement. When you download the Visual Studio 2003 Solution below, you will see that in debug mode, my Windows Service actually runs not as a service, but as a "plain old" executable, which makes debugging incredibly easy. You simply start the Service project, and then start up one of two client test apps: either a Windows Forms client class that lets you put in a key and a value, or the Speed Test app that lets you do some timings for heavy load testing. Be sure to look at the two config files for the Service and for each Client; they should be self-explanatory. You can use "." (a dot) for the server element for work on the same machine, or a named machine if working between machines. The service would normally always listen on "." (itself). In release build mode, the Windows Service must be installed via the install.bat or uninstall.bat files included, which are also in the MSI Installer project. There is complete CHM format documentation and a built installer for the Windows Service, as well as copies of Ivan's original articles on his original implementation of the Named Pipes IPC process that I used here. If you have ideas, recommendations, or requests, feel free to post them on the discussion forum at the bottom of this article and I'll try to address these and possibly enhance this article and the download as time goes on. This can be converted "in place" to Visual Studio 2005. If you do this, you will want to change the Hashtable to a generic collection of type Payload to take advantage of the performance enhancements available with Generics.
To implement this, one could use a Generic Dictionary, which is the generic class analogous to the Hashtable. This would require changing only a single line of code:

public static Dictionary<string, Payload> htCache = new Dictionary<string, Payload>();

Download the Visual Studio 2003 Solution that accompanies this article
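If you just want to experiment with the store/retrieve-over-a-connection pattern before diving into the C# solution, here is a rough stand-alone sketch in Python (not part of the article's code). It uses multiprocessing.connection, which can speak Windows named pipes when given an address like r'\\.\pipe\name'; the sketch uses a local TCP address instead so it runs anywhere, and all names here (AUTHKEY, serve_one, and the tuple-based stand-in for the Payload class) are my own illustrative choices:

```python
from multiprocessing.connection import Listener, Client
import threading

AUTHKEY = b"npcache"   # illustrative shared secret for the connection handshake
cache = {}             # plays the role of the server-side Hashtable

def serve_one(conn):
    # Each request is an (action, key, value) tuple -- a stand-in for
    # the article's serialized Payload class.
    action, key, value = conn.recv()
    if action == "store":
        cache[key] = value
        conn.send(("status", key, "OK"))
    elif action == "retrieve":
        conn.send(("status", key, cache.get(key, "ERROR: Key Not Found")))
    conn.close()

def server(listener, n):
    # Accept n connections, one request per connection, like a tiny pipe pool.
    for _ in range(n):
        serve_one(listener.accept())

def send(address, msg):
    # Mirrors the article's Client.Send: connect, write, read reply, close.
    with Client(address, authkey=AUTHKEY) as conn:
        conn.send(msg)
        return conn.recv()

listener = Listener(("localhost", 0), authkey=AUTHKEY)
worker = threading.Thread(target=server, args=(listener, 2))
worker.start()

ack = send(listener.address, ("store", "testDs", [1, 2, 3]))
reply = send(listener.address, ("retrieve", "testDs", None))
worker.join()
listener.close()
```

The (action, key, value) tuple is doing the job of the serializable Payload; a fuller port would add the Delete action, a pool of listeners, and client-side locking for thread safety.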
http://www.nullskull.com/articles/20060404.asp
I have created several custom dynamic properties for a Win control. They are written to the .config file, which is an XML-based file. I can edit that file in a text editor and the changed values show up in the app. There are functions to read from the config file, but there are no functions in the System.Configuration namespace to write to the config file. Does anybody know a workaround for this? I am exploring the System.Xml namespace classes for a possible workaround. I am new to XML.

Thanks
AK
http://forums.devx.com/showthread.php?55077-Need-to-Convert-a-Hex-string-to-a-Hex-number&goto=nextnewest
This module defines functions and classes which implement a flexible error logging system for applications. Logging is performed by calling methods on instances of the Logger class (hereafter called loggers). Each instance has a name, and names are conceptually arranged in a namespace hierarchy.

In the rotating-file example, output goes to logging_rotatingfile_example.out, and each time the file reaches the size limit it is renamed with the suffix .1. Each of the existing backup files is renamed to increment its suffix (.1 becomes .2, etc.) and the .5 file is erased. Obviously this example sets the log length much too small, as an extreme demonstration.

A handler is responsible for sending messages of a specific severity to a specific location. The standard library includes quite a few handler types. Application code should not directly instantiate and use NullHandler; it is meant for use by library developers.

New in version 3.1: the NullHandler class was not present in previous versions, but is now included, so that it need not be defined in library code. NullHandler, located in the core logging package, does not do any formatting or output; it is essentially a "no-op" handler. Its constructor returns a new instance of the NullHandler class.

Logger.debug() logs a message with level DEBUG, using the old %-style string formatting (see the section "Old String Formatting Operations"). A LoggerAdapter merges the dict-like object passed to its constructor into the "extra" key of the keyword arguments of logging calls. Of course, if you had passed an "extra" keyword argument in the call to the adapter, it will be silently overwritten. The advantage of using "extra" is that the values in the dict-like object are merged into the LogRecord, where they can be referenced by a Formatter's format string. See "Old String Formatting Operations" for more information on string formatting and for the useful mapping keys in a LogRecord; Formatter() returns a new instance of the Formatter class.

To send a configuration to the socket listener, read in the configuration file and send it to the socket as a string of bytes preceded by a four-byte length packed in binary using struct.pack('>L', n).
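The library-developer use of NullHandler described above can be made concrete with a short sketch (the module name "mylib" and the messages are illustrative, not from the documentation):

```python
import logging

# Library code: attach a NullHandler so a library that logs stays silent
# (and emits no warnings) when the application has not configured logging.
lib_logger = logging.getLogger("mylib")        # hypothetical library logger
lib_logger.addHandler(logging.NullHandler())

def do_work():
    lib_logger.debug("internal detail")        # dropped unless the app opts in

if __name__ == "__main__":
    # Application code: opt in to the library's output by configuring
    # a real handler; the record propagates up to the root logger.
    logging.basicConfig(level=logging.DEBUG,
                        format="%(name)s %(levelname)s %(message)s")
    do_work()
```

Without the basicConfig call, the debug call simply produces no output; with it, the application sees the library's records through its own handler configuration.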
https://docs.python.org/3.0/library/logging.html
Safeguard and scale containers

Herding Containers

Since the release of Docker [1] three years ago, containers have not only been a perennial favorite in the Linux universe, but native ports for Windows and OS X also garner great interest. Where developers were initially only interested in testing their applications in containers as microservices [2], market players now have initial production experience with the use of containers in large setups – beyond Google and other major portals. In this article, I look at how containers behave in large herds, what advantages arise from this, and what you need to watch out for.

Herd Animals

Admins clearly need to orchestrate the operation of Docker containers in bulk, and Kubernetes [3] (Figure 1) is a two-year-old system that does just that. As part of Google Infrastructure for Everyone Else (GIFEE), Kubernetes is written in Go and available under the Apache 2.0 license; the stable version when this issue was written was 1.3. The source code is available on GitHub [4]; git clone delivers the current master branch. It is advisable to use git checkout v1.3.0 to retrieve the latest stable release (change v1.3.0 to the latest stable version). If you have the experience or enjoy a challenge, you can try a beta or alpha version. Typing make quick-release builds a quick version, assuming that both Docker and Go are running on the host. I was able to install Kubernetes within a few minutes with Go 1.6 and Docker 1.11 in the lab. However, since version 1.2.4, you have to resolve a minor niggle by deleting the $CDPATH environment variable using unset CDPATH to avoid seeing error messages. What is more serious from an open source perspective is that parts of the build depend on external containers.
Although you can assume that all downloaded containers come from secure sources – if you count Google's registry to be such – the sheer number of containers leaves you with mixed feelings, especially in high-security environments. A build without preinstalled containers shows that it is possible to install all the components without a network connection, but the Make process fails when packaging the components for kube-apiserver [5] and kubelet [6]. For a secure environment, you might prefer to go for a release that uses only an auditable Dockerfile-based repository. (See also the "Runtimes and Images" box.)

Runtimes and Images

Docker is just one of several existing container runtimes, some of which differ considerably and some in important details, such as the process layout below the Docker daemon. With regard to separation of concerns, Systemd [8] acting as a centralized resource management instance has been discussed by the rkt project [9]. Which system features container runtimes should, and are allowed to, assume is controversial. If you ask Lennart Pöttering, container runtimes handle generic Systemd tasks; since version 230 [10], Systemd also moves processes to namespaces, thanks to Nspawn, and limits resources. Systemd could thus replace the runtimes. What is missing are the push and pull functions to pick up and drop off images in the registries.

Cluster To Go

After the install, you can set up a test environment in the blink of an eye: (1) select a Kubernetes provider and (2) fire up the cluster:

export KUBERNETES_PROVIDER=libvirt-coreos
cluster/kube-up.sh

After a few minutes, you should have a running cluster consisting of a master and three worker nodes. Alternatively, the vagrant provider supports an installation under VirtualBox on Macs [7].

All Aboard

Kubernetes's plan is to contain all the components required to create your own PaaS infrastructure out of the box.
It automatically sets up containers, scales them, self-heals, and manages automatic rollouts and even rollbacks. To orchestrate the storage and network, Kubernetes uses storage, network, and firewall providers, so you first need to set these up for your home-built cloud. If you want to build deployment pipelines, Kubernetes helps with a management interface for configurations and passwords, and it supports DevOps in secure environments with compliance requirements – for example, by keeping passwords out of configuration repositories. Kubernetes promises – and it is by this that it must be judged – that you will no longer need to worry about the infrastructure, only about the applications. There is even talk of ZeroOps, an advancement on DevOps. Of course, that will still take a long time. Ultimately, it is just like any other technology: For things to look easy and simple, someone needs to invest time and money.
https://www.admin-magazine.com/Articles/Safeguard-and-scale-containers
Talk:Japan
From Progteam
Latest revision as of 17:50, 25 March 2008

Ok, so I'm not quite sure I follow the idea given below. I've taken a different route, which isn't as efficient, but was easier for me to understand. By modifying mergesort I was able to do inversion counting in O(nlogn) time, which is now fast enough for it to run to completion and tell me that my answer is wrong on PKU. This has been quite frustrating since my code works on all the inputs I've made up so far. Would it be OK to post my code up here and see if others have an input that can break it? --Jinho 22:58, 22 March 2008 (EDT)

For easy/fast coding, one way to use data structures is to take a general data structure and specialize it to take advantage of the regularity in the data. So rather than build a general search tree, use the fact that the keys are, say, integers between 0 and m-1. Use, for example, the perfect balanced binary tree given by a heap. Suppose 2^k is the first power of 2 that is greater than or equal to m. Let the leaves of the tree be A[2^k .. 2^k+m-1]. Each location in A[0..2^k+m-1] is initialized to zero. The counts at each node are the number of items in the subtree rooted by the node. The parent of a node j is j/2 (why? answer: why not? So A[0] is wasted. So what. Root A[1] is not needed either.) To insert key j, write:

procedure insert(j)
  loc <- j + twotothek
  repeat
    A[loc] <- A[loc] + 1
    loc <- loc/2
  until loc=0

(Okay, the zero could be a one. So what.) Delete is no different. To find how many entries are less than j use:

function lessthan(j)
  temp <- 0
  loc <- j + twotothek
  repeat
    if odd(loc) then temp <- temp + A[loc-1] endif
    loc <- loc/2
  until loc=1
  return temp

The code is fast, easy to read, easy to write. --Siegel 11:26, 22 March 2008 (EDT)

This problem is a data structures/sorting question. For each road (i,j), how many roads (k,l) with index k<i have destination l>j? So the idea is: Step 1. Build a data structure to return the number of pairs (k,l) where k<i.
(Enter all of the roads into it.) Step 2. As Jason suggested, process the roads for j = 1, 2, 3, . . . (as the main key, and then with the secondary key counts also increasing). Compute the count for the first (i,j), which is just the number of roads (k,l) where k<i. Then remove road (i,j) so that the j you process next is always the smallest. Use an efficient data structure. So the idea is: for each road (k,l), insert the record (primary key k, secondary key l). Use standard tricks to compute the number of entries with key less than k. Enter all such roads. Now greedily sequence through the roads (i,j) in the order (j=primary key, i=secondary key) and count the number of entries (k,l) with k<i. Then remove (i,j). The counts give the intersections. This was a procedural description of an algorithm that runs in r log r time, where r is the number of roads. Classifications: sorting, inversion counting. Also enhanced search trees. Fast (in code design) implementations of enhanced search trees (over pairs of records with keys that are integers in [0,n]). --Siegel 19:43, 14 March 2008 (EDT)

Hmm. interesting interval problem. Do you think a greedy approach might work well? --Siegel 14:05, 26 February 2008 (EST)

Did you try divide and conquer? Here is an equivalent problem (why?) that is kind of nice (it will appear somewhere, sometime, and probably already has). The data is a list of n horizontal intervals on the x-axis. You can assume that the endpoints are integers, if you like. The interval [a,b] contains [c,d] if a<=c and d<=b. You can assume that all intervals are distinct. Find the number of pairs where one contains the other. I guess a problem variant is to define the other relationships (incomplete overlap and disjoint) and count those as well. --Siegel 02:38, 26 February 2008 (EST)

I added a lookup table, but it's still too slow. Any ideas? -- Melanie
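As a concrete illustration, the heap-style tree of counts sketched above translates almost line for line into Python. The function names below are mine, and count_inversions applies the structure to the inversion-counting problem discussed in this thread (distinct non-negative integer keys assumed):

```python
def make_tree(m):
    # Perfect binary tree over leaves A[2^k .. 2^k + m - 1], one leaf per key.
    k = 1
    while k < m:
        k *= 2
    return [0] * (2 * k), k          # (A, twotothek)

def insert(A, twok, j):
    # Bump the count on the path from leaf j up to the root.
    loc = j + twok
    while loc > 0:
        A[loc] += 1
        loc //= 2

def less_than(A, twok, j):
    # Sum the left siblings on the path up: entries with key strictly < j.
    temp, loc = 0, j + twok
    while loc > 1:
        if loc % 2 == 1:             # loc is a right child
            temp += A[loc - 1]
        loc //= 2
    return temp

def count_inversions(keys):
    # Pairs i < j with keys[i] > keys[j].  With distinct keys, the earlier
    # elements greater than x are exactly those not less than x.
    A, twok = make_tree(max(keys) + 1)
    inversions = 0
    for seen, x in enumerate(keys):
        inversions += seen - less_than(A, twok, x)
        insert(A, twok, x)
    return inversions
```

For example, count_inversions([2, 0, 1]) returns 2, for the pairs (2,0) and (2,1).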
http://cs.nyu.edu/~icpc/wiki/index.php?title=Talk:Japan&diff=prev&oldid=6799
This is the mail archive of the cygwin-patches@cygwin.com mailing list for the Cygwin project. > > >Perhaps you would prefer this better. I changed the ifdef to be > > >feature-centric as opposed to project-centric. Perhaps this is a little > > >more to your liking? > > > <snip> > > > I don't understand why the project needs a non-standard definition for > > HANDLE unless they are using it in some other context than the Windows > > one. It is defined as a void * in my MSVC headers so why would > > anyone try to use anything else? > > > qt is a multiplatform framework and they defines HANDLE different on different > platforms. > (the sample code is from qt-3, but qt-2 uses the same strategy). > In short the windows, mac, and qt embedded releases uses "void *" the x11 > release uses "ulong" > > qt-2/src/kernel/qwindowdefs.h > > indo > wdefs.h?rev=1.1.1.1&content-type=text/vnd.viewcvs-markup > > > qt-3/src/kernel/qnamespace.h > ames > pace.h?rev=1.1.1.1&content-type=text/vnd.viewcvs-markup > > <snip> > // "handle" type for system objects. Documented as \internal in > // qapplication.cpp > #if defined(Q_WS_MAC) > typedef void * HANDLE; > #elif defined(Q_WS_WIN) > typedef void *HANDLE; > #elif defined(Q_WS_X11) > typedef unsigned long HANDLE; > // qt embedded > #elif defined(Q_WS_QWS) > typedef void * HANDLE; > #endif > > The problem is now, that we are compiling under Q_WS_X11, which defines HANDLE > as ulong, but we are using the w32api too, which defines HANDLE as void * > (because we are using some native win32 functions), so we have the choose. I've > tried the void * definition, but in the X11 context this results many casting > problems, which produces very bad compatibility problems with the original > source, so I've choosed the x11 handle context, which causes the know trouble > with the w32api winnt.h header. 
So we patched the winnt header, which gives us > much less headache :-) > > > I do prefer feature-centric ifdefs, but I don't think that adding this > > particular definition of HANDLE to the windows headers makes sense. > > I think too, but you have another solution yet. :-). Chris asked the question why are we using a mixed Cygwin/X and native Windows environment. One example of where this is necessary is in a Cygwin specific extension to KDE which shows the Windows drives in the KDE file dialog and Konqueror. This code needs to get a list of drives on the system, however a readdir on /cygdrive only shows the list of drives with disks currently in them (i.e. removable drives without disks don't show up). Therefore we use GetLogicalDriveStrings in the code and include the windows.h header, as well as various KDE headers, which in turn include Qt headers with the conflicting definition of HANDLE. There are other instances as well, where we have written native implementations for certain Qt classes. Chris
https://sourceware.org/legacy-ml/cygwin-patches/2002-q3/msg00171.html
File::Util - Easy, versatile, portable file handling

version 4.161950

You can do much more with File::Util than the examples above. For an explanation of all the features available to you, take a look at these other reference materials:

The File::Util::Manual::Examples document has a long list of small, reusable code snippets and techniques to use in your own programs. This is the "cheat sheet", and is a great place to get started quickly. Almost everything you need is here.

The File::Util::Manual is the complete reference document explaining every available feature and object method. Use this to look up the full information on any given feature when the examples aren't enough.

The File::Util::Cookbook contains examples of complete, working programs that use File::Util to easily accomplish tasks which require file handling.

File::Util exports nothing by default; it fully respects your namespace. You can, however, ask it for certain things (below):

:all (imports all of @File::Util::EXPORT_OK to your namespace)
:diag (imports nothing to your namespace, it just enables diagnostics)

You can use these tags alone, or in combination with other symbols as shown above. File::Util only depends on modules that are part of the Core Perl distribution, and you don't need a compiler on your system to install it.

To install this module type the following at the command prompt:

perl Build.PL
perl Build
perl Build test
sudo perl Build install

On Windows systems, the "sudo" part of the command may be omitted, but you will need to run the rest of the install command with Administrative privileges.

Send bug reports and patches to the CPAN Bug Tracker for File::Util at rt.cpan.org. If you want to get help, contact the authors (links below in the AUTHORS section). I fully endorse as an excellent source of help with Perl in general.
The project website for File::Util is at The git repository for File::Util is on GitHub at Clone it at git://github.com/tommybutler/file-util.git

This project was a private endeavor for too long, so don't hesitate to pitch in. The following people have contributed to File::Util in the form of feedback, encouragement, recommendations, testing, or assistance with problems, either on or offline, in one form or another. Listed in no particular order: Tommy Butler. Others welcome!

This disclaimer applies to every part of the File::Util distribution.

The rest of the documentation: File::Util::Manual, File::Util::Manual::Examples, File::Util::Cookbook

Other useful modules that do similar things: File::Slurp, File::Spec, File::Find::Rule, Path::Class, Path::Tiny
http://search.cpan.org/~tommy/File-Util/lib/File/Util.pm
Because shared files are widely distributed across networks, administrators face growing problems as they try to keep users connected to the data they need. The Distributed File System (Dfs) in the Microsoft Windows® 2000 operating system provides a mechanism for administrators to create logical views of directories and files, regardless of where those files physically reside in the network. Fault tolerance of network storage resources is also possible using Dfs.

Introduction

This guide describes how to use the Dfs Share Creation wizard. The examples provided in this document assume you have already configured the Microsoft Active Directory® service, and have administrator permissions for both the domain and the server where you will be configuring Dfs. You can create the base configuration by following the Step-by-Step Guide to a Common Infrastructure for Windows 2000 Server Deployment before beginning this document. If you are not using the common infrastructure, you need to make the appropriate changes to this instruction set.

Using the Dfs Administrator Tool

This step-by-step guide describes how to use the Dfs Administrator snap-in. Installation of the Dfs service takes place automatically during Windows 2000 Server Setup. However, you must configure Dfs in order for a Dfs share to be accessible to clients. Perform these steps on the domain controller while logged on as a user with administrative privileges. In the Windows 2000 operating system, Dfs can integrate with Active Directory to create fault tolerant Dfs roots on Windows 2000 domain controllers (DCs) and member servers. If you have multiple servers in your Windows 2000 domain, any or all participating servers can host and provide fault tolerance for a given Dfs root. Active Directory is used to ensure domain controllers in the domain share a common Dfs topology, thus providing redundancy and fault tolerance.
Alternatively, you can create a stand-alone Dfs server, which does not take advantage of Active Directory and does not provide root level fault tolerance. A DC can host a single Dfs root, and you can have an unlimited number of Dfs roots in each domain. Up to 32 DCs can host the same root. Multiple Dfs root volumes can be hosted in the domain. Additional computers hosting the root or child nodes (links) improve load balancing, fault tolerance, and site preference to directory service-aware network clients. Dfs links below the root can reside on any UNC path accessible to the Dfs server and clients. In this walkthrough, it is assumed you are creating a fault-tolerant Dfs root.

Click Start, point to Programs, point to Administrative Tools, and then click Distributed File System.
Right-click Distributed File System in the left pane, and click New Dfs Root. The Create New Dfs Root wizard appears; click Next.
Make sure that Create a domain Dfs root is selected, and then click Next.
Select the host domain for the Dfs root; in our example, this is reskit.com. Click Next.
Accept the name of the host server for the Dfs root. In our example, this is displayed as HQ-RES-DC-01.Reskit.com. Click Next.
Choose the local share point to be used on the target to host the Dfs root. In our example, click Create a new share and type the path to share as c:\dfsbooks and the share name as books. The snap-in lets you create both a new share and a new directory if they do not already exist.

Figure 2: Select the share for the Dfs root volume

Click Next. If the specified folder does not exist, you are asked if you want to create it. Click Yes to continue.
Add a comment if you wish to further describe this root. Click Next.
Click Finish to create the Dfs root.

After the Create New Dfs Root wizard has completed, you are ready to administer your Dfs root.
If you have multiple domain controllers hosting a fault tolerant Dfs root, keep in mind that fault tolerant Dfs uses Active Directory to store topology knowledge. Thus, it is necessary for the topology knowledge to converge between the domain controllers. Updates to the Dfs configuration initially take place on the host server in the Windows 2000 domain. Domain controllers may have a different view of the Dfs configuration until multimaster replication from the Dfs host makes changes fully replicated between all domain controllers in a domain. Dfs (the root and all its links) is stored as a single entity known as a blob. When a change is made to the blob, the whole blob replicates until consistent throughout the domain. This takes about five minutes between any given two replicating domain controllers in the same site, and at least 15 minutes if the domain controllers are in different sites. Until convergence occurs, Dfs administrator tools located on different Dfs clients can be presented with a different Dfs configuration. You can click Refresh to update Dfs with the current configuration from the Dfs host. At this point, you have an empty Dfs root in Active Directory. For this share to be interesting to users, you need to publish non-local shares in the Dfs namespace.

To publish non-local shares:

Right-click your Dfs root name (in our example, \\Reskit.com\Books) and then click New Dfs Link.
Specify a directory for the link name. In this example, call the link name ART.
Locate a valid Windows 2000 share anywhere on your network, and type the full universal naming convention (UNC) name in the Send the user to this network path box. Alternatively, you can browse for it. In our example, this is the Architecture share on the BR3-VAN-SRV-01 server in the Vancouver domain. (Note: these shared folders were pre-created for this exercise.)
Click OK. You can optionally specify a comment and a time-out value.
The time-out value is the number of non-use seconds that individual clients have to cache the referral, after which they must retrieve a fresh referral from one of the hosting Dfs servers. If there are multiple servers to configure (for example, two servers host identical information, one in Hartford, the other in Seattle), you can add to this replica set. To do this, highlight the junction, right-click it, and click New Replica. Browse to the Reskit\BR2-RES-SRV-01\Engineering Diagrams folder and click OK. Click OK again. Right-click the juncture and click Replication Policy. Select each shared folder and click the Enable button; then click OK. Note: For replication to be enabled, the shares for the Dfs root or link must reside on an NTFS 5.0 formatted partition on a Windows 2000 domain controller or member server. The Primary flag marks the specified servers' files and folders as authoritative the first time replication takes place, after which normal multimaster replication takes place. The DFS root snap-in now looks like the one illustrated below. Any user of Windows 2000 logged on to your domain can now access the fault tolerant Dfs. Assuming they have proper access privileges, they can negotiate the individual junctions by using the following commands. Click Start, click Run, type cmd into the Open box, and click OK. Then type:

NET USE driveletter: \\your domain name\your Dfs share name

In the example used in the document, the commands would be:

NET USE J: \\RESKIT.COM\BOOKS
J:
DIR

In a production environment, this alternate drive could reside on another server or on a user's workstation. Any user accessing the fault tolerant share would be able to continue to work uninterrupted. Scheduled file server maintenance, software upgrades, and other tasks that normally require taking a server off-line can now be accomplished without user disruption.

To access the Dfs root using Windows Explorer: Click Start, click Run, and type \\reskit.com\books in the Open box.
Click OK. Click the DFS tab in Windows Explorer to view:

The list of servers backing the Dfs root or link.
The specific server the Dfs client is connected to.
The Clear History function, which flushes the Partition Knowledge Table (PKT) to obtain a new one the next time that part of the Dfs namespace is accessed.

You can also turn off one of the two servers and access the same Dfs path. This will show the failover that occurs when a server in the Dfs namespace becomes unavailable. Note that this takes place for fault tolerant roots and child nodes that are backed by more than one server. Note: Regarding Microsoft Cluster service, at present, Dfs supports Microsoft Cluster service using machine-based Dfs only. You cannot create fault tolerant Dfs topologies on systems running Microsoft Cluster service. If you are using fault tolerant Dfs where multiple domain controllers exist, it is important to consider that the Dfs configuration requires time to converge between domain controllers in the domain. For immediate replication, install and use the REPLMon tool that is found in the support\tools directory of the Windows 2000 Server product CD-ROM. Dfs-aware clients using earlier versions of the operating system (such as Microsoft Windows NT® 4.0) are not able to connect with fault tolerant Dfs roots. They can, however, connect directly to individual Dfs roots that participate in a fault tolerant Dfs. To do this, substitute the machine name for the domain name in the above Net Use command. Windows NT–based workstations browsing Dfs can also verify what physical storage they are referencing by viewing the Dfs tab available in System Properties found in Windows Explorer. Note: Most administrative functions can be performed from the command line or scripted using DFSCMD.EXE. Type DFSCMD /? for online Help. You can later modify the properties of this object.
You can also publish your fault tolerant Dfs root as a shared folder in the directory service, and then access it using any directory service browsing tools. From the Active Directory Management snap-in, select your domain, right-click New, Volume. Fill in the appropriate information.
http://technet.microsoft.com/en-us/library/bb727150.aspx
On Wed, Jan 16, 2008 at 12:44:00PM +0100, Frans Pop wrote:
> Until I read this paragraph I was thinking that the initial proposals should
> maybe be announced on d-d-a so that at least all DDs could be informed
> about the proposal. I'm still unsure whether that would not be a good idea
> or not.
> Even if d-project is considered a better list for these announcements, the
> announcement should at least also mention where the DEP will be discussed.

In the current proposal d-project is only used as a global mutual exclusion tool to get the next available number in the DEP namespace; no other announcement purpose is intended for d-project. The underlying overall idea is that the DEP process should not change the way in which discussions are carried on in Debian, but just give a tool to keep track of what is happening / has happened. So, the announcement device to be used is the one that the driver would have used if the DEP process did not exist at all. The fewer constraints added by DEPs, the better :-)

If people find that such a bootstrap announcement is needed we can go for it, but given that an automatic publishing system would exist for the DEP, we can even subscribe d-d-a to an RSS feed of the DEP page or something like that, but maybe is too early for
https://lists.debian.org/debian-project/2008/01/msg00069.html
ooOOpsie sorry cedet-semantic'ers. Mistakenly sent that to Eric only. Here it is.

---------- Forwarded Message ----------

Subject: Re: [cedet-semantic] semanticdb-create-system-database saving blank
Date: January 27, 2005 11:48 pm
From: François-Denis Gonthier <neumann@...>
To: "Eric M. Ludlam" <eric@...>

On January 27, 2005 06:49 pm, you wrote:

Oops! At first I thought I would send a positive message because semanticdb-create-system-database does scan files for that now, but after the database for /usr/include/X11 is saved, I still get an empty database. I tried it twice and got the same result. On the other hand, smaller directories, like /usr/include/X11/Xft, are parsed and the tags are saved. It looks like there is a timing factor coming into account.

I tried to use the Xft database (not that I know what Xft is!) and I still can't get completion.

    #include <X11/Xft/Xft.h>

    int main()
    {
        Xft /* Can't get anything there */
    }

> The semanticdb script was not meant for system files. It seems to me
> that it would be useful to allow systemdb scanning from the
> command line also.

I noticed that fact the night after I sent the bug report. For some reason, I thought it was for systemdb.

-------------------------------------------------------
https://sourceforge.net/p/cedet/mailman/message/3566652/
#include <hallo.h>

* Mario Lang [Fri, Sep 20 2002, 12:58:11AM]:
> > I modified the boot-floppies to make it possible to install on a system
> > having only an external USB floppy drive.
> Thanks a lot! I used this method today to install my new JVC MP-XP7210DE,
> and it worked more or less flawlessly.

Nice.

> What didn't work was the creation of a boot-floppy before restarting
> into the base-system, but I luckily didn't need it :).

Oh, there is a symlink fd0->sda on the installation fs, but not on the target filesystem. Will be fixed if I rebuild the thing.

The problem is: some people told me that there are Cardbus cards that are driven by the drivers of the equivalent PCI cards. Including those drivers in the kernel would make it impossible to load the driver after PCMCIA start.

Gruss/Regards, Eduard.
--
> Actually, Microsoft is sort of a mixture between the Borg and the Ferengi.
That's not true anyway; at least the latter two know how to build something!
http://lists.debian.org/debian-boot/2002/09/msg00578.html
Opened 3 years ago Last modified 3 years ago

#25947 new Bug

Query's str() method fails when 'default' database is empty

Description (last modified by )

According to the docs, we can have default database "...[with]...parameters dictionary...blank if it will not be used." However, when trying to print a query with something like:

    print Question.objects.all().query

you get the error:

    settings.DATABASES is improperly configured. Please supply the ENGINE value.

even though the query itself can return results. You can replicate this by creating a new project, creating a router that routes everything to a test database like so:

    class Router(object):
        def db_for_read(self, model, **hints):
            """Reads go to a randomly-chosen replica."""
            return 'test'

        def db_for_write(self, model, **hints):
            """Writes always go to primary."""
            return 'test'

        def allow_relation(self, obj1, obj2, **hints):
            """Relations between objects are allowed if both objects are
            in the primary/replica pool."""
            return True

        def allow_migrate(self, db, app_label, model=None, **hints):
            """All non-auth models end up in this pool."""
            return True

    # Database
    DATABASE_ROUTERS = ['test123.settings.Router']

    DATABASES = {
        'default': {},
        'test': {
            'ENGINE': 'django.db.backends.sqlite3',
            'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
        }
    }

Create a simple model like this one:

    from django.db import models

    class Question(models.Model):
        question_text = models.CharField(max_length=200)
        pub_date = models.DateTimeField('date published')

and run the appropriate migrations on the test database. Then attempting to print the query will fail, but the query itself will work. I believe the error is because the sql_with_params method in django.db.models.sql.query forces the use of the DEFAULT_DB_ALIAS:

    def sql_with_params(self):
        """
        Returns the query as an SQL string and the parameters that will be
        substituted into the query.
        """
        return self.get_compiler(DEFAULT_DB_ALIAS).as_sql()

Change History (2)

comment:1 Changed 3 years ago by

comment:2 Changed 3 years ago by

I looked into this a bit and Django seems to crash when running quote_name_unless_alias in django.db.models.sql.compiler because it uses DatabaseOperations' quote_name function. Could changing the DatabaseOperations for django.db.backends.dummy to have a quote_name function like that of sqlite fix this? I'm thinking of something like:

    def quote_name(self, name):
        if name.startswith('"') and name.endswith('"'):
            return name  # Quoting once is enough.
        return '"%s"' % name

I guess this might be tricky to fix for reasons similar to this comment in the file: "We need to use DEFAULT_DB_ALIAS here, as QuerySet does not have (nor should it have) knowledge of which connection is going to be used."
https://code.djangoproject.com/ticket/25947
Features/Sugar3 Docs/How To Write

This will guide you through how to write docs for a module in sugar3. There are 5 easy steps to writing the documentation, and 3 bonus steps for getting it merged. Example patch: or

How To Guide

1. Write a code example for using the code. Try to write a standalone python script that demonstrates the major features of the module and put it in the examples directory in sugar-toolkit-gtk3. If it can not be demonstrated standalone, write a code sample that the user could paste into their activity, like the sugar3.graphics.alert example. When writing the example, add comments to explain what code is doing that might be non-obvious (e.g. that something is zero indexed, or that this function needs to be called first).

2. Write a blurb (a small paragraph, around 3 lines) for the module and include your example. Your blurb should say what this does and where to use it. Place this between the license and the imports. Including the example will always use the syntax .. literalinclude:: ../examples/SOMETHING.py. For example:

    '''
    The combobox module provides a combo box; a button-like widget which
    creates a list popup when clicked. It's best used outside of a toolbar
    when the user needs to choose from a *short* list of items.

    Example:

    .. literalinclude:: ../examples/combobox.py
    '''

3. Document each class. Write a blurb about what the class does and where to use it. If the constructor takes any parameters, or if the class has any signals, they need to be documented here. Place this directly after the class definition. For example:

    class PaletteMenuItem(Gtk.EventBox):
        '''
        A palette menu item is a line of text, and optionally an icon, that
        the user can activate. The `activate` signal is usually emitted when
        the item is clicked. It has no arguments. When a menu item is
        activated, the palette is also closed.

        Args:
            text_label (str): a text to display in the menu
            icon_name (str): the name of a sugar icon to be displayed.
                Takes precedence over file_name
            text_maxlen (int): the desired maximum width of the label, in
                characters. By default set to 60 chars
            xo_color (:class:`sugar.graphics.XoColor`): the color to be
                applied to the icon
            file_name (str): the path to a svg file used as icon
            accelerator (str): a text used to display the keyboard shortcut
                associated to the menu
        '''

4. Document the class methods and global functions. You should document what the method/function does and any side effects it has. If it takes arguments, they must have their types and function documented. The same goes for the return value. This should be placed after the definition. For example (the following are 2 separate methods from different classes):

    def set_image(self, icon):
        '''
        Sets the icon widget. Usually this will be a
        :class:`sugar3.graphics.icon.Icon`.

        Args:
            icon (:class:`Gtk.Widget`): icon widget
        '''
        self._hbox.pack_start(icon, expand=False, fill=False,
                              padding=style.DEFAULT_PADDING)
        self._hbox.reorder_child(icon, 0)

    def get_value(self):
        '''
        The value of the currently selected item; the same as the `value`
        argument that was passed to the `append_item` func.

        Returns:
            object, value of selected item
        '''
        row = self.get_active_item()
        if not row:
            return None
        return row[0]

5. Proofread. Use the 'make-doc.sh' script to build the docs. Then cd doc/_build/html and start an HTTP server (python -m SimpleHTTPServer 8000). Load localhost:8000 in your favorite web browser and check that your documentation has rendered correctly and makes sense. Make sure you have a spell checker in your text editor (the vim one is very nice as it only checks strings and docstrings), because I can't spell or check. SAMdroid (talk) I'm being hypocritical saying this, but I'll proof it soon :)

6. Write a commit message like "Write documentation for sugar3.path.module".

7. Create a pull request on GitHub.

8. Listen to whoever reviews your patch.

Syntax Information

The docgen uses sphinx. Sphinx uses reStructuredText as a markup.
We then use autodoc and napoleon to write docstrings in Google style. Napoleon quick intro: reStructuredText primer: How do I do X in sphinx?: I'm too lazy to find docs. Just duckduckgo or google "sphinx X" if you haven't already.
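As a tiny self-contained illustration of the Google docstring style that napoleon parses (the function here is made up purely for the example; it is not part of sugar3):

```python
def clamp(value, lower, upper):
    '''
    Return value limited to the inclusive range [lower, upper].

    This is a made-up helper, shown only to demonstrate the Args/Returns
    sections of a Google-style docstring.

    Args:
        value (int): the number to clamp
        lower (int): the smallest allowed result
        upper (int): the largest allowed result

    Returns:
        int, value if it lies in the range, otherwise the nearer bound
    '''
    return max(lower, min(value, upper))

print(clamp(75, 0, 60))   # prints 60
```

Running sphinx with napoleon enabled renders the Args and Returns sections of such a docstring as definition lists, without any raw reStructuredText field syntax in the source.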
http://wiki.sugarlabs.org/go/Features/Sugar3Docs/HowToWrite
I'm trying to install TensorRT 5 from the tar file on Ubuntu 16.04, with CUDA 9.0 and cuDNN 7.3.1. Once I pip installed the wheel file, I tried import tensorrt in a python shell, but get:

    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/usr/local/anaconda3/lib/python3.6/site-packages/tensorrt/__init__.py", line 1, in <module>
        from .tensorrt import *
    ImportError: /usr/local/anaconda3/lib/python3.6/site-packages/tensorrt/tensorrt.so: undefined symbol: _Py_ZeroStruct

Any idea on how to solve this? I used the tar file instead of the deb file for installation because the deb file always installs in the default python /usr/bin/python instead of the conda one.
https://forums.developer.nvidia.com/t/tensorrt-import-undefined-symbol/66518
My Google-fu isn't good enough for this one. For Learn Python3 The Hard Way Exercise 23, under Breaking It: 3) I sort of got that bit to work, but I had to cheat and put the actual byte string in the script and break the loop. Like so:

Some Terrible Code

    import sys
    script, input_encoding, error = sys.argv

    def main(language_file, encoding, errors):
        print(">>>> Entering main function")
        line = language_file  # .readline -- I'm not reading a line right now
        if line:
            print_line(line, encoding, errors)
            # return main(language_file, encoding, errors)
        print("<<<< Exiting main function")

    def print_line(line, encoding, errors):
        print(">> Entering print_line function")
        next_lang = line.strip()
        print(next_lang)
        cooked_string = next_lang.decode(encoding, errors=errors)
        raw_bytes = cooked_string.encode(encoding, errors=errors)
        print(cooked_string, "<====>", raw_bytes)
        print("<< Exiting print_line function")

    languages = b'\xd0\x90\xd2\xa7\xd1\x81\xd1\x88\xd3\x99\xd0\xb0'  # or another byte string
    # languages = open('bytes.txt')
    # languages = open("bytes.txt", 'r', 'b')
    # languages = open("bytes.txt", encoding="utf-8")
    # none of these lets python read the contents of "bytes.txt" as a byte string

    main(languages, input_encoding, error)

That works fine, but I can't quite figure out how, if I have a text file with a few lines of typed-out byte strings, to get python to read them as byte strings and not as strings. I tried changing the parameters of opening the file away from encoding="utf-8", I tried just a default open with no parameters and with 'r' and 'b' as parameters. I also, not displayed here, tried futzing with the actual input-encoding, but I'm not sure if that makes a difference.
Regardless, no matter what change I’ve made whenever I print next_lang rather than a byte string it prints out something like “b’\\xd0\\x90\\xd2\\xa7\\xd1\\x81\\xd1\\x88\\xd3\\x99\\xd0\\xb0’” Which does make total sense with the double slashes if that was a string, but I don’t want Python to read the lines as a string. I’m sure I’m missing some obvious way to Google this, but I’m not sure what I should be precisely Googling, which usually isn’t something I run into. A hint or slight point in the right direction is all I really need here.
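One way to see the difference (a sketch in the spirit of a hint, not necessarily the book's intended solution): if the file contains the raw bytes themselves, opening it in binary mode ('rb') makes readline() return bytes objects directly; if the file instead contains the typed-out literal text b'\xd0\x90...', each line really is a str, and ast.literal_eval can rebuild the actual bytes from it. The file path and sample text below are invented for the demonstration.

```python
import ast
import os
import tempfile

# Case 1: the file holds the raw UTF-8 bytes.  'rb' yields bytes, not str.
raw = '\u0410\u04a7\u0441\u0448\u04d9\u0430'.encode('utf-8')  # same bytes as the script above
fd, path = tempfile.mkstemp()
with os.fdopen(fd, 'wb') as f:
    f.write(raw + b'\n')

with open(path, 'rb') as f:          # note 'rb', no encoding= argument
    line = f.readline().strip()
print(type(line))                    # <class 'bytes'>
os.remove(path)

# Case 2: the file holds the typed-out literal.  The line is text, so the
# double backslashes you saw are expected; literal_eval rebuilds real bytes.
typed = r"b'\xd0\x90\xd2\xa7\xd1\x81\xd1\x88\xd3\x99\xd0\xb0'"
rebuilt = ast.literal_eval(typed)
print(type(rebuilt))                 # <class 'bytes'>
```

The doubled backslashes in the question's output are the giveaway for case 2: python was faithfully showing a str whose characters happen to spell out a bytes literal.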
https://forum.learncodethehardway.com/t/solved-lpthw3-exercise-23-extra-credit-3-reverse-script/71
Text files are not the only type of flat files you can use. You can also use binary files. Binary files are still flat files; however, they are in basic binary format rather than ASCII. Remember that everything in a computer is binary (1's and 0's). The binary format has each file stored in a byte-by-byte format. This produces some odd-looking results if you try to view binary files like plain text. For example, an integer might be represented by four separate characters, because it occupies four bytes. You might ask why anyone would want to open a binary file. There are a number of reasons, the most prominent being that there is still old data stored this way. However, one useful thing you can do is to open any file in binary mode and count the bytes in it, to see how big the file is. This next example demonstrates this.

Step 1: Enter the following code in your favorite text editor.

    #include <iostream>
    using namespace std;
    #include <fstream>

    int main ()
    {
        long start, end;  // Recall from chapter one that a long is simply
                          // a large integer
        ifstream myfile ("test.txt", ios::in|ios::binary);
        start = myfile.tellg();
        myfile.seekg (0, ios::end);
        end = myfile.tellg();
        myfile.close();
        cout << "size of " << "test.txt";
        cout << " is " << (end-start) << " bytes.\n";
        return 0;
    }

Step 2: Compile this code.

Step 3: Run the executable. You should see something like Figure 6.2.

Figure 6.2: Binary file size.

This code is relatively straightforward; let's take a look. To begin with, we open a file just as we would normally do, except we open it as binary, with the following line of code.

    ifstream myfile ("test.txt", ios::in|ios::binary);

Next, we retrieve the beginning point for the file.

    start = myfile.tellg();

And then we get the endpoint of the file, by first moving to the end of the file and then getting the position.

    myfile.seekg (0, ios::end);
    end = myfile.tellg();

The first line says to start at the 0 byte/character and search until the end of the file.
If you wanted to start at another point such as the 10th character or byte, you could simply put in a 10 where the 0 is. Now it’s just a matter of subtracting the beginning point from the endpoint and that’s how many bytes of data the file has in it. The process is not particularly complicated. However, you might be wondering about the ios you keep seeing. That is simply C++’s way of saying “input/output stream.”
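The same seek-to-end, subtract-the-start technique carries over to other languages. Here is the idea sketched in Python for comparison (the file and its contents are created on the fly just so the size is predictable):

```python
import os
import tempfile

# Create a throwaway file with known contents so the result is predictable.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, 'wb') as f:
    f.write(b'hello binary world')   # 18 bytes of sample data

with open(path, 'rb') as f:          # binary mode, like ios::binary
    start = f.tell()                 # 0: position of the first byte
    f.seek(0, os.SEEK_END)           # like seekg(0, ios::end)
    end = f.tell()                   # one past the last byte

print("size of", path, "is", end - start, "bytes")  # 18 bytes
os.remove(path)
```

As in the C++ version, the size is simply the end position minus the start position, with no need to read the file's contents at all.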
https://flylib.com/books/en/2.331.1.54/1/
creating global constant using strcat
It's easy if you use std::string. [code]const std::string str1 = "ABC"; const std::string str2 = "D...

Question about class member function initialization...
You misspelled ConstructLine on line 10.

UDP messages lost
[quote]My sender program connects to its own IP-addres[/quote] Instead of your own IP-address should...

Missing function header (old-style formal list?)
Standard headers don't end with [tt].h[/tt]. [code] #include <iostream>[/code]

Swap results in non-terminating recursion in Assignment Operator
std::swap will call the [i]copy assignment operator[/i] because you have not defined a [i]move assig...
http://www.cplusplus.com/user/Peter87/
This is the mail archive of the libstdc++@sources.redhat.com mailing list for the libstdc++ project. On Fri, Sep 22, 2000 at 11:00:24AM -0700, Steven King wrote: > > This wont compile with the current library because one of the headers > indirectly #included from locale is iconv.h which defines a function iconv. > This is not, to the best of my knowledge, a reserved symbol. It's not mentioned in the C++ standard, but I don't know which standard defines it. (C, POSIX, etc) I don't think it's C. I'm fairly certain it's "standard" but don't know whose. > That the > compilers error message, "`i' undeclared (first use this function)", doesnt > actually tell me whats really wrong doesnt help either. That is somewhat funky. What happens if you change the declaration to struct iconv *i = new struct iconv ("?"); I don't have a compiler to test with. > My thought is that any non-standard symbols used by the library be placed in an > implementation reserved namespace and if they are imported into namespace std, > be given an implementation reserved name. ie for bits/codecvt.h (the header > that #includes iconv.h) we might have something like [...example...] > The reason I'm bring this up, is at the very least it impacts the shadow header > layout. > > Any thoughts? Objections? Alternatives? We should first find out whether iconv should be treated as reserved. If it isn't, then your suggestion sounds good. Ironic that it would be in _C_legacy: iconv is for character conversion between things like old 7-bit ASCII and new-fangled schemes like Unicode, after all, and "legacy" code is more likely to *not* be using iconv. :-) Phil -- pedwards at disaster dot jaj dot com | pme at sources dot redhat dot com devphil at several other less interesting addresses in various dot domains The gods do not protect fools. Fools are protected by more capable fools.
http://gcc.gnu.org/ml/libstdc++/2000-09/msg00070.html
What will we cover in this tutorial

A high-level view of the differences between the NumPy and Pandas libraries in Python. We will also make a short exploration of the performance differences in a specific use case.

Top level differences between NumPy and Pandas

First of all, the purposes of these libraries are different.

- NumPy is made to manage n-dimensional numerical data. Think of it if you need to handle a lot of data, all of the same numerical type, categorized in columns and rows.
- Pandas is made for tabular data. This could be data from an excel sheet, where you have various types of data categorized in rows and columns.

There are more differences.

- NumPy consists of the data type ndarray, which is created with fixed dimensions and only one element type.
- Pandas consists of Series and DataFrames, which are more dynamic after creation.

Performance comparison of NumPy and Pandas

Which would you guess is faster? Pandas? Of course not. NumPy is a great deal faster than Pandas. Why? Let us first examine it.

    import time
    import numpy as np
    import pandas as pd

    size = 100
    iterations = 100000000//size

    a = np.arange(size)
    start = time.time()
    for _ in range(iterations):
        a2 = a * a
    end = time.time()
    print(end - start)

    n = pd.Series(a)
    start = time.time()
    for _ in range(iterations):
        n2 = n * n
    end = time.time()
    print(end - start)

Which results in the following comparison. I find it very interesting that the speed is so slow for small instances of Pandas, compared to NumPy, while later it seems to go to Pandas' advantage, but eventually it still seems to be NumPy. Well, the flexibility of Pandas has a cost, which is high for small instances when making arithmetic operations as we did in the above example.

Next steps

Investigate further how NumPy and Pandas compare in performance for various functions. Pandas and NumPy support a lot of functions in a vectorized way, which could be interesting to investigate.
Do the restrictions of NumPy arrays give the underlying C/C++ code an advantage in performance?
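As one concrete follow-up to the vectorized-function question above, here is a small sketch (NumPy only; the array size is chosen arbitrarily) comparing a Python-level loop against the equivalent vectorized operation. The two must produce identical results, which is what makes the timing comparison fair:

```python
import time
import numpy as np

size = 100_000
a = np.arange(size, dtype=np.float64)

# Python-level loop: one interpreter round-trip per element.
start = time.time()
looped = np.empty_like(a)
for i in range(size):
    looped[i] = a[i] * a[i]
loop_seconds = time.time() - start

# Vectorized: a single call into NumPy's compiled inner loop.
start = time.time()
vectorized = a * a
vec_seconds = time.time() - start

# Same answer either way; only the speed differs.
assert np.array_equal(looped, vectorized)
print(f"loop: {loop_seconds:.4f}s, vectorized: {vec_seconds:.6f}s")
```

On typical machines the vectorized version wins by a large factor, which is the "advantage of the underlying C/C++ code" the closing question asks about: the fixed dtype and contiguous layout of an ndarray let the multiplication run as one tight compiled loop.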
https://www.learnpythonwithrune.org/numpy-vs-pandas/
The QGridLayout class lays out widgets in a grid. More...

#include <qlayout.h>

Inherits QLayout.

List of all member functions.

The QGridLayout class lays out widgets in a grid. QGridLayout takes the space made available to it (by its parent layout or by the main window), divides it up into rows and columns, and puts each widget it manages into the correct cell. Each column has a minimum width and a stretch factor. The minimum width is the greatest of that set using addColSpacing() and the minimum width of each widget in that column. The stretch factor is set using setColStretch() and determines how much of the available space the column will get over and above its necessary minimum.

Normally, each managed widget or layout is put into a cell of its own using addWidget(), addLayout() or by the auto-add facility. It is also possible for a widget to occupy multiple cells using addMultiCellWidget(). If you do this, QGridLayout will guess how to distribute the size over the columns/rows (based on the stretch factors). To remove a widget from a layout, call remove().

You can set the minimum width of a column with addColSpacing() and its stretch factor with setColStretch(). If the grid is not a top-level layout, add it to its parent layout with parentLayout->addLayout(). Once you have added your layout you can start putting widgets and other layouts into the cells of your grid layout using addWidget(), addLayout() and addMultiCellWidget().

QGridLayout also includes two margin widths: the border and the spacing. The border is the width of the reserved space along each of the QGridLayout's four sides. The spacing is the width of the automatically allocated spacing between neighboring boxes. Both the border and the spacing are parameters of the constructor and default to 0.

See also QGrid, Layout Overview, Widget Appearance and Style, and Layout Management.

This enum identifies which corner is the origin (0, 0) of the layout.

margin is the number of pixels between the edge of the widget and its managed children. space is the default number of pixels between cells. If space is -1, the value of margin is used.

You must insert this grid into another layout. You can insert widgets and layouts into this layout at any time, but laying out will not be performed before this is inserted into another layout.
This grid is placed according to parentLayout's default placement rules. The layout's widgets aren't destroyed. Sets the minimum width of column col to minsize pixels. Use setColSpacing() instead. Adds item to the next free position of this layout. Reimplemented from QLayout. layout becomes a child of the grid layout. When a layout is constructed with another layout as its parent, you don't need to call addLayout(); the child layout is automatically added to the parent layout as it is constructed. See also addMultiCellLayout(). Examples: listbox/listbox.cpp, progressbar/progressbar.cpp, t10/main.cpp, and t13/gamebrd.cpp. The cell will span from fromRow, fromCol to toRow, toCol. Alignment is specified by alignment, which is a bitwise OR of Qt::AlignmentFlags values. The default alignment is 0, which means that the widget fills the entire cell. Alignment is specified by alignment, which is a bitwise OR of Qt::AlignmentFlags values. The default alignment is 0, which means that the widget fills the entire cell. A non-zero alignment indicates that the layout should not grow to fill the available space but should be sized according to sizeHint(). layout becomes a child of the grid layout. See also addLayout(). Alignment is specified by alignment, which is a bitwise OR of Qt::AlignmentFlags values. The default alignment is 0, which means that the widget fills the entire cell. A non-zero alignment indicates that the widget should not grow to fill the available space but should be sized according to sizeHint(). See also addWidget(). Examples: cursor/cursor.cpp, layout/layout.cpp, and progressbar/progressbar.cpp. Sets the minimum height of row row to minsize pixels. Use setRowSpacing() instead. Alignment is specified by alignment, which is a bitwise OR of Qt::AlignmentFlags values. The default alignment is 0, which means that the widget fills the entire cell. See also addMultiCellWidget(). 
Examples: addressbook/centralwidget.cpp, layout/layout.cpp, rot13/rot13.cpp, sql/overview/form1/main.cpp, sql/overview/form2/main.cpp, t14/gamebrd.cpp, and t8/main.cpp. Warning: in the current version of Qt this function does not return valid results until setGeometry() has been called, i.e. after the mainWidget() is visible. See also setColSpacing(). See also setColStretch(). Reimplemented from QLayout. Note: if a widget spans multiple rows/columns, the top-left cell is returned. Reimplemented from QLayoutItem. Reimplemented from QLayoutItem. Reimplemented from QLayout. Reimplemented from QLayout. Reimplemented from QLayout. See also setRowSpacing(). See also setRowStretch(). See also colSpacing() and setRowSpacing(). See also colStretch(), addColSpacing(), and setRowStretch(). Examples: layout/layout.cpp, t14/gamebrd.cpp, and t8/main.cpp. Reimplemented from QLayout. See also rowSpacing() and setColSpacing(). See also rowStretch() and setColStretch(). Examples: addressbook/centralwidget.cpp and qutlook/centralwidget.cpp. Reimplemented from QLayoutItem. This file is part of the Qt toolkit. Copyright © 1995-2005 Trolltech. All Rights Reserved.
http://doc.trolltech.com/3.3/qgridlayout.html
The Files class in the JDK has a getAttribute() method that accepts a Path object and returns a specified attribute.

    import java.nio.file.Files
    import java.nio.file.Paths

    fun main(args: Array<String>) {
        val path = if (args.isEmpty()) {
            Paths.get(System.getProperty("user.dir"))
        } else {
            Paths.get(args[0])
        }
        if (Files.isDirectory(path)) {
            Files.list(path).forEach({ it ->
                // getAttribute() asserts non-null, so we can add compiler null
                // checks by declaring a nullable type if desired
                val creationTime = Files.getAttribute(it, "creationTime")
                val lastModified = Files.getAttribute(it, "lastModifiedTime")
                val size = Files.getAttribute(it, "size")
                val dir = Files.getAttribute(it, "isDirectory")

                println("${it.fileName}")
                println("Creation Time => $creationTime")
                println("Last Modified => $lastModified")
                println("Size => $size")
                println("Directory => $dir")
                println()
            })
        } else {
            println("Please enter a path to a directory")
        }
    }

The demonstration of Files.getAttribute() is in the body of the forEach lambda. In each case, the getAttribute() method accepts a Path object, the string name of the attribute, and optional LinkOptions. The value returned is of type Any, and Kotlin treats it as an assert-non-null platform type. Since the names of the attributes are strings, there is also the possibility that an exception could get thrown.

The getAttribute() method can be convenient, but it has shortcomings. Since it is a Java method, Kotlin interprets it as returning an assert-non-null type. This may or may not be a problem. The javadoc makes no mention of a null return type, so it's most likely safe to assume non-null types. However, if compiler checks are wanted or desirable, then we can declare nullable types. More troubling is the fact that Strings are used for the attribute names. We have no protection against a typo such as "sizes" when we meant "size". We will get a runtime exception in the event that the attribute doesn't exist.
Always check the javadoc prior to using the getAttribute() method to make sure the attribute is available first, keeping in mind that some attributes are platform specific. Note that there is also a readAttributes() method, found in an upcoming post, that offers much better type safety than getAttribute(). However, getAttribute() is useful in the case that we wish to check only one specific attribute (such as size), without having to use additional objects.

References: Files.getAttribute(java.nio.file.Path, java.lang.String, java.nio.file.LinkOption...)
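For a cross-language comparison (this is not part of the Kotlin example), Python exposes the same kind of file metadata through os.stat, with typed result fields instead of string attribute names, which sidesteps the typo problem described above; a misspelled field raises an AttributeError at access time rather than failing somewhere in the filesystem layer:

```python
import os
import tempfile

# Create a small file so the numbers below are predictable.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, 'wb') as f:
    f.write(b'12345')

info = os.stat(path)            # typed result object, no string lookups
print(info.st_size)             # 5 -- size in bytes, like "size" above
print(info.st_mtime)            # last-modified time, like "lastModifiedTime"
print(os.path.isdir(path))      # False -- like the "isDirectory" attribute

os.remove(path)
```

The trade-off mirrors the Kotlin discussion: the typed interface is safer, while the string-keyed one is more uniform across platforms and attribute views.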
https://stonesoupprogramming.com/2017/12/01/koltin-files-getattribute/
NAME | SYNOPSIS | DESCRIPTION | OPTIONS | EXAMPLES | ENVIRONMENT VARIABLES | ATTRIBUTES | SEE ALSO | DIAGNOSTICS | NOTES

The nisgrpadm utility is used to administer NIS+ groups. This command administers both groups and the groups' membership. Principal names must be fully qualified, whereas groups can be abbreviated on all operations except create.

The following options are supported:

Adds the list of NIS+ principals specified to group. The principal name should be fully qualified.

Creates group in the NIS+ namespace. The NIS+ group name should be fully qualified.

Destroys (removes) group from the namespace.

Lists the membership list of the specified group. (See -M option.)

Master server only. Sends the lookup to the master server of the named data. This guarantees that the most up-to-date information is seen, at the possible expense that the master server may be busy. Note that the -M flag is applicable only with the -l flag.

Removes the list of principals specified from group. The principal name should be fully qualified.

Work silently. Results are returned using the exit status of the command. This status can be translated into a text string using the niserror(1) command.

Displays whether the principals specified are members in group.

This example shows how to create a group in the foo.com. domain:

This example shows how to remove the group from the current domain.

This example shows how one would add two principals, bob and betty, to the group my_buds.foo.com.:

This example shows how to remove betty from freds_group:

This variable contains a defaults string that will override the NIS+ standard defaults. If this variable is set, and the NIS+ group name is not fully qualified, each directory specified will be searched until the group is found (see nisdefaults(1)).
See attributes(5) for descriptions of the following attributes:

nis+(1), nischgrp(1), nischmod(1), nischttl(1), nisdefaults(1), niserror(1), nis_groups(3NSL), attributes(5)

On success, this command returns an exit status of 0.

When you do not have the needed access right to change the group, the command returns this error.

This is returned when the group does not exist.

This error is returned when the server for the group's domain is currently checkpointing or otherwise in a read-only state. The command should be retried at a later date.
http://docs.oracle.com/cd/E19683-01/816-5211/6mbcci3hu/index.html
Perforce Defect Tracking Integration Project

This manual is the Perforce Defect Tracking Integration 1.0 Administrator's Guide. It explains how to install, configure, maintain, and administer the Perforce Defect Tracking Integration (P4DTI). It should be read alongside the Perforce Command Line User's Guide. Perforce has a mechanism for linking jobs to changelists.

To administer the P4DTI, you must have the following experience:

Before installing the P4DTI, you must obtain and install the following software:

Before installing the P4DTI, you must do the following:

Before installing the P4DTI, you must obtain and install the following software:

Before installing the P4DTI, you must do the following:

Before installing the P4DTI, you must obtain and install the following software: the syslog module. An RPM of Python 1.5.2 for RedHat Linux 6.2 is available from <>. An RPM of Python 1.5.2 for RedHat Linux 7.0 is available from <>.

Before installing the P4DTI, you must do the following. For instructions on how to upgrade from an earlier version of the P4DTI, see the release notes.

The P4DTI is distributed as a self-extracting executable called p4dti-DT-RELEASE.exe (where DT is the defect tracker, such as "teamtrack", and RELEASE is the release number, such as "1.0.2"). To install the P4DTI, run this executable on the machine where the defect tracker server is installed. The installer unpacks the P4DTI into C:\Program Files\P4DTI\ by default.

The P4DTI is also distributed as an RPM called p4dti-DT-RELEASE-1.i386.rpm (where DT is the defect tracker, such as "bugzilla", and RELEASE is the release number, such as "1.0.2"). To install the P4DTI, run the following command as root on the defect tracker server machine:

    rpm -i p4dti-DT-RELEASE-1.i386.rpm

This installs the P4DTI (in this example, release "1.0.2").

administrator_address
Description: The e-mail address of the P4DTI administrator.
Example: "p4dti-admin@company.domain"
The replicator sends error reports to this address. If this is None, then the replicator will never send e-mail.
bugzilla_directory
Description: Bugzilla only. The directory in which Bugzilla is installed, or None if you don't want e-mail processed.

changelist_url
Description: A format string used to build a URL for change descriptions. Specify None if there is no URL for change descriptions.
Example: ""
The string is a format string valid for passing to sprintf(); it must have one %d format specifier, for which the change number is substituted. (Note that because it gets passed to sprintf(), you must specify other percent signs twice.) Defect trackers that support this feature list the changelists that fix each issue, and make a link from each changelist to this URL, with the change number substituted. If you are using perfbrowse, then a valid format string looks like "". If you are using P4Web, then a valid format string looks like "".

closed_state
Description: The defect tracker state that maps to the "closed" state in Perforce. Specify None if you want the ordinary state mapping rules to apply.
Example: "Resolved" in TeamTrack; "RESOLVED" in Bugzilla.
Mapping the defect tracker state that developers use most often to the "closed" state in Perforce makes using the P4DTI easier for the developers, because the Perforce user interfaces make it easier to fix a job to "closed" than any other state. If you are using TeamTrack and your workflow already has a state called "Closed", then that state must map to "closed" in Perforce; set this variable to None. The "CLOSED" state in Bugzilla maps to "bugzilla_closed" in Perforce.

dbms_host
Description: Bugzilla only. The host on which the Bugzilla MySQL server is running.
Example: "localhost"
Set this value to "localhost" if the P4DTI and the Bugzilla MySQL server run on the same machine.

dbms_database
Description: Bugzilla only. The name of the MySQL database in which Bugzilla stores its data.
Example: "bugs"
Normally set to "bugs" during Bugzilla installation (see the Bugzilla README file).
Change this setting only if you have set up Bugzilla differently.

dbms_password
Description: Bugzilla only. The password that the replicator uses to log in to MySQL to use the Bugzilla database.
Example: ""
Bugzilla normally logs in with no password (see the Bugzilla README file). Change this setting if you have configured Bugzilla differently, or you want to set up the replicator to log in as a different user and use a password.

dbms_port
Description: Bugzilla only. The port number on which the Bugzilla MySQL server listens on the database host (dbms_host).
Example: 3306
MySQL normally listens on port 3306. Change this setting only if you have set up MySQL differently. Note that this parameter is expressed as a number, not as a string.

dbms_user
Description: Bugzilla only. The user name that the replicator uses to log in to MySQL to use the Bugzilla database.
Example: "bugs"
Bugzilla normally logs in to MySQL as user "bugs" (see the Bugzilla README file). Change this setting only if you have configured Bugzilla differently, or if you want to set up the replicator to log in as a different user.

dt_name
Description: The name of the defect tracking system you're integrating with. Either "TeamTrack" or "Bugzilla".
Example: "TeamTrack"
Make sure that this variable is set to the appropriate value for your defect tracker.

log_file
Description: The name of the replicator's log file. If log messages should not be sent to a file, specify None.
Example: "C:\\Program Files\\P4DTI\\p4dti.log"
The replicator generates log messages to record its actions. These log messages are sent to all of the following locations:

log_level
Description: The minimum priority level of messages to log. Messages with this priority or a higher priority will appear in the replicator's log.
Example: message.INFO
(Type p4 -V to check the Perforce client version.)

p4_password
Description: The password the replicator uses to log in to the Perforce server.
If there is no password, specify "" (empty quotes).
Example: ""
For information about how the replicator logs in to Perforce, see section 5.2, "Perforce configuration".

p4_port
Description: The address and port of the Perforce server with which the replicator communicates.
Example: "perforce.company.domain:1666"

replicate_p
Description: A function that selects which issues to start replicating. Normally, the P4DTI replicates all issues created or modified after the start_date, but you can modify this function to further restrict the issues. Some Python programming is required.
Example for TeamTrack that restricts replication to issues belonging to the project whose ID is 6:

    def replicate_p(self):
        return self['PROJECTID'] == 6

Example for Bugzilla that restricts replication to unresolved issues in the "nosebag" product:

    def replicate_p(self):
        return self.bug['product'] == 'nosebag' and self.bug['resolution'] == ''

Note that once an issue starts being replicated it remains replicated, even if it no longer matches the criteria.

replicated_fields (for Bugzilla)
For advice on which fields to replicate, see section 5.1.1, "Choosing which fields to replicate".

replicated_fields (for TeamTrack)
Description: A list of the database names of TeamTrack fields that are replicated in Perforce. The fields STATE, OWNER, and TITLE are always replicated, so omit those fields when setting this variable.
Example: ["DESCRIPTION", "PRIORITY", "SEVERITY"]
For advice on which fields to replicate, and how to find out their database names, see section 5.1.1, "Choosing which fields to replicate". If a field is removed from this list, it will stop being replicated.

smtp_server
Description: The address of the SMTP server that the replicator uses to send e-mail.
Example: "smtp.company.domain"
If this is None, then the replicator will never send e-mail.

start_date
Description: The starting point in time for replication.
Example: "2001-02-10 00:00:00"
Issues modified after this date will be replicated; issues unchanged after this date will be ignored.
Must be a string in the form "YYYY-MM-DD HH:MM:SS".

teamtrack_password
Description: TeamTrack only. The password that the replicator uses to log into TeamTrack. If there is no password, specify "" (empty quotes).
Example: ""
See section 5.3.2, "Creating a TeamTrack user for the replicator".

teamtrack_server
Description: TeamTrack only. The TeamTrack server hostname and (optionally) port with which the replicator communicates.
Example: "teamtrack.company.domain:80"
(Note that "localhost" won't work, even if the TeamTrack server is on the local host.)

teamtrack_user
Description: TeamTrack only. The user name that the replicator uses to log into TeamTrack.
Example: "P4DTI-replicator0"
See section 5.3.2, "Creating a TeamTrack user for the replicator".

Here's some advice on which fields to replicate:

If you're using TeamTrack's sample database, you might want to replicate the following fields:

To find out the database name of a TeamTrack field, follow these steps:

If you're using Bugzilla, you might want to replicate the following fields:

If you're using Bugzilla, the replicator rejects the following types of changes from within Perforce:

The following table lists the field names for Bugzilla 2.10. If you have modified Bugzilla, your field names may differ. To display the set of Bugzilla field names, type mysqlshow bugs bugs at a shell prompt.

Table 2. Bugzilla field names

The replicator logs in to Perforce as the user (p4_user) that you specified in the P4DTI configuration.

You must delete all jobs from your Perforce installation. The P4DTI takes over the jobs subsystem of Perforce and rewrites the Perforce jobspec. For instructions, see the Perforce Command Line User's Guide.

To configure TeamTrack, you must:

You need to add a TeamTrack value to the Windows Registry to tell TeamTrack that the P4DTI is present. To do this, double-click the p4dti.reg file that comes with the P4DTI (it's installed in c:\program files\p4dti\p4dti.reg by default).

You need to create a TeamTrack user for the replicator.
This user corresponds to the replicator's TeamTrack userid (the teamtrack_user parameter) you set in section 5.1, "P4DTI configuration". To create a TeamTrack user for the replicator, follow these steps:

For information on getting a license from TeamShare for this extra user, see section 3.3, "TeamTrack prerequisites".

The replicator uses TeamTrack issue field descriptions as the source for the Perforce job field descriptions. These job field descriptions appear in comments in every job form (if you're using the Perforce command line) and as tooltips for the fields in the job editing dialog (if you're using P4Win, the Perforce Windows GUI).

To configure Bugzilla, you must:

You need to make some minor modifications to the Bugzilla code so that users can see Perforce information on Bugzilla bug forms. These modifications are distributed as a patchfile for version 2.10 of Bugzilla. The patch changes the following Bugzilla files: bug_form.pl, which adds a Perforce section to the bug form, and defparams.pl, which adds a parameter to control whether or not the Perforce section appears. These changes are small and self-contained. If your changes do not affect these two files or only affect them in minor ways, the patch should operate correctly. If the patch program fails because of your Bugzilla modifications, it might still be possible to introduce the changes by hand. If you cannot apply the patch, the replicator might still work, but the Bugzilla bug form will not show Perforce-specific information (for example, changelists that are linked to the bug by a "fix"). The operation of the replicator itself is affected only if you have made drastic changes to Bugzilla (for example, if you have completely removed the "bug_status" column from the "bugs" table).

To apply the patch, follow these steps (where p4dti-install-dir is your P4DTI installation directory):
patch < p4dti-install-dir/bugzilla-2.10-patch

To enable the extensions, follow these steps:

To disable use of the P4DTI from the Bugzilla user interface, switch the extensions off.

To start the replicator, follow these steps from the operating system command line:

    python run.py

Alternatively, on Linux you may wish to start the script using the automatic startup script:

    /etc/rc.d/init.d/p4dti start

The first time you start the replicator, it displays log output explaining how the replicator is setting up the defect tracker schema extensions, as shown in the following figure. Each log entry consists of the date of the entry, a message identifier, and the message text.

During its startup sequence, the replicator creates Perforce jobs corresponding to every defect tracker issue created or modified after the start date (start_date). It then polls for changes every poll_period seconds and replicates those changes. Figure 6 shows typical replicator log output when it is replicating a change.

To stop the replicator, follow these steps:

If you installed the P4DTI using the Linux RPM as described in section 4.3, "Linux installation", a startup script is automatically created in the /etc/rc.d/init.d directory, so that the replicator starts when the machine is booted. On Solaris or other Unixes, you might want to adapt the Linux startup script. It is in the file named startup-script in the installation directory. On Windows, you must manually start the P4DTI whenever you reboot the machine. You might want to make this easier by creating a shortcut in the "Startup" folder of the "Start" menu of the Administrator account.

Migrating your Perforce jobs to the defect tracker is not a straightforward operation. The strategy for migration is to convert Perforce jobs and Perforce fixes to the defect tracker, delete all the Perforce jobs and fixes, then replicate them back from the defect tracker. You'll need some defect tracker expertise in order to set up the defect tracker to be ready for the migration.
You'll need to work out how the fields in your Perforce jobs correspond to the fields in your defect tracker's cases. You'll need to do some Python programming to specify how the conversion should take place. You may also need to edit the defect tracker's database using a database application (for example, Microsoft Access).

You must stop and re-start the replicator as described in section 5.5, "Starting the replicator manually" after changing any of the configuration parameters described in section 5.1, "P4DTI configuration". You must also stop and re-start the replicator after adding new users to your defect tracker or changing a user's userid or e-mail address. You might then need to edit jobs which mention that user in a field. The p4 jobs -e command can be used to search for text in the Perforce jobs.

Re-replication deletes all the existing jobs in Perforce and replicates them from the defect tracker's database. This procedure is useful after such changes.

An error message beginning "Bugzilla database error" indicates a problem with the Bugzilla database.

Preliminary checking of the parameters set in config.py has found a problem. Correct the named parameter and start the P4DTI again.

You are running a version of Bugzilla with different bug statuses from those in Bugzilla 2.10. The P4DTI will use the following translation table for the default Bugzilla statuses: Alternatively, if the closed_state parameter is 'CLOSED' or None, the P4DTI will use that state. For advice on which fields to replicate, see section 5.1.1. See (P4DTI-4067).

Reduce the number of fields that you replicate by removing items from the replicated_fields parameter.

The P4DTI chooses the names of states of Perforce jobs based on the state names in TeamTrack. It uses the following mapping system: Resolve this problem by making the state names distinct in TeamTrack. Do not use spaces at the beginning of state names. See (P4DTI-301X).

You can specify a list of fields for the P4DTI to replicate into jobs; for details, see the replicated_fields parameter. This error means that the P4DTI couldn't find one of the fields in the list.
This problem might happen if you change the set of fields in TeamTrack. Double-check the field names you specified in the replicated_fields parameter. If you're changing fields in TeamTrack, see section 9, "Maintaining the P4DTI", for important information. See (P4DTI-3111). See (P4DTI-3122).

The P4DTI doesn't support all TeamTrack field types. One of the fields in your replicated_fields parameter has an unsupported type. To support it, a subclass of replicator.translator is needed to handle the field type (for the existing translators, see dt_teamtrack.py). For instructions on how to extend the P4DTI, and how to contribute your extensions back to the community, see the Perforce Defect Tracking Integration Integrator's Guide.

Perforce uses the field "code" to pass internal status information to clients. In TeamTrack, change the logical name of the field to something other than "code" by following these steps: See (P4DTI-3177).

A Perforce user has made a change to a bug which Bugzilla would not allow them to view. See section 5.1.1, "Choosing which fields to replicate".

A Perforce user has changed the long description text in some way other than appending to it. See section 5.1.1, "Choosing which fields to replicate", and section 5.4.2, "Creating a Bugzilla user for the replicator".

You have changed a user field in a job to a Perforce user who does not have a Bugzilla user record. The replicator is unable to replicate that field back to Bugzilla.

A user in Perforce has changed the state of a job in an illegal fashion (for example, changing the state from "assigned" to "verified", bypassing the state "resolved"). The user should go back to Perforce and change the state legally.

Someone has reconnected the TeamTrack server to a new TeamTrack database without first stopping the P4DTI. Either reconnect to the old TeamTrack database or restart the P4DTI.

You're running an old version of TeamTrack that isn't supported by the P4DTI. Upgrade your TeamTrack server to a supported version. See section 3.3.1, "TeamTrack software prerequisites".
A user in Perforce changed the state of a job to a state that is not legal for the project to which the job belongs. (Note that the state is legal in some other project, otherwise it wouldn't be possible to set the job to that state.) The user should go back to Perforce and set the job to a state that is legal for its project.

The Perforce client executable specified by the p4_client_executable parameter is an old version not supported by the P4DTI. Install a supported version (see section 3.2.1, "Perforce software prerequisites") and set the p4_client_executable parameter to name it.

You don't have a licence for the replicator. See section 3.2.1, "Perforce software prerequisites". You will see this error if you changed the p4_user parameter but didn't delete the old userid.

You haven't given the replicator permission to edit the Perforce jobspec. The replicator needs to have superuser privileges in Perforce. For instructions, see section 5.2.1, "Creating a Perforce user for the replicator".

You are running an old version of the Perforce server that is not supported by the P4DTI. Upgrade to a supported version; see section 3.2.1, "Perforce software prerequisites".

We've seen this error when our SMTP server was refusing connections. Check your smtp_server parameter.
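The last class of errors above comes from SMTP connectivity. Since the replicator itself is written in Python, a small script is a quick way to sanity-check the smtp_server value before restarting the P4DTI. The helper below is our own sketch, not part of the P4DTI distribution:

```python
import smtplib

def smtp_reachable(server, port=25):
    """Return True if an SMTP server answers on server:port, False otherwise."""
    try:
        # Connecting in the constructor raises OSError on refusal,
        # timeout, or DNS failure.
        connection = smtplib.SMTP(server, port, timeout=10)
        connection.quit()
        return True
    except OSError:
        return False

# The reserved .invalid TLD never resolves, so this reports False.
print(smtp_reachable("smtp.host.invalid"))
```

If this prints False for your real smtp_server value, fix the server address (or your network) before blaming the replicator.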
http://www.ravenbrook.com/project/p4dti/release/1.1.1/ag/
Set Up Salesforce DX

Learning Objectives

- Describe how the model for traditional org development differs from modular package development.
- Describe the key characteristics of a package.

To learn how to migrate your existing dev processes to the package development model, head over to the Package Development Model module. But enough with the chit-chat. Are you ready to start getting hands-on? Let’s get started setting up your environment and introducing you to some new and some familiar tools.

Do Scratch Orgs Replace Sandboxes?

No. Scratch orgs aren’t meant to be replications of sandboxes or production orgs. Due to their ephemeral nature (and maximum 30-day lifespan), scratch orgs are perfect for developing a new feature, customization, or package. And they work great for unit testing and continuous integration. Sandboxes, which contain all the metadata of your production org, are still necessary for final user-acceptance testing, continuous delivery, and staging.

So far, so good? Read on.

You can also make any paid org your Dev Hub and grant access to developers. Get the details in the Salesforce DX Setup Guide. Now that you have a Dev Hub org, let’s set up the rest of your Salesforce DX tools.

Install the Command Line Interface (CLI)

- Install the CLI.

Let’s make sure the CLI is properly installed and you know how to access online help for the commands.

- In a command window, enter sfdx. You’ll see something like this:

    Usage: sfdx COMMAND [command-specific-options]

    Help topics, type "sfdx help TOPIC" for more details:

      sfdx force    # tools for the salesforce developer
      sfdx plugins  # manage plugins
      sfdx update   # update sfdx-cli

Here are some other helpful commands to get you started:

OK, you’re well on your way. Now let’s continue installing the rest of the Salesforce DX tooling.

Log In to the Dev Hub

To get started, log in to the Dev Hub using the CLI, so you’re authorized to create scratch orgs.
You can use sfdx force:auth:web:login to log in to various orgs, and we’ve provided some options to help you manage those orgs.

    sfdx force:auth:web:login -h
    sfdx force:auth:web:login --help

- To authorize the Dev Hub, use the web login flow:

    sfdx force:auth:web:login -d -a DevHub

Adding the -d flag sets this org as the default Dev Hub. Use -a to set an alias for the org (something catchy like DevHub). An alias is much easier to remember than the unique Dev Hub username.

- Log in with your credentials. Once successful, the CLI securely stores the token along with the alias for the org, in this example, DevHub.

You can close the Dev Hub org at any time and still create scratch orgs. However, if you want to open the Dev Hub org to look at active scratch orgs or your namespace registry, the alias comes in quite handy:

    sfdx force:org:open -u DevHub

A Bit More on Org Management

It’s likely you have many orgs, including sandboxes and your production org. With the CLI, you can also log in to them using these commands. When you log in to an org using the CLI, you add that org to the list of orgs that the CLI can work with in the future.

Log In to Sandboxes

If you create an alias for the sandbox (-a option), you can reference it by this alias instead of its long and often unintuitive username. For example:

    sfdx force:auth:web:login -r -a FullSandbox
    sfdx force:auth:web:login -r -a DevSandbox

The Power of Aliasing

As you might imagine, aliasing is a powerful way to manage and track your orgs, and we consider it a best practice. Why? Let’s look at scratch org usernames as an example. A scratch org username looks something like test-7emx29rtpx0y@example.com. Not easy to remember. So when you issue a command that requires the org username, using an alias for the org that you can remember can speed things up.
    sfdx force:org:open -u FullSandbox
    sfdx force:org:open -u MyScratchOrg
    sfdx force:limits:api:display -u DevSandbox

View All Orgs

At any point, you can run the command sfdx force:org:list to see all the orgs you’ve logged in to. Adding the --verbose option provides you even more info. Now you’re really ready to get going. Let’s go build a new app with the Salesforce CLI and scratch orgs.
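Once several orgs are aliased, it can be handy to drive the CLI from scripts. Here is a minimal Python sketch; it is our own helper, not Salesforce tooling, and it assumes the sfdx CLI from this unit is on your PATH and that the command you run supports the --json flag:

```python
import json
import subprocess

def sfdx_argv(*args):
    """Build the argv for an sfdx invocation, asking for JSON output."""
    return ["sfdx", *args, "--json"]

def sfdx(*args):
    """Run an sfdx command and return its parsed JSON result."""
    result = subprocess.run(
        sfdx_argv(*args), capture_output=True, text=True, check=True
    )
    return json.loads(result.stdout)

# For example, list every org you have logged in to:
#   orgs = sfdx("force:org:list")
```

Splitting argv construction from execution keeps the command assembly testable without a Salesforce org handy.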
https://trailhead.salesforce.com/en/content/learn/modules/sfdx_app_dev/sfdx_app_dev_setup_dx
Introduction

With the latest release of C# ASP .NET 3.5, Visual Studio 2008, and LINQ, there is a whole new way of working with the data layer in C# ASP .NET web applications. Prior to LINQ, many developers would either generate or custom-code a data layer based upon the usage of SqlConnection, SqlCommand, and DataReader objects to manipulate data. With LINQ and LINQ to SQL, we no longer need to rely on the details of the connection objects and can instead begin thinking more abstractly by using the pre-generated LINQ business objects and the data access layer. In this manner, LINQ has brought a conformity to the data layer in ASP .NET web applications. However, as with any new technology, new architectures often arise to support them. As we continue on, you'll see how to implement your own tiered architecture, specifically for populating datasource controls. In this article, we'll visit three ways to populate a DropDownList control by binding to a data source. We'll cover the simple, the direct, and the 3-tier enterprise method.

Simple as a Text File

The first method for populating an ASP .NET DropDownList with LINQ is by indicating directly in the ASPX file itself, where the datasource is and what fields to bind to. We can do this in a .NET 3.5 web application through the use of the LinqDataSource control.

Binding with the LinqDataSource tag:

    <%-- attribute values shown here are illustrative --%>
    <asp:DropDownList ID="lstWeapons" runat="server"
        DataTextField="Name" DataValueField="WeaponId"
        DataSourceID="LinqDataSource1" />

    <asp:LinqDataSource ID="LinqDataSource1" runat="server"
        ContextTypeName="DataContext" TableName="Weapons" />

What we have above is a basic DropDownList control placed on the web page. Notice that we pre-set the DataTextField and DataValueField properties to indicate the fields we wish to display. We also included an additional property, called DataSourceID, which specifies the location of the data source to actually pull data from. In our case, the data source will be a LinqDataSource control. The second tag is the LinqDataSource control, specific to .NET 3.5. Your web application must target the 3.5 framework in order to have this tag available.
To use the LinqDataSource tag, you simply specify the ContextTypeName, which is the fully qualified name to your LINQ to SQL data context. You then specify the TableName within the context to pull the data from. LINQ will automatically pull the appropriate fields that you specified in the DropDownList tag.

What's Good About Simple?

The LinqDataSource tag is a very straight-forward method for binding to a DropDownList. It requires no code-behind work. It's easy to use, easy to understand, and extremely easy to update. In fact, since the code exists in the ASPX file itself, which is really just a glorified HTML file, we can easily make changes to the data binding source, while the web application is running, without having to recompile source code or DLLs. This can be an advantage if you may be frequently changing data sources or need this kind of control over the data.

What's Bad About Simple?

While simple is always a good thing, it's not always right. Depending on your project's architecture, binding data directly to LINQ in your ASPX page may violate the tiered layers and break boundaries in your design. Worse than that, the tag itself doesn't provide you with any type of business object that could be used elsewhere in the web application, such as passing to a web service, using in composition, design patterns, etc. While it was easy to insert the tag and bind to data, code reuse in this situation is limited.

Writing Some Code with Direct LINQ Binding

The second way for binding a DropDownList to a LINQ to SQL table is through directly binding to the LINQ table itself. This is almost exactly the same as the simple example above, with the exception of leaving out the DataSourceID field, since we'll be directly populating that ourselves, in the code-behind.

Binding directly to a LINQ table:

    <%-- attribute values shown here are illustrative --%>
    <asp:DropDownList ID="lstWeapons" runat="server"
        DataTextField="Name" DataValueField="WeaponId" />

Our tag is just a simple DropDownList control with the text and value field names specified. The names should match those in the LINQ table you plan to bind to.
The difference between directly binding to LINQ and the LinqDataSource tag, is that now we need to write a little in the code-behind to bind the data. Specifically, you'll need to add the following code to your web application's Page_Load function.

    protected void Page_Load(object sender, EventArgs e)
    {
        if (!IsPostBack)
        {
            // Binding directly to a LINQ business object.
            lstWeapons.DataSource = CommonManager.GetWeapons();
            lstWeapons.DataBind();
        }
    }

Assuming you have a separate manager tier for performing business tasks (outside of the user interface), we'll call this the CommonManager, you would define the following function for GetWeapons().

    public class CommonManager
    {
        public static List<Weapon> GetWeapons()
        {
            using (DataContext context = new DataContext())
            {
                return context.Weapons.ToList();
            }
        }
    }

Notice in the GetWeapons() function, all we're really doing is calling the ToList() function on the Weapons table in LINQ. The context object, DataContext, is the name of the LINQ generated context, which contains our tables. This was added by LINQ to SQL during the generation process. Since the ToList() function returns a bindable collection, we can bind directly to the DataSource of the control and have the data display properly.

What's Good About Direct Binding to LINQ?

Unlike the simple method, we're actually writing some code-behind to pull the data from LINQ and display it in the control. This allows us more control over how the data is displayed, processed, and handled. For instance, we could easily include some pre-processing in the GetWeapons() function to insert a blank option before the data (if you wanted the DropDownList to show a blank item initially), or other data manipulation. We're also preserving a data tier in our architecture, namely the Business Logic tier.
Instead of having the LINQ access code directly in the ASPX file, we've moved the logic into the second tier, leaving UI functionality strictly to the ASPX code-behind, and the LINQ plumbing hidden in the tier.

What's Bad About Direct Binding to LINQ?

While we're getting closer to an enterprise architecture, we still only have two tiers (not including LINQ's own tier), and we're not actually using a business object. Instead, we're using LINQ's generated business objects and binding directly to these. In the case where you need more code reuse, such as the ability to pass around the business object, binding directly to LINQ prevents you from gaining this. You could certainly pass the LINQ object throughout your layers, but it may come with a price. The LINQ generated objects tend to be quite heavy, carrying connections and data manipulation functions with them. We would also need to define a GetObjects() function for every table we need to return to populate a DropDownList control.

Yet Another Way for the Enterprise in All of Us

The third way for binding a DropDownList to a LINQ to SQL table is through the use of custom light-weight business objects and reflection. There is a specific distinction between a light-weight business object and LINQ's generated business objects. The light-weight object is custom made by the developer and it models the database table. However, it only contains the properties and business methods required for the application. The connections, connectivity functions, and other generated properties are left in LINQ's objects. While this method requires a little more coding to achieve, you'll find that the flexibility you gain with code reuse can be well worth the time.

Binding through a light-weight business object:

    <%-- the control ID and attribute value are illustrative --%>
    <asp:DropDownList ID="lstMonsters" runat="server"
        AssemblyName="BusinessObjects.MonsterType, BusinessObjects" />

Notice our ASP .NET tag is very similar to the above examples. We're again using a basic DropDownList control, but this time we've actually left out the DataTextField and DataValueField tags.
This is because we will actually programmatically set these in our code, in an automated fashion. The UI developer can simply ignore the ID and Value tags and continue on creating hundreds of DropDownList controls with ease. The only additional property we add to the control is a custom property called AssemblyName. This property points to the fully qualified name of the light-weight business object, followed by the assembly name. That is:

    <%-- the type and assembly names are illustrative --%>
    <asp:DropDownList ID="lstMonsters" runat="server"
        AssemblyName="BusinessObjects.MonsterType, BusinessObjects" />

If your web application will contain many data source controls, you can see how this tag can greatly increase your ability to swap in and out controls, change bindings, and forget about memorizing ID and Value field names in the database. But, we'll need some code to make this work.

The Magic Starts with an Interface

To put together our enterprise-style LINQ data binding with light-weight business objects, we're going to need a basic interface that our business objects, which will bind to a data source control, will implement.

    public interface IDataBindable
    {
        List<NameValueType> ToList();
    }

    [Serializable]
    public class NameValueType
    {
        public string Name { get; set; }
        public string Value { get; set; }

        public NameValueType() { }

        public NameValueType(string name, string value)
        {
            Name = name;
            Value = value;
        }
    }

What we've done is defined the interface to contain a single function ToList() which returns a list of NameValueTypes. The NameValueType class itself is just a basic holder for an ID and Text field to display in the DropDownList. By having our business objects implement the ToList() function, we know that no matter what kind of business object we may be dealing with (any table in LINQ), we can always call ToList() to retrieve its list of data. We'll also know that the ID and Text fields will always be Name and Value, as defined in the NameValueType.
With this information, we can move on to using reflection to create a generic way of binding our business objects to DropDownList controls, or really, any ListControl (the base class for DropDownList) for that matter.

Mixing in Our Light-Weight Business Object

With our interface defined, we can now create a light-weight business object. This class will be similar to LINQ to SQL's generated table class, but it will leave out the heavier connections and data manipulation functionality. We'll gain an object that we can easily re-use throughout the web application and even share with other applications through web services.

[Serializable]
public class MonsterType : IDataBindable
{
    public int MonsterId;
    public string Name;
    public int HP;
    public string Description;

    public MonsterType()
    {
    }

    public MonsterType(string name, int hp, string description)
    {
        Name = name;
        HP = hp;
        Description = description;
    }

    public MonsterType(int monsterId, string name, int hp, string description)
        : this(name, hp, description)
    {
        MonsterId = monsterId;
    }

    #region IDataBindable Members

    public List<NameValueType> ToList()
    {
        List<NameValueType> resultList = new List<NameValueType>();

        using (DataContext context = new DataContext())
        {
            List<Monster> itemList = context.Monsters.ToList();

            foreach (Monster item in itemList)
            {
                resultList.Add(new NameValueType(item.MonsterId.ToString(), item.Name));
            }
        }

        return resultList;
    }

    #endregion
}

Notice we simply define the database field names as properties of the business object. We have a few constructors for convenience. The important part of this object is where it implements IDataBindable and provides a body for the ToList() method. Notice in this method, we instantiate LINQ's DataContext and call the LINQ to SQL table's ToList() method to return a list of data items from the database. The list returned is in LINQ's generated business object format, so we convert the results into NameValueType objects.
This is how we can convert LINQ's own generated class into a simple ID/Value type that our DropDownList can bind to.

A Touch of Reflection Goes a Long Way

The next ingredient is a function in our business logic tier which will take the AssemblyName property from our DropDownList control and call the ToList() method of the light-weight business object defined in the control's tag.

public class CommonManager
{
    public static void PopulateListControl(ListControl lstControl)
    {
        string assemblyName = lstControl.Attributes["AssemblyName"];

        // Find the class
        Type obj = Type.GetType(assemblyName);

        // Get its constructor
        ConstructorInfo constructor = obj.GetConstructor(new Type[] { });

        // Invoke its constructor, which returns an instance.
        IDataBindable createdObject = (IDataBindable)constructor.Invoke(null);

        // Call the interface's ToList() method and bind to the control's data source.
        lstControl.DataValueField = "Name";
        lstControl.DataTextField = "Value";
        lstControl.DataSource = createdObject.ToList();
        lstControl.DataBind();
    }
}

The magic here lies in reflection. We obtain the assembly name from the custom attribute tag that we've added to the control in the ASPX file. With this string, we can create the Type and invoke its constructor to get a physical object. We already know that the object implements IDataBindable (because you coded your light-weight business object that way), so we can cast it and call its ToList() function. We don't have to worry about the details of LINQ to SQL or how the data is actually fetched. All we care about is that we receive a list of items with a Name property and a Value property (kindly provided by the NameValueType object). From there, it's a simple matter to set the field names for the control and bind. We can glue this all together in the Page_Load function by calling the PopulateListControl function on our control.

protected void Page_Load(object sender, EventArgs e)
{
    if (!IsPostBack)
    {
        // Binding with a light-weight business object.
        CommonManager.PopulateListControl(lstMonsters);
    }
}

What's Good About the Business Object Method?

By using custom light-weight business objects, we've created a basic architecture for promoting code re-use and sharing the objects. We can easily compose, enhance, and extend new objects based upon existing ones. We can also implement a variety of design patterns using the objects. We've also hidden away the details of LINQ to SQL in the data tier, so that our user interface and business logic layers never know the details of where the data comes from, nor do they need to reference LINQ. They simply rely on the interface to return the data in the expected format. This can make the creation of many DropDownList controls very easy in the user interface (provided a little elbow grease is used while creating the business objects). The user interface can also ignore the details of determining which ID and Value columns to read from in the database, since this is already handled by our own business objects.

What's Bad About the Business Object Method?

The obvious point is that it requires more code. In our simple method, we only needed a single line added to our ASPX file. Now we need to create business objects for each different DropDownList data table, which obviously takes more time. Another point to take into consideration is performance and your architecture. Since we're using reflection, there is a performance cost, although it is minimal (and it can be reduced even further by caching the list data).

Conclusion

We've just seen three different ways of binding data to a DropDownList control, ranging from simple, to direct, to advanced. The simple method was easy to use and implement, but lacked code re-use and extensibility. The direct method brought us a little closer to extensibility, but lacked true composition; it also tended to violate our data tiers.
The advanced method divided our tiers evenly, keeping the user interface completely separate from LINQ's data, but required a little more code to get there. While each method of binding LINQ to SQL tables to a data source control carries its own pros and cons, your web application's architecture will ultimately define which one is right for you. Take a look at your own design and see how you can best suit your .NET application's needs.

About the Author

This article was written by Kory Becker, founder and chief developer of Primary Objects, a software and web application development company. You can contact Primary Objects regarding your software development needs at
http://www.primaryobjects.com/CMS/Article95.aspx
...one of the most highly regarded and expertly designed C++ library projects in the world. — Herb Sutter and Andrei Alexandrescu, C++ Coding Standards This section describes how to use xpressive to accomplish text manipulation and parsing tasks. If you are looking for detailed information regarding specific components in xpressive, check the Reference section. xpressive is a regular expression template library. Regular expressions (regexes) can be written as strings that are parsed dynamically at runtime (dynamic regexes), or as expression templates[7] that are parsed at compile-time (static regexes). Dynamic regexes have the advantage that they can be accepted from the user as input at runtime or read from an initialization file. Static regexes have several advantages. Since they are C++ expressions instead of strings, they can be syntax-checked at compile-time. Also, they can naturally refer to code and data elsewhere in your program, giving you the ability to call back into your code from within a regex match. Finally, since they are statically bound, the compiler can generate faster code for static regexes. xpressive's dual nature is unique and powerful. Static xpressive is a bit like the Spirit Parser Framework. Like Spirit, you can build grammars with static regexes using expression templates. (Unlike Spirit, xpressive does exhaustive backtracking, trying every possibility to find a match for your pattern.) Dynamic xpressive is a bit like Boost.Regex. In fact, xpressive's interface should be familiar to anyone who has used Boost.Regex. xpressive's innovation comes from allowing you to mix and match static and dynamic regexes in the same program, and even in the same expression! You can embed a dynamic regex in a static regex, or vice versa, and the embedded regex will participate fully in the search, back-tracking as needed to make the match succeed. Enough theory. 
Let's have a look at Hello World, xpressive style. The first thing you'll notice about the code is that all the types in xpressive live in the boost::xpressive namespace. Next, you'll notice the type of the regular expression object is sregex. If you are familiar with Boost.Regex, this is different than what you are used to. The "s" in "sregex" stands for "string", indicating that this regex can be used to find patterns in std::string objects. I'll discuss this difference and its implications in detail later. Notice how the regex object is initialized:

sregex rex = sregex::compile( "(\\w+) (\\w+)!" );

To create a regular expression object from a string, you must call a factory method such as basic_regex<>::compile(). This is another area in which xpressive differs from other object-oriented regular expression libraries. Other libraries encourage you to think of a regular expression as a kind of string on steroids. In xpressive, regular expressions are not strings; they are little programs in a domain-specific language. Strings are only one representation of that language. Another representation is an expression template. For example, the above line of code is equivalent to the following:

sregex rex = (s1= +_w) >> ' ' >> (s2= +_w) >> '!';

This describes the same regular expression, except it uses the domain-specific embedded language defined by static xpressive. As you can see, static regexes have a syntax that is noticeably different than standard Perl syntax. That is because we are constrained by C++'s syntax. The biggest difference is the use of >> to mean "followed by". For instance, in Perl you can just put sub-expressions next to each other:

abc

But in C++, there must be an operator separating sub-expressions:

a >> b >> c

In Perl, parentheses () have special meaning. They group, but as a side-effect they also create back-references like $1 and $2. In C++, there is no way to overload parentheses to give them side-effects.
To get the same effect, we use the special s1, s2, etc. tokens. Assign to one to create a back-reference (known as a sub-match in xpressive). You'll also notice that the one-or-more repetition operator + has moved from postfix to prefix position. That's because C++ doesn't have a postfix + operator. So:

"\\w+"

is the same as:

+_w

We'll cover all the other differences later.

There are two ways to get xpressive. The first and simplest is to download the latest version of Boost: go to the Boost website and follow the “Download” link. The second way is by directly accessing the Boost Subversion repository; follow the instructions on the Boost website for anonymous Subversion access. The version in Boost Subversion is unstable.

Xpressive is a header-only template library, which means you don't need to alter your build scripts or link to any separate lib file to use it. All you need to do is #include <boost/xpressive/xpressive.hpp>. If you are only using static regexes, you can improve compile times by only including xpressive_static.hpp. Likewise, you can include xpressive_dynamic.hpp if you only plan on using dynamic regexes. If you would also like to use semantic actions or custom assertions with your static regexes, you will need to additionally include regex_actions.hpp. Xpressive requires Boost version 1.34.1 or higher. Currently, Boost.Xpressive is known to work on the following compilers: Check the latest test results at Boost's Regression Results Page.

You don't need to know much to start being productive with xpressive. Let's begin with the nickel tour of the types and algorithms xpressive provides. Now that you know a bit about the tools xpressive provides, you can pick the right tool for you by answering the following two questions: Most of the classes in xpressive are templates that are parameterized on the iterator type. xpressive defines some common typedefs to make the job of choosing the right types easier.
You can use the table below to find the right types based on the type of your iterator. You should notice the systematic naming convention. Many of these types are used together, so the naming convention helps you to use them consistently. For instance, if you have a sregex, you should also be using a smatch. If you are not using one of those four iterator types, then you can use the templates directly and specify your iterator type.

Do you want to find a pattern once? Many times? Search and replace? xpressive has tools for all that and more. Below is a quick reference. These algorithms and classes are described in excruciating detail in the Reference section.

When using xpressive, the first thing you'll do is create a basic_regex<> object. This section goes over the nuts and bolts of building a regular expression in the two dialects xpressive supports: static and dynamic.

The feature that really sets xpressive apart from other C/C++ regular expression libraries is the ability to author a regular expression using C++ expressions. xpressive achieves this through operator overloading, using a technique called expression templates to embed a mini-language dedicated to pattern matching within C++. These "static regexes" have many advantages over their string-based brethren, as enumerated in the introduction. Since we compose static regexes using C++ expressions, we are constrained by the rules for legal C++ expressions. Unfortunately, that means that "classic" regular expression syntax cannot always be mapped cleanly into C++. Rather, we map the regex constructs, picking new syntax that is legal C++.

You create a static regex by assigning one to an object of type basic_regex<>. For instance, the following defines a regex that can be used to find patterns in objects of type std::string:

sregex sre = '$' >> +_d >> '.' >> _d >> _d;

Assignment works similarly. In static regexes, character and string literals match themselves. For instance, in the regex above, '$' and '.'
match the characters '$' and '.' respectively. Don't be confused by the fact that $ and . are meta-characters in Perl. In xpressive, literals always represent themselves.

When using literals in static regexes, you must take care that at least one operand is not a literal. For instance, the following are not valid regexes:

sregex re1 = 'a' >> 'b'; // ERROR!
sregex re2 = +'a';       // ERROR!

The two operands to the binary >> operator are both literals, and the operand of the unary + operator is also a literal, so these statements will call the native C++ binary right-shift and unary plus operators, respectively. That's not what we want. To get operator overloading to kick in, at least one operand must be a user-defined type. We can use xpressive's as_xpr() helper function to "taint" an expression with regex-ness, forcing operator overloading to find the correct operators. The two regexes above should be written as:

sregex re1 = as_xpr('a') >> 'b'; // OK
sregex re2 = +as_xpr('a');       // OK

As you've probably already noticed, sub-expressions in static regexes must be separated by the sequencing operator, >>. You can read this operator as "followed by".

// Match an 'a' followed by a digit
sregex re = 'a' >> _d;

Alternation works just as it does in Perl with the | operator. You can read this operator as "or". For example:

// match a digit character or a word character one or more times
sregex re = +( _d | _w );

In Perl, parentheses () have special meaning. They group, but as a side-effect they also create back-references like $1 and $2. In C++, parentheses only group -- there is no way to give them side-effects. To get the same effect, we use the special s1, s2, etc. tokens. Assigning to one creates a back-reference. You can then use the back-reference later in your expression, like using \1 and \2 in Perl.
For example, consider the following regex, which finds matching HTML tags:

"<(\\w+)>.*?</\\1>"

In static xpressive, this would be:

'<' >> (s1= +_w) >> '>' >> -*_ >> "</" >> s1 >> '>'

Notice how you capture a back-reference by assigning to s1, and then you use s1 later in the pattern to find the matching end tag.

Perl lets you make part of your regular expression case-insensitive by using the (?i:) pattern modifier. xpressive also has a case-insensitivity pattern modifier, called icase. You can use it as follows:

sregex re = "this" >> icase( "that" );

In this regular expression, "this" will be matched exactly, but "that" will be matched irrespective of case.

Case-insensitive regular expressions raise the issue of internationalization: how should case-insensitive character comparisons be evaluated? Also, many character classes are locale-specific. Which characters are matched by digit and which are matched by alpha? The answer depends on the std::locale object the regular expression object is using. By default, all regular expression objects use the global locale. You can override the default by using the imbue() pattern modifier, as follows:

std::locale my_locale = /* initialize a std::locale object */;
sregex re = imbue( my_locale )( +alpha >> +digit );

This regular expression will evaluate alpha and digit according to my_locale. See the section on Localization and Regex Traits for more information about how to customize the behavior of your regexes. The table below lists the familiar regex constructs and their equivalents in static xpressive.

Static regexes are dandy, but sometimes you need something a bit more ... dynamic. Imagine you are developing a text editor with a regex search/replace feature. You need to accept a regular expression from the end user as input at run-time. There should be a way to parse a string into a regular expression. That's what xpressive's dynamic regexes are for.
They are built from the same core components as their static counterparts, but they are late-bound so you can specify them at run-time.

There are two ways to create a dynamic regex: with the basic_regex<>::compile() function or with the regex_compiler<> class template. Use basic_regex<>::compile() if you want the default locale. Use regex_compiler<> if you need to specify a different locale. In the section on regex grammars, we'll see another use for regex_compiler<>.

Here is an example of using basic_regex<>::compile():

sregex re = sregex::compile( "this|that", regex_constants::icase );

Here is the same example using regex_compiler<>:

sregex_compiler compiler;
sregex re = compiler.compile( "this|that", regex_constants::icase );

basic_regex<>::compile() is implemented in terms of regex_compiler<>.

Since the dynamic syntax is not constrained by the rules for valid C++ expressions, we are free to use familiar syntax for dynamic regexes. For this reason, the syntax used by xpressive for dynamic regexes follows the lead set by John Maddock's proposal to add regular expressions to the Standard Library. It is essentially the syntax standardized by ECMAScript, with minor changes in support of internationalization. Since the syntax is documented exhaustively elsewhere, I will simply refer you to the existing standards, rather than duplicate the specification here.

As with static regexes, dynamic regexes support internationalization by allowing you to specify a different std::locale. To do this, you must use regex_compiler<>. The regex_compiler<> class has an imbue() function. After you have imbued a regex_compiler<> object with a custom std::locale, all regex objects compiled by that regex_compiler<> will use that locale.
For example:

std::locale my_locale = /* initialize your locale object here */;
sregex_compiler compiler;
compiler.imbue( my_locale );
sregex re = compiler.compile( "\\w+|\\d+" );

This regex will use my_locale when evaluating the intrinsic character sets "\\w" and "\\d".

Once you have created a regex object, you can use the regex_match() and regex_search() algorithms to find patterns in strings. This page covers the basics of regex matching and searching. In all cases, if you are familiar with how regex_search() and regex_match() in the Boost.Regex library work, xpressive's versions work the same way.

The regex_match() algorithm checks to see if a regex matches a given input. The input can be a bidirectional range such as std::string, a C-style null-terminated string or a pair of iterators. In all cases, the type of the iterator used to traverse the input sequence must match the iterator type used to declare the regex object. (You can use the table in the Quick Start to find the correct regex type for your iterator.)

cregex cre = +_w; // this regex can match C-style strings
sregex sre = +_w; // this regex can match std::strings

if( regex_match( "hello", cre ) )              // OK
{ /*...*/ }

if( regex_match( std::string("hello"), sre ) ) // OK
{ /*...*/ }

if( regex_match( "hello", sre ) )              // ERROR! iterator mis-match!
{ /*...*/ }

The regex_match() algorithm optionally accepts a match_results<> struct as an out parameter. If given, regex_match() fills in the match_results<> struct with information about which parts of the regex matched which parts of the input.

cmatch what;
cregex cre = +(s1= _w);

// store the results of the regex_match in "what"
if( regex_match( "hello", what, cre ) )
{
    std::cout << what[1] << '\n'; // prints "o"
}

The regex_match() algorithm also optionally accepts a match_flag_type bitmask. With match_flag_type, you can control certain aspects of how the match is evaluated.
See the match_flag_type reference for a complete list of the flags and their meanings.

std::string str("hello");
sregex sre = bol >> +_w;

// match_not_bol means that "bol" should not match at [begin,begin)
if( regex_match( str.begin(), str.end(), sre, regex_constants::match_not_bol ) )
{
    // should never get here!!!
}

See the regex_match() reference for a complete example program that shows how to use regex_match(), and for a complete list of the available overloads.

Use regex_search() when you want to know if an input sequence contains a sub-sequence that a regex matches. regex_search() will try to match the regex at the beginning of the input sequence and scan forward in the sequence until it either finds a match or exhausts the sequence. In all other regards, regex_search() behaves like regex_match() (see above). In particular, it can operate on a bidirectional range such as std::string, C-style null-terminated strings or iterator ranges. The same care must be taken to ensure that the iterator type of your regex matches the iterator type of your input sequence. As with regex_match(), you can optionally provide a match_results<> struct to receive the results of the search, and a match_flag_type bitmask to control how the match is evaluated. See the regex_search() reference for a complete example program that shows how to use regex_search(), and for a complete list of the available overloads.

Sometimes, it is not enough to know simply whether a regex_match() or regex_search() was successful or not. If you pass an object of type match_results<> to regex_match() or regex_search(), then after the algorithm has completed successfully the match_results<> will contain extra information about which parts of the regex matched which parts of the sequence. In Perl, these sub-sequences are called back-references, and they are stored in the variables $1, $2, etc.
In xpressive, they are objects of type sub_match<>, and they are stored in the match_results<> structure, which acts as a vector of sub_match<> objects.

So, you've passed a match_results<> object to a regex algorithm, and the algorithm has succeeded. Now you want to examine the results. Most of what you'll be doing with the match_results<> object is indexing into it to access its internally stored sub_match<> objects, but there are a few other things you can do with a match_results<> object besides. The table below shows how to access the information stored in a match_results<> object named what. There is more you can do with the match_results<> object, but that will be covered when we talk about Grammars and Nested Matches.

When you index into a match_results<> object, you get back a sub_match<> object. A sub_match<> is basically a pair of iterators. It is defined like this:

template< class BidirectionalIterator >
struct sub_match
  : std::pair< BidirectionalIterator, BidirectionalIterator >
{
    bool matched;
    // ...
};

Since it inherits publicly from std::pair<>, sub_match<> has first and second data members of type BidirectionalIterator. These are the beginning and end of the sub-sequence this sub_match<> represents. sub_match<> also has a Boolean matched data member, which is true if this sub_match<> participated in the full match. The following table shows how you might access the information stored in a sub_match<> object called sub.

Results are stored as iterators into the input sequence. Anything which invalidates the input sequence will invalidate the match results. For instance, if you match a std::string object, the results are only valid until your next call to a non-const member function of that std::string object. After that, the results held by the match_results<> object are invalid. Don't use them!

Regular expressions are not only good for searching text; they're good at manipulating it.
And one of the most common text manipulation tasks is search-and-replace. xpressive provides the regex_replace() algorithm for searching and replacing.

Performing search-and-replace using regex_replace() is simple. All you need is an input sequence, a regex object, and a format string or a formatter object. There are several versions of the regex_replace() algorithm. Some accept the input sequence as a bidirectional container such as std::string and return the result in a new container of the same type. Others accept the input as a null-terminated string and return a std::string. Still others accept the input sequence as a pair of iterators and write the result into an output iterator. The substitution may be specified as a string with format sequences or as a formatter object. Below are some simple examples of using string-based substitutions.

std::string input("This is his face");
sregex re = as_xpr("his"); // find all occurrences of "his" ...
std::string format("her"); // ... and replace them with "her"

// use the version of regex_replace() that operates on strings
std::string output = regex_replace( input, re, format );
std::cout << output << '\n';

// use the version of regex_replace() that operates on iterators
std::ostream_iterator< char > out_iter( std::cout );
regex_replace( out_iter, input.begin(), input.end(), re, format );

The above program prints out the following:

Ther is her face
Ther is her face

Notice that all the occurrences of "his" have been replaced with "her". See the regex_replace() reference for a complete example program that shows how to use regex_replace(), and for a complete list of the available overloads.

The regex_replace() algorithm takes an optional bitmask parameter to control the formatting. The possible values of the bitmask include format_literal, format_perl, format_sed, and format_all, described below. These flags live in the xpressive::regex_constants namespace.
If the substitution parameter is a function object instead of a string, the flags format_literal, format_perl, format_sed, and format_all are ignored.

When you haven't specified a substitution string dialect with one of the format flags above, you get the dialect defined by ECMA-262, the standard for ECMAScript. The table below shows the escape sequences recognized in ECMA-262 mode. Any other sequence beginning with '$' simply represents itself. For example, if the format string were "$a" then "$a" would be inserted into the output sequence.

When specifying the format_sed flag to regex_replace(), the following escape sequences are recognized:

When specifying the format_perl flag to regex_replace(), the following escape sequences are recognized:

When specifying the format_all flag to regex_replace(), the escape sequences recognized are the same as those above for format_perl. In addition, conditional expressions of the following form are recognized:

?Ntrue-expression:false-expression

where N is a decimal digit representing a sub-match. If the corresponding sub-match participated in the full match, then the substitution is true-expression. Otherwise, it is false-expression. In this mode, you can use parens () for grouping. If you want a literal paren, you must escape it as \(.

Format strings are not always expressive enough for all your text substitution needs. Consider the simple example of wanting to map input strings to output strings, as you may want to do with environment variables. Rather than a format string, for this you would use a formatter object. Consider the following code, which finds embedded environment variables of the form "$(XYZ)" and computes the substitution string by looking up the environment variable in a map.
#include <map>
#include <string>
#include <iostream>
#include <boost/xpressive/xpressive.hpp>
using namespace boost;
using namespace xpressive;

std::map<std::string, std::string> env;

std::string const &format_fun(smatch const &what)
{
    return env[what[1].str()];
}

int main()
{
    env["X"] = "this";
    env["Y"] = "that";
    std::string input("\"$(X)\" has the value \"$(Y)\"");

    // replace strings like "$(XYZ)" with the result of env["XYZ"]
    sregex envar = "$(" >> (s1 = +_w) >> ')';

    std::string output = regex_replace(input, envar, format_fun);
    std::cout << output << std::endl;
}

In this case, we use a function, format_fun(), to compute the substitution string on the fly. It accepts a match_results<> object which contains the results of the current match. format_fun() uses the first sub-match as a key into the global env map. The above code displays:

"this" has the value "that"

The formatter need not be an ordinary function. It may be an object of class type. And rather than return a string, it may accept an output iterator into which it writes the substitution. Consider the following, which is functionally equivalent to the above.

#include <map>
#include <string>
#include <iostream>
#include <boost/xpressive/xpressive.hpp>
using namespace boost;
using namespace xpressive;

struct formatter
{
    typedef std::map<std::string, std::string> env_map;
    env_map env;

    template<typename Out>
    Out operator()(smatch const &what, Out out) const
    {
        env_map::const_iterator where = env.find(what[1]);
        if(where != env.end())
        {
            std::string const &sub = where->second;
            out = std::copy(sub.begin(), sub.end(), out);
        }
        return out;
    }
};

int main()
{
    formatter fmt;
    fmt.env["X"] = "this";
    fmt.env["Y"] = "that";
    std::string input("\"$(X)\" has the value \"$(Y)\"");

    sregex envar = "$(" >> (s1 = +_w) >> ')';

    std::string output = regex_replace(input, envar, fmt);
    std::cout << output << std::endl;
}

The formatter must be a callable object -- a function or a function object -- that has one of three possible signatures, detailed in the table below.
For the table, fmt is a function pointer or function object, what is a match_results<> object, out is an OutputIterator, and flags is a value of regex_constants::match_flag_type.

In addition to format strings and formatter objects, regex_replace() also accepts formatter expressions. A formatter expression is a lambda expression that generates a string. It uses the same syntax as that for Semantic Actions, which are covered later. The above example, which uses regex_replace() to substitute strings for environment variables, is repeated here using a formatter expression.

#include <map>
#include <string>
#include <iostream>
#include <boost/xpressive/xpressive.hpp>
#include <boost/xpressive/regex_actions.hpp>
using namespace boost::xpressive;

int main()
{
    std::map<std::string, std::string> env;
    env["X"] = "this";
    env["Y"] = "that";
    std::string input("\"$(X)\" has the value \"$(Y)\"");

    sregex envar = "$(" >> (s1 = +_w) >> ')';

    std::string output = regex_replace(input, envar, ref(env)[s1]);
    std::cout << output << std::endl;
}

In the above, the formatter expression is ref(env)[s1]. This means to use the value of the first sub-match, s1, as a key into the env map. The purpose of xpressive::ref() here is to make the reference to the env local variable lazy so that the index operation is deferred until we know what to replace s1 with.

regex_token_iterator<> is the Ginsu knife of the text manipulation world. It slices! It dices! This section describes how to use the highly configurable regex_token_iterator<> to chop up input sequences.

You initialize a regex_token_iterator<> with an input sequence, a regex, and some optional configuration parameters. The regex_token_iterator<> will use regex_search() to find the first place in the sequence that the regex matches. When dereferenced, the regex_token_iterator<> returns a token in the form of a std::basic_string<>. Which string it returns depends on the configuration parameters.
By default it returns a string corresponding to the full match, but it could also return a string corresponding to a particular marked sub-expression, or even the part of the sequence that didn't match. When you increment the regex_token_iterator<>, it will move to the next token. Which token is next depends on the configuration parameters. It could simply be a different marked sub-expression in the current match, or it could be part or all of the next match. Or it could be the part that didn't match.

As you can see, regex_token_iterator<> can do a lot. That makes it hard to describe, but some examples should make it clear.

This example uses regex_token_iterator<> to chop a sequence into a series of tokens consisting of words.

```cpp
std::string input("This is his face");
sregex re = +_w;                      // find a word

// iterate over all the words in the input
sregex_token_iterator begin( input.begin(), input.end(), re ), end;

// write all the words to std::cout
std::ostream_iterator< std::string > out_iter( std::cout, "\n" );
std::copy( begin, end, out_iter );
```

This program displays the following:

This
is
his
face

This example also uses regex_token_iterator<> to chop a sequence into a series of tokens consisting of words, but it uses the regex as a delimiter. When we pass -1 as the last parameter to the regex_token_iterator<> constructor, it instructs the token iterator to consider as tokens those parts of the input that didn't match the regex.

```cpp
std::string input("This is his face");
sregex re = +_s;                      // find white space

// iterate over all non-white space in the input. Note the -1 below:
sregex_token_iterator begin( input.begin(), input.end(), re, -1 ), end;

// write all the words to std::cout
std::ostream_iterator< std::string > out_iter( std::cout, "\n" );
std::copy( begin, end, out_iter );
```

This program displays the following:

This
is
his
face

This example also uses regex_token_iterator<> to chop a sequence containing a bunch of dates into a series of tokens consisting of just the years.
When we pass a positive integer N as the last parameter to the regex_token_iterator<> constructor, it instructs the token iterator to consider as tokens only the N-th marked sub-expression of each match.

```cpp
std::string input("01/02/2003 blahblah 04/23/1999 blahblah 11/13/1981");
sregex re = sregex::compile("(\\d{2})/(\\d{2})/(\\d{4})"); // find a date

// iterate over all the years in the input. Note the 3 below,
// corresponding to the 3rd sub-expression:
sregex_token_iterator begin( input.begin(), input.end(), re, 3 ), end;

// write all the years to std::cout
std::ostream_iterator< std::string > out_iter( std::cout, "\n" );
std::copy( begin, end, out_iter );
```

This program displays the following:

2003
1999
1981

This example is like the previous one, except that instead of tokenizing just the years, this program turns the days, months and years into tokens. When we pass an array of integers {I,J,...} as the last parameter to the regex_token_iterator<> constructor, it instructs the token iterator to consider as tokens the I-th, J-th, etc., marked sub-expressions of each match.

```cpp
std::string input("01/02/2003 blahblah 04/23/1999 blahblah 11/13/1981");
sregex re = sregex::compile("(\\d{2})/(\\d{2})/(\\d{4})"); // find a date

// iterate over the days, months and years in the input
int const sub_matches[] = { 2, 1, 3 }; // day, month, year
sregex_token_iterator begin( input.begin(), input.end(), re, sub_matches ), end;

// write the tokens to std::cout
std::ostream_iterator< std::string > out_iter( std::cout, "\n" );
std::copy( begin, end, out_iter );
```

This program displays the following:

02
01
2003
23
04
1999
13
11
1981

The sub_matches array instructs the regex_token_iterator<> to first take the value of the 2nd sub-match, then the 1st sub-match, and finally the 3rd. Incrementing the iterator again instructs it to use regex_search() again to find the next match.
At that point, the process repeats -- the token iterator takes the value of the 2nd sub-match, then the 1st, et cetera.

For complicated regular expressions, dealing with numbered captures can be a pain. Counting left parentheses to figure out which capture to reference is no fun. Less fun is the fact that merely editing a regular expression could cause a capture to be assigned a new number, invalidating code that refers back to it by the old number.

Other regular expression engines solve this problem with a feature called named captures. This feature allows you to assign a name to a capture, and to refer back to the capture by name rather than by number. Xpressive also supports named captures, both in dynamic and in static regexes.

For dynamic regular expressions, xpressive follows the lead of other popular regex engines with the syntax of named captures. You can create a named capture with "(?P<xxx>...)" and refer back to that capture with "(?P=xxx)". Here, for instance, is a regular expression that creates a named capture and refers back to it:

```cpp
// Create a named capture called "char" that matches a single
// character and refer back to that capture by name.
sregex rx = sregex::compile("(?P<char>.)(?P=char)");
```

The effect of the above regular expression is to find the first doubled character.

Once you have executed a match or search operation using a regex with named captures, you can access the named capture through the match_results<> object using the capture's name.

```cpp
std::string str("tweet");
sregex rx = sregex::compile("(?P<char>.)(?P=char)");
smatch what;
if(regex_search(str, what, rx))
{
    std::cout << "char = " << what["char"] << std::endl;
}
```

The above code displays:

char = e

You can also refer back to a named capture from within a substitution string. The syntax for that is "\\g<xxx>". Below is some code that demonstrates how to use named captures when doing string substitution.
```cpp
std::string str("tweet");
sregex rx = sregex::compile("(?P<char>.)(?P=char)");
str = regex_replace(str, rx, "**\\g<char>**", regex_constants::format_perl);
std::cout << str << std::endl;
```

Notice that you have to specify format_perl when using named captures. Only the perl syntax recognizes the "\\g<xxx>" syntax. The above code displays:

tw**e**t

If you're using static regular expressions, creating and using named captures is even easier. You can use the mark_tag type to create a variable that you can use like s1, s2 and friends, but with a name that is more meaningful. Below is how the above example would look using static regexes:

```cpp
mark_tag char_(1);                 // char_ is now a synonym for s1
sregex rx = (char_= _) >> char_;
```

After a match operation, you can use the mark_tag to index into the match_results<> to access the named capture:

```cpp
std::string str("tweet");
mark_tag char_(1);
sregex rx = (char_= _) >> char_;
smatch what;
if(regex_search(str, what, rx))
{
    std::cout << what[char_] << std::endl;
}
```

The above code displays:

e

When doing string substitutions with regex_replace(), you can use named captures to create format expressions as below:

```cpp
std::string str("tweet");
mark_tag char_(1);
sregex rx = (char_= _) >> char_;
str = regex_replace(str, rx, "**" + char_ + "**");
std::cout << str << std::endl;
```

The above code displays:

tw**e**t

One of the key benefits of representing regexes as C++ expressions is the ability to easily refer to other C++ code and data from within the regex. This enables programming idioms that are not possible with other regular expression libraries. Of particular note is the ability for one regex to refer to another regex, allowing you to build grammars out of regular expressions. This section describes how to embed one regex in another by value and by reference, how regex objects behave when they refer to other regexes, and how to access the tree of results after a successful parse.

The basic_regex<> object has value semantics.
When a regex object appears on the right-hand side in the definition of another regex, it is as if the regex were embedded by value; that is, a copy of the nested regex is stored by the enclosing regex. The inner regex is invoked by the outer regex during pattern matching. The inner regex participates fully in the match, back-tracking as needed to make the match succeed.

Consider a text editor that has a regex-find feature with a whole-word option. You can implement this with xpressive as follows:

```cpp
find_dialog dlg;
if( dialog_ok == dlg.do_modal() )
{
    std::string pattern = dlg.get_text();          // the pattern the user entered
    bool whole_word = dlg.whole_word.is_checked(); // did the user select the whole-word option?

    sregex re = sregex::compile( pattern );        // try to compile the pattern

    if( whole_word )
    {
        // wrap the regex in begin-word / end-word assertions
        re = bow >> re >> eow;
    }

    // ... use re ...
}
```

Look closely at this line:

```cpp
// wrap the regex in begin-word / end-word assertions
re = bow >> re >> eow;
```

This line creates a new regex that embeds the old regex by value. Then, the new regex is assigned back to the original regex. Since a copy of the old regex was made on the right-hand side, this works as you might expect: the new regex has the behavior of the old regex wrapped in begin- and end-word assertions.

If you want to be able to build recursive regular expressions and context-free grammars, embedding a regex by value is not enough. You need to be able to make your regular expressions self-referential. Most regular expression engines don't give you that power, but xpressive does. Consider the following code, which uses the by_ref() helper to define a recursive regular expression that matches balanced, nested parentheses:

```cpp
sregex parentheses;
parentheses                           // A balanced set of parentheses ...
    = '('                             // is an opening parenthesis ...
        >>                            // followed by ...
         *(                           // zero or more ...
            keep( +~(set='(',')') )   // of a bunch of things that are not parentheses ...
          |                           // or ...
            by_ref(parentheses)       // a balanced set of parentheses
          )                           //   (ooh, recursion!) ...
        >>                            // followed by ...
      ')'                             // a closing parenthesis
    ;
```

Matching balanced, nested tags is an important text processing task, and it is one that "classic" regular expressions cannot do. The by_ref() helper makes it possible. It allows one regex object to be embedded in another by reference. Since the right-hand side holds parentheses by reference, assigning the right-hand side back to parentheses creates a cycle, which will execute recursively.

Once we allow self-reference in our regular expressions, the genie is out of the bottle and all manner of fun things are possible. In particular, we can now build grammars out of regular expressions. Let's have a look at the text-book grammar example: the humble calculator.

```cpp
sregex group, factor, term, expression;

group      = '(' >> by_ref(expression) >> ')';
factor     = +_d | group;
term       = factor >> *(('*' >> factor) | ('/' >> factor));
expression = term >> *(('+' >> term) | ('-' >> term));
```

The regex expression defined above does something rather remarkable for a regular expression: it matches mathematical expressions. For example, if the input string were "foo 9*(10+3) bar", this pattern would match "9*(10+3)". It only matches well-formed mathematical expressions, where the parentheses are balanced and the infix operators have two arguments each. Don't try this with just any regular expression engine!

Let's take a closer look at this regular expression grammar. Notice that it is cyclic: expression is implemented in terms of term, which is implemented in terms of factor, which is implemented in terms of group, which is implemented in terms of expression, closing the loop. In general, the way to define a cyclic grammar is to forward-declare the regex objects and embed by reference those regular expressions that have not yet been initialized.
In the above grammar, there is only one place where we need to reference a regex object that has not yet been initialized: the definition of group. In that place, we use by_ref() to embed expression by reference. In all other places, it is sufficient to embed the other regex objects by value, since they have already been initialized and their values will not change.

Using regex_compiler<>, you can also build grammars out of dynamic regular expressions. You do that by creating named regexes, and referring to other regexes by name. Each regex_compiler<> instance keeps a mapping from names to regexes that have been created with it.

You can create a named dynamic regex by prefacing your regex with "(?$name=)", where name is the name of the regex. You can refer to a named regex from another regex with "(?$name)". The named regex does not need to exist yet at the time it is referenced in another regex, but it must exist by the time you use the regex. Below is a code fragment that uses dynamic regex grammars to implement the calculator example from above.

```cpp
using namespace boost::xpressive;
using namespace regex_constants;

sregex expr;
{
    sregex_compiler compiler;
    syntax_option_type x = ignore_white_space;

    compiler.compile("(? $group = ) \\( (? $expr ) \\) ", x);
    compiler.compile("(? $factor = ) \\d+ | (? $group ) ", x);
    compiler.compile("(? $term = ) (? $factor )"
                     " ( \\* (? $factor ) | / (? $factor ) )* ", x);
    expr = compiler.compile("(? $expr = ) (? $term )"
                            " ( \\+ (? $term ) | - (? $term ) )* ", x);
}

std::string str("foo 9*(10+3) bar");
smatch what;
if(regex_search(str, what, expr))
{
    // This prints "9*(10+3)":
    std::cout << what[0] << std::endl;
}
```

As with static regex grammars, nested regex invocations create nested match results (see Nested Results below). The result is a complete parse tree for the string that matched. Unlike static regexes, dynamic regexes are always embedded by reference, not by value.
The calculator examples above raise a number of very complicated memory-management issues. Each of the four regex objects refers to the others, some directly and some indirectly, some by value and some by reference. What if we were to return one of them from a function and let the others go out of scope? What becomes of the references? The answer is that the regex objects are internally reference counted, such that they keep their referenced regex objects alive as long as they need them. So passing a regex object by value is never a problem, even if it refers to other regex objects that have gone out of scope.

Those of you who have dealt with reference counting are probably familiar with its Achilles heel: cyclic references. If regex objects are reference counted, what happens to cycles like the one created in the calculator examples? Are they leaked? The answer is no, they are not leaked. The basic_regex<> object has some tricky reference tracking code that ensures that even cyclic regex grammars are cleaned up when the last external reference goes away. So don't worry about it. Create cyclic grammars, pass your regex objects around and copy them all you want. It is fast and efficient and guaranteed not to leak or result in dangling references.

Nested regular expressions raise the issue of sub-match scoping. If both the inner and outer regex write to and read from the same sub-match vector, chaos would ensue. The inner regex would stomp on the sub-matches written by the outer regex. For example, what does this do?

```cpp
sregex inner = sregex::compile( "(.)\\1" );
sregex outer = (s1= _) >> inner >> s1;
```

The author probably didn't intend for the inner regex to overwrite the sub-match written by the outer regex. The problem is particularly acute when the inner regex is accepted from the user as input. The author has no way of knowing whether the inner regex will stomp the sub-match vector or not. This is clearly not acceptable.
Instead, what actually happens is that each invocation of a nested regex gets its own scope. Sub-matches belong to that scope. That is, each nested regex invocation gets its own copy of the sub-match vector to play with, so there is no way for an inner regex to stomp on the sub-matches of an outer regex. So, for example, the regex outer defined above would match "ABBA", as it should.

If nested regexes have their own sub-matches, there should be a way to access them after a successful match. In fact, there is. After a regex_match() or regex_search(), the match_results<> struct behaves like the head of a tree of nested results. The match_results<> class provides a nested_results() member function that returns an ordered sequence of match_results<> structures, representing the results of the nested regexes. The order of the nested results is the same as the order in which the nested regex objects matched.

Take as an example the regex for balanced, nested parentheses we saw earlier:

```cpp
sregex parentheses;
parentheses = '(' >> *( keep( +~(set='(',')') ) | by_ref(parentheses) ) >> ')';

smatch what;
std::string str( "blah blah( a(b)c (c(e)f (g)h )i (j)6 )blah" );

if( regex_search( str, what, parentheses ) )
{
    // display the whole match
    std::cout << what[0] << '\n';

    // display the nested results
    std::for_each(
        what.nested_results().begin(),
        what.nested_results().end(),
        output_nested_results() );
}
```

This program displays the following:

( a(b)c (c(e)f (g)h )i (j)6 )
(b)
(c(e)f (g)h )
(e)
(g)
(j)

Here you can see how the results are nested and that they are stored in the order in which they are found.

Sometimes a regex will have several nested regex objects, and you want to know which result corresponds to which regex object. That's where basic_regex<>::regex_id() and match_results<>::regex_id() come in handy. When iterating over the nested results, you can compare the regex id from the results to the id of the regex object you're interested in.
To make this a bit easier, xpressive provides a predicate to make it simple to iterate over just the results that correspond to a certain nested regex. It is called regex_id_filter_predicate, and it is intended to be used with Boost.Iterator. You can use it as follows:

```cpp
sregex name = +alpha;
sregex integer = +_d;
sregex re = *( *_s >> ( name | integer ) );

smatch what;
std::string str( "marsha 123 jan 456 cindy 789" );

if( regex_match( str, what, re ) )
{
    smatch::nested_results_type::const_iterator begin = what.nested_results().begin();
    smatch::nested_results_type::const_iterator end   = what.nested_results().end();

    // declare filter predicates to select just the names or the integers
    sregex_id_filter_predicate name_id( name.regex_id() );
    sregex_id_filter_predicate integer_id( integer.regex_id() );

    // iterate over only the results from the name regex
    std::for_each(
        boost::make_filter_iterator( name_id, begin, end ),
        boost::make_filter_iterator( name_id, end, end ),
        output_result );

    std::cout << '\n';

    // iterate over only the results from the integer regex
    std::for_each(
        boost::make_filter_iterator( integer_id, begin, end ),
        boost::make_filter_iterator( integer_id, end, end ),
        output_result );
}
```

where output_result is a simple function that takes a smatch and displays the full match. Notice how we use the regex_id_filter_predicate together with basic_regex<>::regex_id() and boost::make_filter_iterator() from Boost.Iterator to select only those results corresponding to a particular nested regex. This program displays the following:

marsha
jan
cindy

123
456
789

Imagine you want to parse an input string and build a std::map<> from it. For something like that, matching a regular expression isn't enough. You want to do something when parts of your regular expression match. Xpressive lets you attach semantic actions to parts of your static regular expressions. This section shows you how.
Consider the following code, which uses xpressive's semantic actions to parse a string of word/integer pairs and stuffs them into a std::map<>. It is described below.

```cpp
#include <map>
#include <string>
#include <iostream>
#include <boost/xpressive/xpressive.hpp>
#include <boost/xpressive/regex_actions.hpp>
using namespace boost::xpressive;

int main()
{
    std::map<std::string, int> result;
    std::string str("aaa=>1 bbb=>23 ccc=>456");

    // Match a word and an integer, separated by =>,
    // and then stuff the result into a std::map<>
    sregex pair = ( (s1= +_w) >> "=>" >> (s2= +_d) )
        [ ref(result)[s1] = as<int>(s2) ];

    // Match one or more word/integer pairs, separated
    // by whitespace.
    sregex rx = pair >> *(+_s >> pair);

    if(regex_match(str, rx))
    {
        std::cout << result["aaa"] << '\n';
        std::cout << result["bbb"] << '\n';
        std::cout << result["ccc"] << '\n';
    }

    return 0;
}
```

This program prints the following:

1
23
456

The regular expression pair has two parts: the pattern and the action. The pattern says to match a word, capturing it in sub-match 1, and an integer, capturing it in sub-match 2, separated by "=>". The action is the part in square brackets: [ ref(result)[s1] = as<int>(s2) ]. It says to take sub-match 1 and use it to index into the result map, and assign to it the result of converting sub-match 2 to an integer.

How does this work? Just like the rest of the static regular expression, the part between brackets is an expression template. It encodes the action and executes it later. The expression ref(result) creates a lazy reference to the result object. The larger expression ref(result)[s1] is a lazy map index operation. Later, when this action is getting executed, s1 gets replaced with the first sub_match<>. Likewise, when as<int>(s2) gets executed, s2 is replaced with the second sub_match<>. The as<> action converts its argument to the requested type using Boost.Lexical_cast. The effect of the whole action is to insert a new word/integer pair into the map.
In addition to the sub-match placeholders s1, s2, etc., you can also use the placeholder _ within an action to refer back to the string matched by the sub-expression to which the action is attached. For instance, you can use the following regex to match a bunch of digits, interpret them as an integer and assign the result to a local variable:

```cpp
int i = 0;

// Here, _ refers back to all the
// characters matched by (+_d)
sregex rex = (+_d)[ ref(i) = as<int>(_) ];
```

What does it mean, exactly, to attach an action to part of a regular expression and perform a match? When does the action execute? If the action is part of a repeated sub-expression, does the action execute once or many times? And if the sub-expression initially matches, but ultimately fails because the rest of the regular expression fails to match, is the action executed at all?

The answer is that by default, actions are executed lazily. When a sub-expression matches a string, its action is placed on a queue, along with the current values of any sub-matches to which the action refers. If the match algorithm must backtrack, actions are popped off the queue as necessary. Only after the entire regex has matched successfully are the actions actually executed. They are executed all at once, in the order in which they were added to the queue, as the last step before regex_match() returns.

For example, consider the following regex that increments a counter whenever it finds a digit.

```cpp
int i = 0;
std::string str("1!2!3?");

// count the exciting digits, but not the
// questionable ones.
sregex rex = +( _d [ ++ref(i) ] >> '!' );
regex_search(str, rex);

assert( i == 2 );
```

The action ++ref(i) is queued three times: once for each found digit. But it is only executed twice: once for each digit that precedes a '!' character. When the '?' character is encountered, the match algorithm backtracks, removing the final action from the queue.
When you want semantic actions to execute immediately, you can wrap the sub-expression containing the action in a keep(). keep() turns off back-tracking for its sub-expression, but it also causes any actions queued by the sub-expression to execute at the end of the keep(). It is as if the sub-expression in the keep() were compiled into an independent regex object, and matching the keep() is like a separate invocation of regex_search(). It matches characters and executes actions but never backtracks or unwinds. For example, imagine the above example had been written as follows:

```cpp
int i = 0;
std::string str("1!2!3?");

// count all the digits.
sregex rex = +( keep( _d [ ++ref(i) ] ) >> '!' );
regex_search(str, rex);

assert( i == 3 );
```

We have wrapped the sub-expression _d [ ++ref(i) ] in keep(). Now, whenever this regex matches a digit, the action will be queued and then immediately executed before we try to match a '!' character. In this case, the action executes three times.

So far, we've seen how to write semantic actions consisting of variables and operators. But what if you want to be able to call a function from a semantic action? Xpressive provides a mechanism to do this.

The first step is to define a function object type. Here, for instance, is a function object type that calls push() on its argument:

```cpp
struct push_impl
{
    // Result type, needed for tr1::result_of
    typedef void result_type;

    template<typename Sequence, typename Value>
    void operator()(Sequence &seq, Value const &val) const
    {
        seq.push(val);
    }
};
```

The next step is to use xpressive's function<> template to define a function object named push:

```cpp
// Global "push" function object.
function<push_impl>::type const push = {{}};
```

The initialization looks a bit odd, but this is because push is being statically initialized. That means it doesn't need to be constructed at runtime. We can use push in semantic actions as follows:

```cpp
std::stack<int> ints;

// Match digits, cast them to an int
// and push it on the stack.
sregex rex = (+_d)[push(ref(ints), as<int>(_))];
```

You'll notice that doing it this way causes member function invocations to look like ordinary function invocations. You can choose to write your semantic action in a different way that makes it look a bit more like a member function call:

```cpp
sregex rex = (+_d)[ref(ints)->*push(as<int>(_))];
```

Xpressive recognizes the use of the ->* operator and treats this expression exactly the same as the one above.

When your function object must return a type that depends on its arguments, you can use a result<> member template instead of the result_type typedef. Here, for example, is a first function object that returns the first member of a std::pair<> or sub_match<>:

```cpp
// Function object that returns the
// first element of a pair.
struct first_impl
{
    template<typename Sig> struct result {};

    template<typename This, typename Pair>
    struct result<This(Pair)>
    {
        typedef typename remove_reference<Pair>
            ::type::first_type type;
    };

    template<typename Pair>
    typename Pair::first_type operator()(Pair const &p) const
    {
        return p.first;
    }
};

// OK, use as first(s1) to get the begin iterator
// of the sub-match referred to by s1.
function<first_impl>::type const first = {{}};
```

As we've seen in the examples above, we can refer to local variables within an action using xpressive::ref(). Any such variables are held by reference by the regular expression, and care should be taken to avoid letting those references dangle. For instance, in the following code, the reference to i is left to dangle when bad_voodoo() returns:

```cpp
sregex bad_voodoo()
{
    int i = 0;
    sregex rex = +( _d [ ++ref(i) ] >> '!' );

    // ERROR! rex refers by reference to a local
    // variable, which will dangle after bad_voodoo()
    // returns.
    return rex;
}
```

When writing semantic actions, it is your responsibility to make sure that all the references do not dangle. One way to do that would be to make the variables shared pointers that are held by the regex by value.
```cpp
sregex good_voodoo(boost::shared_ptr<int> pi)
{
    // Use val() to hold the shared_ptr by value:
    sregex rex = +( _d [ ++*val(pi) ] >> '!' );

    // OK, rex holds a reference count to the integer.
    return rex;
}
```

In the above code, we use xpressive::val() to hold the shared pointer by value. That's not normally necessary because local variables appearing in actions are held by value by default, but in this case, it is necessary. Had we written the action as ++*pi, it would have executed immediately. That's because ++*pi is not an expression template, but ++*val(pi) is.

It can be tedious to wrap all your variables in ref() and val() in your semantic actions. Xpressive provides the reference<> and value<> templates to make things easier. The following equivalencies hold:

- int i = 0; reference<int> ri(i); -- within an action, ri behaves the same as ref(i).
- int i = 0; value<int> vi(i); -- within an action, vi behaves the same as val(i).

As you can see, when using reference<>, you need to first declare a local variable and then declare a reference<> to it. These two steps can be combined into one using local<>. We can use local<> to rewrite the above example as follows:

```cpp
local<int> i(0);
std::string str("1!2!3?");

// count the exciting digits, but not the
// questionable ones.
sregex rex = +( _d [ ++i ] >> '!' );
regex_search(str, rex);

assert( i.get() == 2 );
```

Notice that we use local<>::get() to access the value of the local variable. Also, beware that local<> can be used to create a dangling reference, just as reference<> can.

In the beginning of this section, we used a regex with a semantic action to parse a string of word/integer pairs and stuff them into a std::map<>. That required that the map and the regex be defined together and used before either could go out of scope. What if we wanted to define the regex once and use it to fill lots of different maps? We would rather pass the map into the algorithm than embed a reference to it directly in the regex object. What we can do instead is define a placeholder and use that in the semantic action instead of the map itself.
Later, when we call one of the regex algorithms, we can bind the placeholder to an actual map object. The following code shows how.

```cpp
// Define a placeholder for a map object:
placeholder<std::map<std::string, int> > _map;

// Match a word and an integer, separated by =>,
// and then stuff the result into a std::map<>
sregex pair = ( (s1= +_w) >> "=>" >> (s2= +_d) )
    [ _map[s1] = as<int>(s2) ];

// Match one or more word/integer pairs, separated
// by whitespace.
sregex rx = pair >> *(+_s >> pair);

// The string to parse
std::string str("aaa=>1 bbb=>23 ccc=>456");

// Here is the actual map to fill in:
std::map<std::string, int> result;

// Bind the _map placeholder to the actual map
smatch what;
what.let( _map = result );

// Execute the match and fill in result map
if(regex_match(str, what, rx))
{
    std::cout << result["aaa"] << '\n';
    std::cout << result["bbb"] << '\n';
    std::cout << result["ccc"] << '\n';
}
```

This program displays:

1
23
456

We use placeholder<> here to define _map, which stands in for a std::map<> variable. We can use the placeholder in the semantic action as if it were a map. Then, we define a match_results<> struct and bind an actual map to the placeholder with "what.let( _map = result );". The regex_match() call behaves as if the placeholder in the semantic action had been replaced with a reference to result.

The syntax for late-bound action arguments is a little different if you are using regex_iterator<> or regex_token_iterator<>. The regex iterators accept an extra constructor parameter for specifying the argument bindings. There is a let() function that you can use to bind variables to their placeholders. The following code demonstrates how.
```cpp
// Define a placeholder for a map object:
placeholder<std::map<std::string, int> > _map;

// Match a word and an integer, separated by =>,
// and then stuff the result into a std::map<>
sregex pair = ( (s1= +_w) >> "=>" >> (s2= +_d) )
    [ _map[s1] = as<int>(s2) ];

// The string to parse
std::string str("aaa=>1 bbb=>23 ccc=>456");

// Here is the actual map to fill in:
std::map<std::string, int> result;

// Create a regex_iterator to find all the matches
sregex_iterator it(str.begin(), str.end(), pair, let(_map=result));
sregex_iterator end;

// step through all the matches, and fill in
// the result map
while(it != end)
    ++it;

std::cout << result["aaa"] << '\n';
std::cout << result["bbb"] << '\n';
std::cout << result["ccc"] << '\n';
```

This program displays:

1
23
456

You are probably already familiar with regular expression assertions. In Perl, some examples are the ^ and $ assertions, which you can use to match the beginning and end of a string, respectively. Xpressive lets you define your own assertions. A custom assertion is a condition which must be true at a point in the match in order for the match to succeed. You can check a custom assertion with xpressive's check() function.

There are a couple of ways to define a custom assertion. The simplest is to use a function object. Let's say that you want to ensure that a sub-expression matches a sub-string that is either 3 or 6 characters long. The following struct defines such a predicate:

```cpp
// A predicate that is true IFF a sub-match is
// either 3 or 6 characters long.
struct three_or_six
{
    bool operator()(ssub_match const &sub) const
    {
        return sub.length() == 3 || sub.length() == 6;
    }
};
```

You can use this predicate within a regular expression as follows:

```cpp
// match words of 3 characters or 6 characters.
sregex rx = (bow >> +_w >> eow)[ check(three_or_six()) ] ;
```

The above regular expression will find whole words that are either 3 or 6 characters long.
The three_or_six predicate accepts a sub_match<> that refers back to the part of the string matched by the sub-expression to which the custom assertion is attached.

Custom assertions can also be defined inline using the same syntax as for semantic actions. Below is the same custom assertion written inline:

// match words of 3 characters or 6 characters.
sregex rx = (bow >> +_w >> eow)[ check(length(_)==3 || length(_)==6) ];

In the above, length() is a lazy function that calls the length() member function of its argument, and _ is a placeholder that receives the sub_match.

Once you get the hang of writing custom assertions inline, they can be very powerful. For example, you can write a regular expression that only matches valid dates (for some suitably liberal definition of the term "valid").

int const days_per_month[] =
    {31, 29, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31};

mark_tag month(1), day(2);

// find a valid date of the form month/day/year.
sregex date =
    (
        // Month must be between 1 and 12 inclusive
        (month= _d >> !_d)
            [ check(as<int>(_) >= 1 && as<int>(_) <= 12) ]
    >>  '/'
        // Day must be between 1 and 31 inclusive
    >>  (day= _d >> !_d)
            [ check(as<int>(_) >= 1 && as<int>(_) <= 31) ]
    >>  '/'
        // Only consider years between 1970 and 2038
    >>  (_d >> _d >> _d >> _d)
            [ check(as<int>(_) >= 1970 && as<int>(_) <= 2038) ]
    )
    // Ensure the month actually has that many days!
    [ check( ref(days_per_month)[as<int>(month)-1] >= as<int>(day) ) ];

smatch what;
std::string str("99/99/9999 2/30/2006 2/28/2006");

if(regex_search(str, what, date))
{
    std::cout << what[0] << std::endl;
}

The above program prints out the following:

2/28/2006

Notice how the inline custom assertions are used to range-check the values for the month, day and year. The regular expression doesn't match "99/99/9999" or "2/30/2006" because they are not valid dates. (There is no 99th month, and February doesn't have 30 days.)

Symbol tables can be built into xpressive regular expressions with just a std::map<>.
The map keys are the strings to be matched and the map values are the data to be returned to your semantic action. Xpressive attributes, named a1 through a9, hold the value corresponding to a matching key so that it can be used in a semantic action. A default value can be specified for an attribute if a symbol is not found.

An xpressive symbol table is just a std::map<>, where the key is a string type and the value can be anything. For example, the following regular expression matches a key from map1 and assigns the corresponding value to the attribute a1. Then, in the semantic action, it assigns the value stored in attribute a1 to an integer result.

int result;
std::map<std::string, int> map1;
// ... (fill the map)
sregex rx = ( a1 = map1 ) [ ref(result) = a1 ];

Consider the following example code, which translates number names into integers. It is described below.

#include <string>
#include <iostream>
#include <boost/xpressive/xpressive.hpp>
#include <boost/xpressive/regex_actions.hpp>
using namespace boost::xpressive;

int main()
{
    std::map<std::string, int> number_map;
    number_map["one"] = 1;
    number_map["two"] = 2;
    number_map["three"] = 3;

    // Match a string from number_map
    // and store the integer value in 'result';
    // if not found, store -1 in 'result'
    int result = 0;
    cregex rx = ((a1 = number_map ) | *_)
        [ ref(result) = (a1 | -1)];

    regex_match("three", rx);
    std::cout << result << '\n';

    regex_match("two", rx);
    std::cout << result << '\n';

    regex_match("stuff", rx);
    std::cout << result << '\n';

    return 0;
}

This program prints the following:

3
2
-1

First the program builds a number map, with number names as string keys and the corresponding integers as values. Then it constructs a static regular expression using an attribute a1 to represent the result of the symbol table lookup. In the semantic action, the attribute is assigned to an integer variable result. If the symbol was not found, a default value of -1 is assigned to result.
A wildcard, *_, makes sure the regex matches even if the symbol is not found. A more complete version of this example can be found in libs/xpressive/example/numbers.cpp[8]. It translates number names up to "nine hundred ninety nine million nine hundred ninety nine thousand nine hundred ninety nine" along with some special number names like "dozen".

Symbol table matches are case sensitive by default, but they can be made case-insensitive by enclosing the expression in icase().

Up to nine attributes can be used in a regular expression. They are named a1, a2, ..., a9 in the boost::xpressive namespace. The attribute type is the same as the second component of the map that is assigned to it. A default value for an attribute can be specified in a semantic action with the syntax (a1 | default-value).

Attributes are properly scoped, so you can do crazy things like: ( (a1=sym1) >> (a1=sym2)[ref(x)=a1] )[ref(y)=a1]. The inner semantic action sees the inner a1, and the outer semantic action sees the outer one. They can even have different types.

Matching a regular expression against a string often requires locale-dependent information. For example, how are case-insensitive comparisons performed? The locale-sensitive behavior is captured in a traits class. Xpressive provides three traits class templates: cpp_regex_traits<>, c_regex_traits<> and null_regex_traits<>. The first wraps a std::locale, the second wraps the global C locale, and the third is a stub traits type for use when searching non-character data. All traits templates conform to the Regex Traits Concept.

By default, xpressive uses cpp_regex_traits<> for all patterns. This causes all regex objects to use the global std::locale. If you compile with BOOST_XPRESSIVE_USE_C_TRAITS defined, then xpressive will use c_regex_traits<> by default.

To create a dynamic regex that uses a custom traits object, you must use regex_compiler<>.
The basic steps are shown in the following example:

// Declare a regex_compiler that uses the global C locale
regex_compiler<char const *, c_regex_traits<char> > crxcomp;
cregex crx = crxcomp.compile( "\\w+" );

// Declare a regex_compiler that uses a custom std::locale
std::locale loc = /* ... create a locale here ... */;
regex_compiler<char const *, cpp_regex_traits<char> > cpprxcomp(loc);
cregex cpprx = cpprxcomp.compile( "\\w+" );

The regex_compiler objects act as regex factories. Once they have been imbued with a locale, every regex object they create will use that locale.

If you want a particular static regex to use a different set of traits, you can use the special imbue() pattern modifier. For instance:

// Define a regex that uses the global C locale
c_regex_traits<char> ctraits;
sregex crx = imbue(ctraits)( +_w );

// Define a regex that uses a customized std::locale
std::locale loc = /* ... create a locale here ... */;
cpp_regex_traits<char> cpptraits(loc);
sregex cpprx1 = imbue(cpptraits)( +_w );

// A shorthand for the above
sregex cpprx2 = imbue(loc)( +_w );

The imbue() pattern modifier must wrap the entire pattern. It is an error to imbue only part of a static regex. For example:

// ERROR! Cannot imbue() only part of a regex
sregex error = _w >> imbue(loc)( _w );

null_regex_traits

With xpressive static regexes, you are not limited to searching for patterns in character sequences. You can search for patterns in raw bytes, integers, or anything that conforms to the Char Concept. The null_regex_traits<> makes it simple. It is a stub implementation of the Regex Traits Concept. It recognizes no character classes and does no case-sensitive mappings. For example, with null_regex_traits<>, you can write a static regex to find a pattern in a sequence of integers as follows:

// some integral data to search
int const data[] = {0, 1, 2, 3, 4, 5, 6};

// create a null_regex_traits<> object for searching integers ...
null_regex_traits<int> nul;

// imbue a regex object with the null_regex_traits ...
basic_regex<int const *> rex = imbue(nul)(1 >> +((set= 2,3) | 4) >> 5);
match_results<int const *> what;

// search for the pattern in the array of integers ...
regex_search(data, data + 7, what, rex);

assert(what[0].matched);
assert(*what[0].first == 1);
assert(*what[0].second == 6);

Squeeze the most performance out of xpressive with these tips and tricks.

Compiling a regex (dynamic or static) is far more expensive than executing a match or search. If you have the option, prefer to compile a pattern into a basic_regex<> object once and reuse it rather than recreating it over and over. Since basic_regex<> objects are not mutated by any of the regex algorithms, they are completely thread-safe once their initialization (and that of any grammars of which they are members) completes. The easiest way to reuse your patterns is to simply make your basic_regex<> objects "static const".

Reuse match_results<> Objects

The match_results<> object caches dynamically allocated memory. For this reason, it is far better to reuse the same match_results<> object if you have to do many regex searches. Caveat: match_results<> objects are not thread-safe, so don't go wild reusing them across threads.

Prefer Algorithms That Take a match_results<> Object

This is a corollary to the previous tip. If you are doing multiple searches, you should prefer the regex algorithms that accept a match_results<> object over the ones that don't, and you should reuse the same match_results<> object each time. If you don't provide a match_results<> object, a temporary one will be created for you and discarded when the algorithm returns. Any memory cached in the match_results<> object will be deallocated and will have to be reallocated the next time.

Xpressive provides overloads of the regex_search() and regex_match() algorithms that operate on C-style null-terminated strings. You should prefer the overloads that take iterator ranges.
When you pass a null-terminated string to a regex algorithm, the end iterator is calculated immediately by calling strlen(). If you already know the length of the string, you can avoid this overhead by calling the regex algorithms with a [begin, end) pair.

On average, static regexes execute about 10 to 15% faster than their dynamic counterparts. It's worth familiarizing yourself with the static regex dialect.

syntax_option_type::optimize

The optimize flag tells the regex compiler to spend some extra time analyzing the pattern. It can cause some patterns to execute faster, but it increases the time to compile the pattern, and often increases the amount of memory consumed by the pattern. If you plan to reuse your pattern, optimize is usually a win. If you will only use the pattern once, don't use optimize.

Keep the following tips in mind to avoid stepping in potholes with xpressive.

With static regexes, you can create grammars by nesting regexes inside one another. When compiling the outer regex, both the outer and inner regex objects, and all the regex objects to which they refer either directly or indirectly, are modified. For this reason, it's dangerous for global regex objects to participate in grammars. It's best to build regex grammars from a single thread. Once built, the resulting regex grammar can be executed from multiple threads without problems.

This is a pitfall common to many regular expression engines. Some patterns can cause exponentially bad performance. Often these patterns involve one quantified term nested within another quantifier, such as "(a*)*", although in many cases the problem is harder to spot. Beware of patterns that have nested quantifiers.

If type BidiIterT is used as a template argument to basic_regex<>, then CharT is iterator_traits<BidiIterT>::value_type. Type CharT must have a trivial default constructor, copy constructor, assignment operator, and destructor.
In addition, the following requirements must be met for objects c of type CharT, c1 and c2 of type CharT const, and i of type int:

In the following table, X denotes a traits class defining types and functions for the character container type CharT; u is an object of type X; v is an object of type const X; p is a value of type const CharT*; I1 and I2 are Input Iterators; c is a value of type const CharT; s is an object of type X::string_type; cs is an object of type const X::string_type; b is a value of type bool; i is a value of type int; F1 and F2 are values of type const CharT*; loc is an object of type X::locale_type; and ch is an object of const char.

This section is adapted from the equivalent page in the Boost.Regex documentation and from the proposal to add regular expressions to the Standard Library.

Below you can find six complete sample programs.

This is the example from the Introduction. It is reproduced here for your convenience. Notice in this example how we use custom mark_tags to make the pattern more readable. We can use the mark_tags later to index into the match_results<>.

#include <iostream>
#include <boost/xpressive/xpressive.hpp>
using namespace boost::xpressive;

int main()
{
    char const *str = "I was born on 5/30/1973 at 7am.";

    // define some custom mark_tags with names more meaningful than s1, s2, etc.
    mark_tag day(1), month(2), year(3), delim(4);

    // this regex finds a date
    cregex date = (month= repeat<1,2>(_d))           // find the month ...
               >> (delim= (set= '/','-'))            // followed by a delimiter ...
               >> (day=   repeat<1,2>(_d)) >> delim  // and a day followed by the same delimiter ...
               >> (year=  repeat<1,2>(_d >> _d));    // and the year.
    cmatch what;

    if( regex_search( str, what, date ) )
    {
        std::cout << what[0] << '\n';     // whole match
        std::cout << what[day] << '\n';   // the day
        std::cout << what[month] << '\n'; // the month
        std::cout << what[year] << '\n';  // the year
        std::cout << what[delim] << '\n'; // the delimiter
    }

    return 0;
}

This program outputs the following:

5/30/1973
30
5
1973
/

The following program finds dates in a string and marks them up with pseudo-HTML.

#include <iostream>
#include <boost/xpressive/xpressive.hpp>
using namespace boost::xpressive;

int main()
{
    std::string str( "I was born on 5/30/1973 at 7am." );

    // essentially the same regex as in the previous example, but using a dynamic regex
    sregex date = sregex::compile( "(\\d{1,2})([/-])(\\d{1,2})\\2((?:\\d{2}){1,2})" );

    // As in Perl, $& is a reference to the sub-string that matched the regex
    std::string format( "<date>$&</date>" );

    str = regex_replace( str, date, format );
    std::cout << str << '\n';

    return 0;
}

This program outputs the following:

I was born on <date>5/30/1973</date> at 7am.

The following program finds the words in a wide-character string. It uses wsregex_iterator. Notice that dereferencing a wsregex_iterator yields a wsmatch object.

#include <iostream>
#include <boost/xpressive/xpressive.hpp>
using namespace boost::xpressive;

int main()
{
    std::wstring str( L"This is his face." );

    // find a whole word
    wsregex token = +alnum;

    wsregex_iterator cur( str.begin(), str.end(), token );
    wsregex_iterator end;

    for( ; cur != end; ++cur )
    {
        wsmatch const &what = *cur;
        std::wcout << what[0] << L'\n';
    }

    return 0;
}

This program outputs the following:

This
is
his
face

The following program finds race times in a string and displays first the minutes and then the seconds. It uses regex_token_iterator<>.
#include <iostream>
#include <boost/xpressive/xpressive.hpp>
using namespace boost::xpressive;

int main()
{
    std::string str( "Eric: 4:40, Karl: 3:35, Francesca: 2:32" );

    // find a race time
    sregex time = sregex::compile( "(\\d):(\\d\\d)" );

    // for each match, the token iterator should first take the value of
    // the first marked sub-expression followed by the value of the second
    // marked sub-expression
    int const subs[] = { 1, 2 };

    sregex_token_iterator cur( str.begin(), str.end(), time, subs );
    sregex_token_iterator end;

    for( ; cur != end; ++cur )
    {
        std::cout << *cur << '\n';
    }

    return 0;
}

This program outputs the following:

4
40
3
35
2
32

The following program takes some text that has been marked up with html and strips out the mark-up. It uses a regex that matches an HTML tag and a regex_token_iterator<> that returns the parts of the string that do not match the regex.

#include <iostream>
#include <boost/xpressive/xpressive.hpp>
using namespace boost::xpressive;

int main()
{
    std::string str( "Now <bold>is the time <i>for all good men</i> to come to the aid of their</bold> country." );

    // find an HTML tag
    sregex html = '<' >> optional('/') >> +_w >> '>';

    // the -1 below directs the token iterator to display the parts of
    // the string that did NOT match the regular expression.
    sregex_token_iterator cur( str.begin(), str.end(), html, -1 );
    sregex_token_iterator end;

    for( ; cur != end; ++cur )
    {
        std::cout << '{' << *cur << '}';
    }
    std::cout << '\n';

    return 0;
}

This program outputs the following:

{Now }{is the time }{for all good men}{ to come to the aid of their}{ country.}

Here is a helper class to demonstrate how you might display a tree of nested results:

// Displays nested results to std::cout with indenting
struct output_nested_results
{
    int tabs_;

    output_nested_results( int tabs = 0 )
      : tabs_( tabs )
    {
    }

    template< typename BidiIterT >
    void operator ()( match_results< BidiIterT > const &what ) const
    {
        // first, do some indenting
        typedef typename std::iterator_traits< BidiIterT >::value_type char_type;
        char_type space_ch = char_type(' ');
        std::fill_n( std::ostream_iterator<char_type>( std::cout ), tabs_ * 4, space_ch );

        // output the match
        std::cout << what[0] << '\n';

        // output any nested matches
        std::for_each(
            what.nested_results().begin(),
            what.nested_results().end(),
            output_nested_results( tabs_ + 1 ) );
    }
};

[7] See Expression Templates
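The tag-stripping idiom in the last example is not unique to xpressive. As a rough illustration (not part of the original documentation), the same -1 trick works with the standard library's std::sregex_token_iterator; the helper name non_matches below is my own:

```cpp
#include <cassert>
#include <regex>
#include <string>
#include <vector>

// Collect the pieces of the input that do NOT match the pattern,
// mirroring the -1 token-iterator trick from the example above.
std::vector<std::string> non_matches(const std::string& s, const std::string& pat)
{
    std::regex re(pat);
    std::sregex_token_iterator cur(s.begin(), s.end(), re, -1), end;
    return std::vector<std::string>(cur, end);
}
```

Passing -1 as the sub-match index selects the text between matches, exactly as in the xpressive version.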
Until C# 9.0, it was all about the Main() method, where program control starts and ends. With C# 9.0 you no longer need to write the Main() method or a class definition explicitly, thanks to top-level statements. Then how do you pass command-line arguments to your program, especially when you are running it from Visual Studio? In this post, let's look at how to pass command-line arguments in Visual Studio for a C# 9.0 top-level statement program.

Typical C# Console Vs Top-Level Statement in C# 9.0

A typical Hello World console application for C# looks as below:

using System;

namespace TopLevelStatementCsharp
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine("Hello World!");
        }
    }
}

With top-level statements in C# 9.0, you can achieve the same using just the following line of code:

System.Console.WriteLine("Hello World!");

This produces the same output as the previous code block.

Command Line Arguments with Visual Studio

You can pass one or more command-line arguments to a top-level statement program; they arrive through the implicit args parameter:

System.Console.WriteLine($"Hello {args?[0]}");

In Visual Studio you can set the parameter from the Debug tab as "Application arguments". Once this is set, you can run and test it out.

Read: How to Pass Command Line Arguments using Visual Studio?
Core Java Technology Section Index | Page 9

How can I traverse a List backwards?
In order to traverse a List backwards a ListIterator must be used. The List interface provides a method which returns a ListIterator: ListIterator<E> listIterator(). Using a returned L...more

How can I create a thread pool?
The Executors class of the java.util.concurrent package offers newCachedThreadPool(), newCachedThreadPool(ThreadFactory threadFactory), newFixedThreadPool(int nThreads), newFixedThreadPool(int nThre...more

How can I tell the difference between wait(timeout) timing out and actually being notified?
You can't directly by using wait and notify, but you can use a similar construct with the Lock interface of the java.util.concurrent.locks package, with its boolean tryLock(long time, TimeUnit uni...more

How do I use the for-each construct?
Any class that implements the Iterable interface can be used in a for-each statement: for (Datatype localvar : Iterable-instance). For instance, the following will print out each command-line argu...more

I have a bugfix and want to submit it to Sun. What do I do?
The best you can do is sign up as a contributor to Mustang, Sun's 6.0 release of Java SE. There is information on collaborating at

I've seen the letter P in hexadecimal constants, like 0XfP3. What does it mean?
As of JDK 5.0, the Java programming language now supports hexadecimal floating point literals. 0XfP3 = 120. The 0X prefix means what follows is hex. F = 15. P3 means 2 ^ 3 (2 raised to third power...more

Sometimes when I try to override a method, I get the signature wrong. The program still compiles, but the code never runs. How can I make sure my method actually overrides the method of the superclass?
The @Override annotation can be added to the javadoc for the new method. If you accidentally miss an argument or capitalize the method name wrong, the compiler will generate a compile-time error.more

What does autoboxing and unboxing mean?
Autoboxing is not casting.
Instead, it is the automatic conversion of a primitive data type to its wrapped Object type (boxing) and back (unboxing). For instance, Integer i = 3;, used to generate ...more

What is the deal with the Closeable and Flushable interfaces?
The ability to close() or flush() an object (typically a stream) has been factored out into single-method interfaces, Closeable and Flushable, respectively. Given that you sometimes need to opera...more

What purpose does the TimeUnit class have?
The TimeUnit class allows you to work with time quantities as both an amount and a unit of measurement. For instance, instead of always working with milliseconds when you want to put a thread to sl...more

While I can certainly show the contents of a web page in a JEditorPane, how do I launch the desktop's browser with the new URL?
The JDesktop Integration Components (JDIC), available from, offers support for this:

import java.net.*;
import org.jdesktop.jdic.desktop.*;

public class LaunchURL {
...more

Why would I want to use the Appendable interface?
The Appendable interface and its append() methods are implemented in classes that support adding characters to the end of an object, typically a stream. Indirectly used through the Formatter class...more

How can I tell if two Collections contain the same elements or have no elements in common?
Two methods are needed in this case: boolean containsAll(Collection<?> c) and boolean disjoint(Collection<?> c1, Collection<?> c2). Since containsAll(Collection<?> c) is define...more
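The backwards-traversal answer at the top of this page can be fleshed out with a short example. Assuming a ListIterator obtained at the end of the list, a tail-to-head walk looks like this (the class and method names here are my own invention, not from the FAQ):

```java
import java.util.Arrays;
import java.util.List;
import java.util.ListIterator;

public class ReverseWalk {
    // Traverse the list from tail to head using hasPrevious()/previous().
    public static String reversedCsv(List<String> items) {
        StringBuilder sb = new StringBuilder();
        // listIterator(size) positions the iterator just past the last element.
        ListIterator<String> it = items.listIterator(items.size());
        while (it.hasPrevious()) {
            if (sb.length() > 0) {
                sb.append(",");
            }
            sb.append(it.previous());
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(reversedCsv(Arrays.asList("a", "b", "c")));
    }
}
```

The key detail is passing items.size() to listIterator(), so the first call to previous() yields the last element.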
using System;

public class Program
{
    public static void Main()
    {
        int dataSize = int.Parse(Console.ReadLine());
        double[] data = new double[dataSize];

        for(int i = 0; i < dataSize; i++)
        {
            data[i] = double.Parse(Console.ReadLine());
        }

        for(int i = 0; i < dataSize; i++)
        {
            Console.WriteLine(FindTrailingZeros(data[i]));
        }
    }

    public static double FindTrailingZeros(double n)
    {
        double quotient = 1;
        double sumOfQuotients = 0;
        int power = 1;

        while(true)
        {
            quotient = n / Math.Pow(5, power);
            quotient = Math.Truncate(quotient);
            power++;

            if(quotient > 0)
            {
                sumOfQuotients += quotient;
            }
            else
            {
                return sumOfQuotients;
            }
        }
    }
}

Here's my code for the Factorial problem on CodeChef, though I think the title doesn't really describe the problem well. You are not supposed to find the factorial of a number but rather the number of trailing zeros of the factorial. My first instinct was to find the factorial, put the digits in a string, and iterate through the string to check for zeros. The problem with that algorithm, though, is that factorials are huge! Moreover, an input could be as large as 1 billion, which would produce an astronomically large number. So I read the editorial for the problem, and they had a little trick for finding the trailing zeros of a factorial which was way more efficient. Here's a link to the problem.

One thought on "CodeChef – Factorial"

Great work!
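The editorial trick the post refers to — counting how many times 5 divides into n! — can be restated in a few lines of Python (my own sketch, not the CodeChef editorial's code):

```python
def trailing_zeros(n):
    # Trailing zeros in n! = count of factor 5s: n//5 + n//25 + n//125 + ...
    # (factors of 2 are always more plentiful, so 5s are the bottleneck)
    count = 0
    power = 5
    while power <= n:
        count += n // power
        power *= 5
    return count
```

For n = 100 this gives 20 + 4 = 24 trailing zeros, and even n = 1 billion takes only about thirteen loop iterations.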
Named styles are lost on read

Hi,

Just saving a file with a style in a cell makes it lose the style in the output file. Here is the test program I used:

import openpyxl

wb = openpyxl.load_workbook("test.xlsx")
#ws = wb.active
#style = ws.cell("A1").style
#print(style)
#ws.cell("B1").style = style
#ws.cell("B1").value = u"Output"
wb.save("test_output.xlsx")

Looking at the print of the style object, it does not seem to have read the 12pt font or the green background color in the first place:

Style(font=Font(name='Calibri', sz=11.0, b=False, i=False, u=None, strike=False, color=Color(indexed=Value must be type 'int', auto=Value must be type 'bool', theme=Value must be type 'int'), vertAlign=None, charset=None, outline=False, shadow=False, condense=False, extend=False, family=2.0), fill=PatternFill(patternType=None, fgColor=Color(rgb='00000000', indexed=Value must be type 'int', auto=Value must be type 'bool', theme=Value must be type 'int', tint=0.0, type='rgb'), bgColor=Color(rgb='00000000', indexed=Value must be type 'int', auto=Value must be type 'bool', theme=Value must be type 'int', tint=0.0, type='rgb')), border=Border(left=Side(style=None, color=None), right=Side(style=None, color=None), top=Side(style=None, color=None), bottom=Side(style=None, color=None), diagonal=Side(style=None, color=None), diagonal_direction=None, vertical=None, horizontal=None), alignment=Alignment(horizontal='general', vertical='bottom', textRotation=0, wrapText=False, shrinkToFit=False, indent=0.0, relativeIndent=0.0, justifyLastLine=False, readingOrder=0.0), number_format='General', protection=)

The goal is to have the cell on the right (and potentially more) having the same style as the original cell. I attach the input and output XLSX files. The input file was produced on Mac Excel 2010. Sorry, I don't have any Windows Excel to test it on. Testing on openpyxl 2.1.2.

The output file in my test.

It looks like you're using a named style. These are currently not supported at all.
I'm hoping to add support for them in 2.2 but it requires some reworking of how styles work first.

Oh, nice! Indeed, I used one of Excel's built-in styles. When I manually set the font and background, the style is preserved and copied to cell B1. Thanks, this bug is basically solved for me. Feel free to close it if you don't need a ticket on named styles.

When not using named styles but inline (?) styles.

This is resolved in 2.3. Can you confirm?

Yes indeed, I can even use the commented code in my example to copy style from one cell to another. Thank you!

Good to know it's working for you. Support is currently only partial – any named styles that are around will be preserved but we're currently not reliably linking individual cell styles to named ones. The goal in the future is to have style definitions that are explicitly shared by lots of cells with only the occasional need for individual cell styles. This should make working with styles a lot easier and faster: we'll just pass around a reference to a named style instead of creating all the relevant objects and seeing if they don't exist already.

Removing version: 2.1.x (automated comment)
On Wed, 2002-11-27 at 09:01, Ian Sparks wrote:
> def delete(self):
>     self.killSelectedRecords()
>     self.response().redirect(self.response.url()) #pseudocode!

This is what I do. It works well. I keep a list of messages to display to the user in the session, so I add a message, redirect, and then the message is displayed when they get to the next page (and removed from the session).

If you read the HTTP spec, there's a particular response code that is best to use in this situation, but I can't remember which it is (it's in the 300's).

Ian

This isn't a webware question per-se but I'd like to ask how people handle POSTs.

For instance, I have a page which provides a list of items for deletion. Check a checkbox on the rows you want to delete and hit the "Delete" button.

def delete(self):
    self.killSelectedRecords()
    self.writeHTML()

i.e. the user gets a page back with the regenerated list (less those deleted).

The problem for me is the Browser Refresh button. If you click it now it's not going to do what it says ("Refresh"), it's going to re-post the delete action (which could be bad).

My temptation is to do:

def delete(self):
    self.killSelectedRecords()
    self.response().redirect(self.response.url()) #pseudocode!

i.e. redirect the browser to do a GET on the page they just POSTed to.

Is this what other folks do? What is the approved idiom? What are the trade-offs and hidden dangers?

- Ian Sparks.

Ian Sparks wrote:
> This isn't a webware question per-se but I'd like to ask how people handle POSTs.

I think the best thing to do is split it between (a) a page that does presentation logic (i.e. displays the list of items), and (b) a separate page that takes querystring or form parameters and performs some action (i.e. deletes an item from the list), then redirects to (a) or some appropriate destination. Refreshing (a) never performs an action. The user never notices the existence of (b).
Nick

Always have a hidden field with your forms that has a random value for "one time" only actions. Generate a suitable number/string and save this to the user session. Then before you perform any actions make sure this value 1) exists, 2) isn't "used", 3) then mark it as used. Now you should be safe for all kinds of Reloads, Back buttons etc.

Good luck.

> This isn't a webware question per-se but I'd like to ask how people
> handle POSTs.
>
> The problem for me is the Browser Refresh button. If you click it
> now it's not going to do what it says ("Refresh"), it's going to
> re-post the delete action (which could be bad).

Ian Bicking wrote:
> If you read the HTTP spec, there's a particular response code that is
> best to use in this situation, but I can't remember which it is (it's in
> the 300's).

Interesting...
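Putting the thread's two suggestions together — redirect after POST, plus the one-time hidden-field token — a rough framework-neutral sketch might look like the following. The function names and the session-dict shape are my own; the 303 status is the HTTP spec's "See Other" code, which is the one intended for redirecting a browser to a GET after a POST:

```python
import secrets

def issue_token(session):
    # Stash a fresh one-time token in the session when rendering the form.
    token = secrets.token_hex(8)
    session.setdefault("tokens", set()).add(token)
    return token

def handle_delete(session, token, kill_selected_records):
    # Refuse replays: the token must exist and not have been used yet.
    tokens = session.get("tokens", set())
    if token not in tokens:
        return (409, "duplicate or unknown submission")
    tokens.discard(token)  # mark the token as used
    kill_selected_records()
    # Redirect the browser to a GET so Refresh re-fetches instead of re-posting.
    return (303, "/items")
```

A second submit of the same form (Refresh, Back button, double-click) finds the token already spent and the delete runs only once.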
In this article we'll go over the theory behind transfer learning and see how to carry out an example of transfer learning on Convolutional Neural Networks (CNNs) in PyTorch.

- What is PyTorch?
- Defining Necessary Terms
- Transfer Learning Theory
- Image Classification with Transfer Learning in PyTorch
- Conclusion

What is PyTorch?

PyTorch is a library developed for Python, specializing in deep learning and natural language processing. PyTorch takes advantage of the power of Graphical Processing Units (GPUs) to make training a deep neural network faster than training the same network on a CPU. PyTorch has seen increasing popularity with deep learning researchers thanks to its speed and flexibility.

PyTorch sells itself on three different features:

- A simple, easy-to-use interface
- Complete integration with the Python data science stack
- Flexible/dynamic computational graphs that can be changed during run time (which makes training a neural network significantly easier when you have no idea how much memory will be required for your problem)

PyTorch is compatible with NumPy, and it allows NumPy arrays to be transformed into tensors and vice versa.

Defining Necessary Terms

Before we go any further, let's take a moment to define some terms related to transfer learning. Getting clear on our definitions will make the theory behind transfer learning easier to understand, and an instance of transfer learning easier to implement and replicate.
The input layer is simply where the data being sent into the neural network is processed, while the middle or hidden layers are comprised of structures referred to as nodes or neurons. These nodes are mathematical functions which alter the input information in some way and pass the altered data on to the final layer, the output layer. Simple neural networks can distinguish simple patterns in the input data by adjusting the assumptions, or weights, about how the data points are related to one another.

A deep neural network gets its name from the fact that it is made out of many regular neural networks joined together. The more neural networks are linked together, the more complex patterns the deep neural network can distinguish and the more uses it has. There are different kinds of neural networks, with each type having its own specialty. For example, Long Short-Term Memory (LSTM) deep neural networks work very well on time-sensitive tasks, where the chronological order of data is important, like text or speech data.

What is a Convolutional Neural Network?

This article will be concerned with Convolutional Neural Networks, a type of neural network that excels at manipulating image data. Convolutional Neural Networks (CNNs) are special types of neural networks, adept at creating representations of visual data. The data in a CNN is represented as a grid which contains values that represent how bright, and what color, every pixel in the image is.

A CNN is broken down into three different components: the convolutional layers, the pooling layers, and the fully connected layers. The responsibility of the convolutional layer is to create a representation of the image by taking the dot product of two matrices. The first matrix is a set of learnable parameters, referred to as a kernel. The other matrix is a portion of the image being analyzed, which has a height, a width, and color channels.
The convolutional layers are where the most computation happens in a CNN. The kernel is moved across the entire width and height of the image, eventually producing a two-dimensional representation of the entire image, known as an activation map. Due to the sheer amount of information contained in the CNN's convolutional layers, it can take an extremely long time to train the network.

The function of the pooling layers is to reduce the amount of information contained in the CNN's convolutional layers, taking the output from one convolutional layer and scaling it down to make the representation simpler. The pooling layer accomplishes this by looking at different spots in the network's outputs and "pooling" the nearby values, coming up with a single value that represents them all. In other words, it takes a summary statistic of the values in a chosen region. Summarizing the values in a region means that the network can greatly reduce the size and complexity of its representation while still keeping the relevant information that will enable it to draw meaningful patterns from the image. There are various functions that can be used to summarize a region's values, such as taking the average of a neighborhood (Average Pooling). A weighted average of the neighborhood can also be taken, as can the L2 norm of the region. The most common pooling technique is Max Pooling, where the maximum value of the region is taken and used to represent the neighborhood.

The fully connected layer is where all the neurons are linked together, with connections between every preceding and succeeding layer in the network. This is where the information that has been extracted by the convolutional layers and pooled by the pooling layers is analyzed, and where patterns in the data are learned. The computations here are carried out through matrix multiplication combined with a bias effect.
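The three components just described can be seen working together in a small PyTorch sketch. Note that the layer sizes and the 32x32 input below are arbitrary choices for illustration, not values from this article:

```python
import torch
import torch.nn as nn

# A toy CNN mirroring the three components described above.
conv = nn.Conv2d(in_channels=3, out_channels=8, kernel_size=3, padding=1)
pool = nn.MaxPool2d(kernel_size=2)        # halves height and width
fc = nn.Linear(8 * 16 * 16, 2)            # fully connected classifier

x = torch.randn(1, 3, 32, 32)             # one 32x32 RGB image
feature_map = conv(x)                     # -> (1, 8, 32, 32) activation map
pooled = pool(feature_map)                # -> (1, 8, 16, 16), summarized
scores = fc(pooled.flatten(start_dim=1))  # -> (1, 2) class scores

print(feature_map.shape, pooled.shape, scores.shape)
```

The shapes printed at each stage show the pooling layer discarding three quarters of the activation map while the fully connected layer reduces everything to two class scores.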
There are also several nonlinearities present in the CNN. Considering that images themselves are nonlinear, the network has to have nonlinear components to be able to interpret the image data. The nonlinear layers are usually inserted into the network directly after the convolutional layers, as this gives the activation map nonlinearity.

There are a variety of different nonlinear activation functions that can be used to enable the network to properly interpret the image data. The most popular nonlinear activation function is ReLU, the Rectified Linear Unit. The ReLU function takes any value above zero and returns it as is, while any value below zero is returned as zero. ReLU is popular because of its reliability and speed, performing around six times faster than other activation functions. The downside to ReLU is that its units can easily get stuck when handling large gradients, never updating again. This problem can be mitigated by choosing an appropriate learning rate.

Two other popular nonlinear functions are the sigmoid function and the tanh function. The sigmoid function takes real values and squishes them to a range between 0 and 1, although it has problems handling activations near the extremes of the gradient, as the gradients there become almost zero. Meanwhile, the tanh function operates similarly to the sigmoid, except that its output is centered at zero and it squishes the values to between -1 and 1.

Training and Testing

There are two different phases to creating and implementing a deep neural network: training and testing.
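Before moving on, the behavior of the three activation functions just compared (ReLU, sigmoid, and tanh) can be made concrete by evaluating them on a few sample values; the inputs here are arbitrary:

```python
import torch

x = torch.tensor([-2.0, 0.0, 3.0])

relu_out = torch.relu(x)        # negatives become 0, positives pass through
sigmoid_out = torch.sigmoid(x)  # squashed into (0, 1); sigmoid(0) is 0.5
tanh_out = torch.tanh(x)        # squashed into (-1, 1), centered at zero

print(relu_out.tolist())        # [0.0, 0.0, 3.0]
```

Notice how ReLU leaves the positive value untouched, while sigmoid and tanh compress every input into a bounded range.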
The training phase is where the network is fed the data and begins to learn the patterns that the data contains, adjusting the weights of the network, which are assumptions about how the data points are related to each other. To put that another way, the training phase is where the network "learns" about the data it has been fed.

The testing phase is where what the network has learned is evaluated. The network is given a new set of data, one it hasn't seen before, and asked to apply its guesses about the patterns it has learned to the new data. The accuracy of the model is evaluated, and typically the model is tweaked, retrained, and retested until the architect is satisfied with its performance.

In the case of transfer learning, the network that is used has been pretrained. The network's weights have already been adjusted and saved, so there's no reason to train the entire network again from scratch. This means that the network can immediately be used for testing, or just certain layers of the network can be tweaked and then retrained. This greatly speeds up the deployment of the deep neural network.

What is Transfer Learning?

The idea behind transfer learning is taking a model trained on one task and applying it to a second, similar task. The fact that a model has already had some or all of its weights trained for the second task means that the model can be implemented much more quickly. This allows rapid performance assessment and model tuning, enabling quicker deployment overall. Transfer learning is becoming increasingly popular in the field of deep learning, thanks to the vast amount of computational resources and time needed to train deep learning models on large, complex datasets. The primary constraint of transfer learning is that the features the model learned during the first task must be general, and not specific to the first task.
In practice, this means that models trained to recognize certain types of images can be reused to recognize other images, as long as the general features of the images are similar.

Transfer Learning Theory

The utilization of transfer learning involves several important concepts. In order to understand the implementation of transfer learning, we need to go over what a pretrained model looks like, and how that model can be fine-tuned for your needs.

There are two ways to choose a model for transfer learning. You can create a model from scratch for your own needs, save the model's parameters and structure, and then reuse the model later. Alternatively, you can take an already existing model and reuse it, tuning its parameters and hyperparameters as you do so. In this instance, we will be using a pretrained model and modifying it.

After you've decided what approach you want to use, choose a model (if you are using a pretrained model). There is a large variety of pretrained models that can be used in PyTorch. Some of the pretrained CNNs include:

- AlexNet
- CaffeResNet
- Inception
- The ResNet series
- The VGG series

These pretrained models are accessible through PyTorch's API and, when instructed, PyTorch will download their specifications to your machine. The specific model we are going to be using is ResNet34, part of the ResNet series. The ResNet model was developed and trained on an ImageNet dataset as well as the CIFAR-10 dataset. As such it is optimized for visual recognition tasks, and showed a marked improvement over the VGG series, which is why we will be using it. However, other pretrained models exist, and you may want to experiment with them to see how they compare.

As PyTorch's documentation on transfer learning explains, there are two major ways that transfer learning is used: fine-tuning a CNN, or using the CNN as a fixed feature extractor.
When fine-tuning a CNN, you use the weights the pretrained network comes with instead of randomly initializing them, and then you train as normal. In contrast, the feature extractor approach means that you maintain all the weights of the CNN except for those in the final few layers, which are initialized randomly and trained as normal.

Fine-tuning a model is important because, although the model has been pretrained, it has been trained on a different (though hopefully similar) task. The densely connected weights that the pretrained model comes with will probably be somewhat insufficient for your needs, so you will likely want to retrain the final few layers of the network. In contrast, because the first few layers of the network are just feature extraction layers, and they will perform similarly on similar images, they can be left as they are. Therefore, if the dataset is small and similar, the only training that needs to be done is the training of the final few layers. The larger and more complex the dataset gets, the more of the model will need to be retrained. Remember that transfer learning works best when the dataset you are using is smaller than the one the pretrained model was trained on, and similar to the images fed to the pretrained model.

Working with transfer learning models in PyTorch means choosing which layers to freeze and which to unfreeze. Freezing a model means telling PyTorch to preserve the parameters (weights) in the layers you've specified. Unfreezing a model means telling PyTorch you want the layers you've specified to be available for training, to have their weights trainable.

After you've concluded training your chosen layers of the pretrained model, you'll probably want to save the newly trained weights for future use. Even though using a pretrained model is faster than training a model from scratch, it still takes time to train, so you'll want to save a copy of the best model weights.
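The weight-copying and saving just described usually goes through a model's state_dict. Here is a minimal sketch of that pattern, using a small nn.Linear stand-in instead of an actual pretrained network (the file name is an arbitrary choice):

```python
import copy
import torch
import torch.nn as nn

# Stand-in for the pretrained network (a real run would use e.g. resnet34).
model = nn.Linear(4, 2)

# Snapshot the weights, as is done inside a training loop whenever
# validation accuracy improves.
best_wts = copy.deepcopy(model.state_dict())

# Persist them for future use, then restore into a fresh model instance.
torch.save(best_wts, "best_model.pt")

restored = nn.Linear(4, 2)
restored.load_state_dict(torch.load("best_model.pt"))

# Both models now produce identical outputs for the same input.
x = torch.randn(3, 4)
print(torch.equal(model(x), restored(x)))
```

The same pattern works unchanged for a full ResNet; only the model constructor differs.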
The Stanford Cats and Dogs dataset is a very commonly used dataset, chosen for how simple yet illustrative it is. You can download this right here. Be sure to divide the dataset into two equally sized sets: "train" and "val".

You can do this any way that you would like, by manually moving the files or by writing a function to handle it. You may also want to limit the dataset to a smaller size, as it comes with almost 12,000 images in each category, and this will take a long time to train. You may want to cut that number down to around 5000 in each category, with 1000 set aside for validation. However, the number of images you want to use for training is up to you.

Here's one way to prepare the data for use:

import os
import shutil
import re

base_dir = "PetImages/"

# Create training folder
files = os.listdir(base_dir)

# Moves all training cat images to cats folder, training dog images to dogs folder
def train_maker(name):
    train_dir = f"{base_dir}/train/{name}"
    for f in files:
        search_object = re.search(name, f)
        if search_object:
            shutil.move(f'{base_dir}/{f}', train_dir)

train_maker("Cat")
train_maker("Dog")

# Make the validation directories
try:
    os.makedirs("val/Cat")
    os.makedirs("val/Dog")
except OSError:
    print("Creation of the validation directories failed")
else:
    print("Successfully created the validation directories")

# Create validation folder
cat_train = base_dir + "train/Cat/"
cat_val = base_dir + "val/Cat/"
dog_train = base_dir + "train/Dog/"
dog_val = base_dir + "val/Dog/"

cat_files = os.listdir(cat_train)
dog_files = os.listdir(dog_train)

# This will put 1000 images from the two training folders
# into their respective validation folders
for f in cat_files:
    validationCatsSearchObj = re.search("5\d\d\d", f)
    if validationCatsSearchObj:
        shutil.move(f'{cat_train}/{f}', cat_val)

for f in dog_files:
    validationDogsSearchObj = re.search("5\d\d\d", f)
    if validationDogsSearchObj:
        shutil.move(f'{dog_train}/{f}', dog_val)

After we have selected and prepared the data, we can
start off by importing all the necessary libraries. We'll need many of the Torch packages, like the nn neural network module, the optimizers, and the DataLoaders. We'll also want matplotlib to visualize some of our training examples, and numpy to handle the creation of data arrays, as well as a few other miscellaneous modules:

from __future__ import print_function, division

import torch
import torch.nn as nn
import torch.optim as optim
from torch.optim import lr_scheduler
import torchvision
from torchvision import datasets, models, transforms
import matplotlib.pyplot as plt
import numpy as np
import time
import os
import copy

To start off with, we need to load in our training data and prepare it for use by our neural network. We're going to be making use of PyTorch's transforms for that purpose. We'll need to make sure the images in the training set and validation set are the same size, so we'll be using transforms.Resize. We'll also be doing a little data augmentation, trying to improve the performance of our model by forcing it to learn about images at different angles and crops, so we'll randomly crop and rotate the images. Next, we'll make tensors out of the images, as PyTorch works with tensors. Finally, we'll normalize the images, which helps the network work with values that may have a wide range. We then compose all our chosen transforms.
Note that the validation transforms don't do any of the flipping or rotating, since the validation images aren't part of our training set and the network shouldn't be learning from augmented versions of them:

# Make transforms and use data loaders

# We'll use these a lot, so make them variables
mean_nums = [0.485, 0.456, 0.406]
std_nums = [0.229, 0.224, 0.225]

chosen_transforms = {'train': transforms.Compose([
        transforms.RandomResizedCrop(size=256),
        transforms.RandomRotation(degrees=15),
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor(),
        transforms.Normalize(mean_nums, std_nums)
]), 'val': transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean_nums, std_nums)
]),
}

Now we will set the directory for our data and use PyTorch's ImageFolder function to create datasets:

# Set the directory for the data
data_dir = '/data/'

# Use the image folder function to create datasets
chosen_datasets = {x: datasets.ImageFolder(os.path.join(data_dir, x),
                   chosen_transforms[x])
                   for x in ['train', 'val']}

Now that we have chosen the image folders we want, we need to use the DataLoaders to create iterable objects for us to work with. We tell it which datasets we want to use, give it a batch size, and shuffle the data.

# Make iterables with the dataloaders
dataloaders = {x: torch.utils.data.DataLoader(chosen_datasets[x], batch_size=4,
               shuffle=True, num_workers=4)
               for x in ['train', 'val']}

We're going to need to preserve some information about our dataset, specifically the size of the dataset and the names of the classes in our dataset. We also need to specify what kind of device we are working with, a CPU or GPU. The following setup will use a GPU if available, otherwise the CPU:

dataset_sizes = {x: len(chosen_datasets[x]) for x in ['train', 'val']}
class_names = chosen_datasets['train'].classes

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

Now let's try visualizing some of our images with a function.
We'll take an input, create a NumPy array from it, and transpose it. Then we'll normalize the input using the mean and standard deviation. Finally, we'll clip the values to between 0 and 1 so there isn't a massive range in the possible values of the array, and then show the image:

# Visualize some images
def imshow(inp, title=None):
    inp = inp.numpy().transpose((1, 2, 0))
    mean = np.array([mean_nums])
    std = np.array([std_nums])
    inp = std * inp + mean
    inp = np.clip(inp, 0, 1)
    plt.imshow(inp)
    if title is not None:
        plt.title(title)
    plt.pause(0.001)  # Pause a bit so that plots are updated

Now let's use that function and actually visualize some of the data. We're going to get the inputs and the names of the classes from the DataLoader and store them for later use. Then we'll make a grid to display the inputs on and display them:

# Grab some of the training data to visualize
inputs, classes = next(iter(dataloaders['train']))

# Now we construct a grid from batch
out = torchvision.utils.make_grid(inputs)

imshow(out, title=[class_names[x] for x in classes])

Setting up a Pretrained Model

Now we have to set up the pretrained model we want to use for transfer learning. In this case, we're going to use the model as is and just reset the final fully connected layer, providing it with our number of features and classes. When using pretrained models, PyTorch sets the model to be unfrozen (its weights will be adjusted) by default, so we'll be training the whole model:

# Setting up the model
# load in pretrained and reset final fully connected
res_mod = models.resnet34(pretrained=True)

num_ftrs = res_mod.fc.in_features
res_mod.fc = nn.Linear(num_ftrs, 2)

If this still seems somewhat unclear, visualizing the composition of the model may help.

for name, child in res_mod.named_children():
    print(name)

Here's what that returns:

conv1
bn1
relu
maxpool
layer1
layer2
layer3
layer4
avgpool
fc

Notice the final portion is fc, or "Fully-Connected".
This is the only layer whose shape we are modifying, giving it our two classes to output. Essentially, we're going to change the outputs of the final fully connected portion to just two classes, and adjust the weights for all the other layers.

Now we need to send our model to our training device. We also need to choose the loss criterion and optimizer we want to use with the model. CrossEntropyLoss and the SGD optimizer are good choices, though there are many others. We'll also be choosing a learning rate scheduler, which decreases the learning rate of the optimizer over time and helps prevent non-convergence due to large learning rates. You can learn more about learning rate schedulers here if you are curious:

res_mod = res_mod.to(device)
criterion = nn.CrossEntropyLoss()

# Observe that all parameters are being optimized
optimizer_ft = optim.SGD(res_mod.parameters(), lr=0.001, momentum=0.9)

# Decay LR by a factor of 0.1 every 7 epochs
exp_lr_scheduler = lr_scheduler.StepLR(optimizer_ft, step_size=7, gamma=0.1)

Now we just need to define the functions that will train the model and visualize the predictions.

Let's start off with the training function. It will take in our chosen model as well as the optimizer, criterion, and scheduler we chose. We'll also specify a default number of training epochs. Every epoch will have a training and validation phase. To begin with, we set the model's initial best weights to those of the pretrained model, by using state_dict.
Now, for every epoch in the chosen number of epochs, if we are in the training phase, we will:

- Step the learning rate scheduler
- Zero the gradients
- Carry out the forward training pass
- Calculate the loss
- Do backward propagation and update the weights with the optimizer

We'll also be keeping track of the model's accuracy during the training phase, and if we move to the validation phase and the accuracy has improved, we'll save the current weights as the best model weights:

def train_model(model, criterion, optimizer, scheduler, num_epochs=10):
    since = time.time()

    best_model_wts = copy.deepcopy(model.state_dict())
    best_acc = 0.0

    for epoch in range(num_epochs):
        print('Epoch {}/{}'.format(epoch, num_epochs - 1))
        print('-' * 10)

        # Each epoch has a training and validation phase
        for phase in ['train', 'val']:
            if phase == 'train':
                scheduler.step()
                model.train()  # Set model to training mode
            else:
                model.eval()  # Set model to evaluate mode

            current_loss = 0.0
            current_corrects = 0

            # Here's where the training happens
            print('Iterating through data...')

            for inputs, labels in dataloaders[phase]:
                inputs = inputs.to(device)
                labels = labels.to(device)

                # We need to zero the gradients, don't forget it
                optimizer.zero_grad()

                # Time to carry out the forward training pass
                # We only need to log the loss stats if we are in training phase
                with torch.set_grad_enabled(phase == 'train'):
                    outputs = model(inputs)
                    _, preds = torch.max(outputs, 1)
                    loss = criterion(outputs, labels)

                    # backward + optimize only if in training phase
                    if phase == 'train':
                        loss.backward()
                        optimizer.step()

                # We want variables to hold the loss statistics
                current_loss += loss.item() * inputs.size(0)
                current_corrects += torch.sum(preds == labels.data)

            epoch_loss = current_loss / dataset_sizes[phase]
            epoch_acc = current_corrects.double() / dataset_sizes[phase]

            print('{} Loss: {:.4f} Acc: {:.4f}'.format(
                phase, epoch_loss, epoch_acc))

            # Make a copy of the model if the accuracy on the validation set has improved
            if phase == 'val' and epoch_acc > best_acc:
                best_acc = epoch_acc
                best_model_wts = copy.deepcopy(model.state_dict())

        print()

    time_since = time.time() - since
    print('Training complete in {:.0f}m {:.0f}s'.format(
        time_since // 60, time_since % 60))
    print('Best val Acc: {:4f}'.format(best_acc))

    # Now we'll load in the best model weights and return it
    model.load_state_dict(best_model_wts)
    return model

Our training printouts should look something like this:

Epoch 0/25
----------
Iterating through data...
train Loss: 0.5654 Acc: 0.7090
Iterating through data...
val Loss: 0.2726 Acc: 0.8889

Epoch 1/25
----------
Iterating through data...
train Loss: 0.5975 Acc: 0.7090
Iterating through data...
val Loss: 0.2793 Acc: 0.8889

Epoch 2/25
----------
Iterating through data...
train Loss: 0.5919 Acc: 0.7664
Iterating through data...
val Loss: 0.3992 Acc: 0.8627

Visualization

Now we'll create a function that will let us see the predictions our model has made.

def visualize_model(model, num_images=6):
    was_training = model.training
    model.eval()
    images_handled = 0
    fig = plt.figure()

    with torch.no_grad():
        for i, (inputs, labels) in enumerate(dataloaders['val']):
            inputs = inputs.to(device)
            labels = labels.to(device)

            outputs = model(inputs)
            _, preds = torch.max(outputs, 1)

            for j in range(inputs.size()[0]):
                images_handled += 1
                ax = plt.subplot(num_images // 2, 2, images_handled)
                ax.axis('off')
                ax.set_title('predicted: {}'.format(class_names[preds[j]]))
                imshow(inputs.cpu().data[j])

                if images_handled == num_images:
                    model.train(mode=was_training)
                    return
    model.train(mode=was_training)

Now we can tie everything together. We'll train the model on our images and show the predictions:

base_model = train_model(res_mod, criterion, optimizer_ft, exp_lr_scheduler, num_epochs=3)
visualize_model(base_model)
plt.show()

That training will probably take a long while if you are using a CPU and not a GPU. It will still take some time even with a GPU.
Fixed Feature Extractor

It is due to the long training time that many people choose to simply use the pretrained model as a fixed feature extractor, and only train the last layer or so. This significantly speeds up training time. In order to do that, you'll need to replace the model we've built. There will be a link to a GitHub repo for both versions of the ResNet implementation.

Replace the section where the pretrained model is defined with a version that freezes the weights and doesn't carry out gradient calculations or backprop. It looks quite similar to before, except that we specify that the gradients don't need computation:

# Setting up the model
# Note that the parameters of imported models are set to requires_grad=True by default
res_mod = models.resnet34(pretrained=True)
for param in res_mod.parameters():
    param.requires_grad = False

num_ftrs = res_mod.fc.in_features
res_mod.fc = nn.Linear(num_ftrs, 2)

res_mod = res_mod.to(device)
criterion = nn.CrossEntropyLoss()

# Here's another change: instead of all parameters being optimized
# only the params of the final layers are being optimized
optimizer_ft = optim.SGD(res_mod.fc.parameters(), lr=0.001, momentum=0.9)

exp_lr_scheduler = lr_scheduler.StepLR(optimizer_ft, step_size=7, gamma=0.1)

What if we wanted to selectively unfreeze layers and have the gradients computed for just a few chosen layers? Is that possible? Yes, it is.
Let's print out the children of the model again to remember what layers/components it has:

for name, child in res_mod.named_children():
    print(name)

Here's the layers:

conv1
bn1
relu
maxpool
layer1
layer2
layer3
layer4
avgpool
fc

Now that we know what the layers are, we can unfreeze the ones we want, like just layers 3 and 4:

for name, child in res_mod.named_children():
    if name in ['layer3', 'layer4']:
        print(name + ' has been unfrozen.')
        for param in child.parameters():
            param.requires_grad = True
    else:
        for param in child.parameters():
            param.requires_grad = False

Of course, we'll also need to update the optimizer to reflect the fact that we only want to optimize certain layers.

optimizer_conv = torch.optim.SGD(filter(lambda x: x.requires_grad, res_mod.parameters()), lr=0.001, momentum=0.9)

So now you know that you can tune the entire network, just the last layer, or something in between.

Conclusion

Congratulations, you've now implemented transfer learning in PyTorch. It would be a good idea to compare the implementation of a tuned network with the use of a fixed feature extractor to see how the performance differs. Experimenting with freezing and unfreezing certain layers is also encouraged, as it lets you get a better sense of how you can customize the model to fit your needs. Here are some other things you can try:

- Using different pretrained models to see which ones perform better under different circumstances
- Changing some of the arguments of the model, like adjusting learning rate and momentum
- Trying classification on a dataset with more than two classes

If you're curious to learn more about different transfer learning applications and the theory behind it, there's an excellent breakdown of some of the math behind it as well as use cases here. The code for this article can be found in this GitHub repo.
https://stackabuse.com/image-classification-with-transfer-learning-and-pytorch/
#13 Posted 23 October 2012 - 11:52 AM

Are you sure that you're supposed to scramble each word, or just scramble the list?

My Blog: "Women and Music: I'm always amazed by other people's choices." - David Lee Roth

#14 Posted 23 October 2012 - 01:29 PM

Are you sure that you're supposed to scramble each word, or just scramble the list?

Yes, scramble each word independently. However, now I have a new issue: I need to trim the whitespace out if a user enters a guess like "gussing ". Here is all my code; everything works except the trim function, and I'm not sure how to implement it. Thank you again!

#include <string.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <ctype.h>

char *scrambled;
char *original;
char input[20];

void trim(char *string){
    int len = strlen(string)+1;
    int i, space = 0, c;
    char *fix = (char *) malloc(len);
    for(i=0;i<len;i++){
        c = fix[i];
        if(isspace(c)){
            space = 1;
        }
    }
    printf("\ntestrrr: %d", space);
}

void scramble(char *string){
    scrambled = (char *) malloc(strlen(string)+1);
    char temp;
    int num1, num2, i;
    strcpy(scrambled, string);
    srand(time(NULL));
    for(i = 0; i <= 200; i++){
        //here I'll randomize and scramble the word
        num1 = rand()%(strlen(string));
        num2 = rand()%(strlen(string));
        temp = scrambled[num1];
        scrambled[num1] = scrambled[num2];
        scrambled[num2] = temp;
    }
}

void scrambleGame(char *original, char *scrambled){
    printf("\n\nOriginal: %s Scrambled: %s", original, scrambled);
    int i = 0, correct = 0;
    printf("\n\nYou have three guesses..GOOD LUCK!\n\nUnscramble this word: %s", scrambled);
    while(i < 3 && correct == 0){
        printf("\n\n\n\nYour Guess: ");
        gets(input);
        trim(input);
        if(strcmp(original, input) == 0){
            printf("\n\nCongratulations You Win This Round!");
            correct++;
        }
        else if(i != 2){
            printf("\nNOPE..Try again");
        }
        i++;
    }
    if(correct == 0){
        printf("\n\nSorry your a loser this round");
    }
}

void main(){
    int select, done = 0;
    const int totalWords = 32;
    char *words[32] = {
        "pumpkin", "cantalope", "watermelon", "apple", "kumquat", "sixteen", "blue", "phone",
        "juice", "notebook", "telephone", "baseball", "video", "programming", "string", "function",
        "time", "table", "paper", "elevator", "movie", "computer", "soda", "light",
        "photography", "mexico", "africa", "united", "legal", "traffic", "glass", "learning"};
    char check[20];
    printf("Welcome to the Word scrambler Game. You will be shown a scrambled word. unscramble the letters and guess the word. You have three tries per word.");
    srand(time(NULL));
    select = rand()%(totalWords);
    original = words[select];
    scramble(original);
    scrambleGame(original, scrambled);
    while(done != 1){
        printf("\n\nWould you like to play again? 'Y' or 'N' ");
        gets(check);
        if(strcmp("Y", check) == 0){
            srand(time(NULL));
            select = rand()%(totalWords);
            original = words[select];
            scramble(original);
            scrambleGame(original, scrambled);
            check[0] = 'X';
        }
        else{
            done = 1;
        }
    }
}

Your Friendly Neighborhood Pally

#15 Posted 23 October 2012 - 01:45 PM

Either use strtok() to extract the string as a token delimited by whitespace characters, or implement it with loops. isspace() from ctype.h will be of some help to you here.

Hofstadter's Law: It always takes longer than you expect, even when you take into account Hofstadter's Law. – Douglas Hofstadter, Gödel, Escher, Bach: An Eternal Golden Braid
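For reference, the loop-based route the last reply suggests could look something like the following sketch. The in-place design and the name trim are just one way to do it, not code from the thread; note it also strips the newline that input functions can leave behind:

```c
#include <ctype.h>
#include <string.h>

/* Strip leading and trailing whitespace in place; returns its argument. */
char *trim(char *s)
{
    size_t len = strlen(s);

    /* Drop trailing whitespace (covers a trailing '\n' as well). */
    while (len > 0 && isspace((unsigned char)s[len - 1]))
        s[--len] = '\0';

    /* Find the first non-whitespace character... */
    char *start = s;
    while (isspace((unsigned char)*start))
        start++;

    /* ...and slide the remainder, plus its terminator, to the front. */
    if (start != s)
        memmove(s, start, strlen(start) + 1);

    return s;
}
```

Unlike the version in the question, no malloc is needed: the trimmed string is never longer than the original, so it can be rewritten in its own buffer.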
http://forum.codecall.net/topic/72571-passing-strings-randomizing/page-2
Attaches a shared memory segment or a mapped file to the current process.

Standard C Library (libc.a)

#include <sys/shm.h>

void *shmat (SharedMemoryID, SharedMemoryAddress, SharedMemoryFlag)
int SharedMemoryID, SharedMemoryFlag;
const void *SharedMemoryAddress;

The shmat subroutine attaches the shared memory segment or mapped file specified by the SharedMemoryID parameter (returned by the shmget subroutine), or the file descriptor specified by the SharedMemoryID parameter (returned by the openx subroutine), to the address space of the calling process. The following limits apply to shared memory:

Note: The following applies to AIX Version 4.2.1 and later releases, for 32-bit processes only. An extended shmat capability is available. If the environment variable EXTSHM=ON is defined, then processes executing in that environment will be able to create and attach more than eleven shared memory segments. The segments can be of size from 1 byte to 2GB, although for segments larger than 256MB in size the environment variable EXTSHM=ON is ignored. The process can attach these segments into the address space for the size of the segment, and another segment can be attached at the end of the first one in the same 256MB segment region. The address at which a process can attach is at page boundaries, a multiple of SHMLBA_EXTSHM bytes. For segments larger than 256MB in size, the address at which a process can attach is at 256MB boundaries, a multiple of SHMLBA bytes.

The segments can be of size from 1 byte to 256MB. The process can attach these segments into the address space for the size of the segment, and another segment can be attached at the end of the first one in the same 256MB segment region. The address at which a process can attach will be at page boundaries, a multiple of SHMLBA_EXTSHM bytes.

The maximum address space available for shared memory (with or without the environment variable) and for memory mapping is 2.75GB.
An additional segment register "0xE" is available, so that the address space is from 0x30000000 to 0xE0000000. However, a 256MB region starting from 0xD0000000 will be used by the shared libraries and is therefore unavailable for shared memory regions or mmapped regions.

There are some restrictions on the use of the extended shmat feature. These shared memory regions can not be used as I/O buffers where the unpinning of the buffer occurs in an interrupt handler. The restrictions on the use are the same as those on mmap buffers. The smaller region sizes are not supported for mapping files. Regardless of whether EXTSHM=ON or not, mapping a file will consume at least 256MB of address space.

The SHM_SIZE shmctl command is not supported for segments created with EXTSHM=ON. A segment created with EXTSHM=ON can be attached by a process without EXTSHM=ON. This will consume a 256MB area of the address space irrespective of the size of the shared memory region. A segment created without EXTSHM=ON can be attached by a process with EXTSHM=ON. This will also consume a 256MB area of the address space irrespective of the size of the shared memory region.

The environment variable provides the option of executing an application either with the additional functionality of attaching more than 11 segments when EXTSHM=ON, or with the higher-performance access to 11 or fewer segments when the environment variable is not set.

When successful, the segment start address of the attached shared memory segment or mapped file is returned. Otherwise, the shared memory segment is not attached, the errno global variable is set to indicate the error, and a value of -1 is returned. The shmat subroutine is unsuccessful, and the shared memory segment or mapped file is not attached, if one or more of the following are true: This subroutine is part of Base Operating System (BOS) Runtime.
The exec subroutine, exit subroutine, fclear subroutine, fork subroutine, fsync subroutine, mmap subroutine, munmap subroutine, openx subroutine, truncate subroutine, readvx subroutine, shmctl subroutine, shmdt subroutine, shmget subroutine, writevx subroutine. List of Memory Manipulation Services, Subroutines Overview, Understanding Memory Mapping in AIX Version 4.3 General Programming Concepts: Writing and Debugging Programs.
https://sites.ualberta.ca/dept/chemeng/AIX-43/share/man/info/C/a_doc_lib/libs/basetrf2/shmat.htm
Hello, I am creating a plugin with two file editors. The first editor is the implicit one, and the second is my graphic designer. The first editor is associated with .jsl files, which basically contain the content of a standard XML Job Definition File for Batch Processing. What I want to do is use the standard syntax highlighting, code completion and refactoring that is provided by default for XML files with this namespace "", but in addition I need to add an extra ID attribute to one of the elements. Is there a simple way to do it, please?

Yes, this is possible using DomExtender and adding additional DOM for the element. You can find the existing DOM for JEE Batch in the package com.intellij.batch; unfortunately, it's not open source.

I have another question connected with this namespace. Is it true that the Community Edition of IDEA doesn't have native support for things like code completion, refactoring and error highlighting for batch jobs with the namespace I mentioned earlier?

Indeed, Community Edition does not support any of the JEE features, please see
https://intellij-support.jetbrains.com/hc/en-us/community/posts/206108179-Is-there-a-simple-way-how-to-customize-one-of-predefined-XML-Schemas-for-XmlLikeFileType-
Hopefully, this will be simple: I have a document comprising two pages of single-line paragraphs, i.e. each line consists of only a few words with a carriage return at the end. I want to highlight an entire line of text (sans carriage return), copy it to the clipboard and paste it. The thing is, Word insists on highlighting the return at the end of the row whenever I use the mouse. It doesn't matter whether I drag from left to right or from right to left. I've turned off the "smart" settings in the Edit tab of the Tools > Options menu, but Word still insists that I really want that stinkin' return (which I most definitely do not). Is there a way to turn it off?
http://windowssecrets.com/forums/showthread.php/55970-Auto-hassle-(Word-XP)
The script creates a series of locators that define a vector and hooks it up to the initial velocity and spin of the rigid bodies, giving the artist an intuitive way to visualize the direction the rigid body will follow. Tested on Maya 2011 and above on Mac and Windows; it should work on Linux, but I haven't tested it myself. To run this script, copy it to the Python-enabled scripts folder in your Maya directory and type the following command in the Python command line:

import dynamica_velocity_helpers_UI

Please use the Feature Requests to give me ideas. Please use the Support Forum if you have any questions or problems. Please rate and review in the Review section.
https://www.highend3d.com/maya/script/dynamica-bullet-velocity-helpers-for-maya
Apache OpenOffice (AOO) Bugzilla – Issue 37151 Inserting External Data not possible from TXT file Last modified: 2004-11-15 08:31:08 UTC

I am trying to insert data from a TXT file into the spreadsheet program. By going to Insert > External Data, I browse my local filesystem and find my .txt file, with tabs denoting the columns and carriage returns denoting the rows. The Text Import window appears and all of the settings are correct. I press OK to get back to the External Data window, but I cannot select the Rows or Columns to import (that field is empty/blank), and the OK button is greyed out. There is a slight workaround: going to Insert > Sheet and selecting from File. The same Text Import window appears with the same settings. This will work and write the data from the TXT document to the spreadsheet, but I need the self-updating feature of Insert > External Data to be working.

Hi, using the query facilities of Issuezilla would lead you to Issue 1834, which is a duplicate of this one. Frank

*** This issue has been marked as a duplicate of 1834 ***

closed double
https://bz.apache.org/ooo/show_bug.cgi?id=37151
In the previous post we learned how to build a multi-container application. On top of the website we built in the second post of the series, we added a Web API, which is leveraged by the web application to display the list of posts published on this blog. The two applications have been deployed in two different containers but, thanks to the bridge network offered by Docker, we've been able to make them communicate. In this post we're going to take two additional steps:

- We're going to add a new service to the solution. In all the previous posts we built custom images, since we needed to run inside a container an application we had built. This time, instead, we're going to use a service as it is: Redis.
- We're going to see how we can easily deploy our multi-container application. The approach we've seen in the previous post (using the docker run command on each container) isn't very practical in the real world, where all the containers must be deployed simultaneously and you may have hundreds of them.

Let's start!

Adding a Redis cache to our application

Redis is one of the most popular caching solutions on the market. At its core, it's an in-memory data store that holds key-value pairs. It's used to improve the performance and reliability of applications, thanks to its main features:

- It's an in-memory database, which makes all the writing and reading operations much faster compared to a traditional database which persists the data to disk.
- It supports replication.
- It supports clustering.

We're going to use it in our Web API to store the content of the RSS feed. This time, if the RSS feed has already been downloaded, we won't download it again but will retrieve it from the Redis cache. As a first step, we need to host Redis inside a container. The main difference compared to what we have done in the previous posts is that Redis is a service, not a framework or a platform.
Our web application will use it as it is; we don't need to build an application on top of it like we did with the .NET Core image. This means that we don't need to build a custom image, but can just use the official one provided by the Redis team. We can use the standard docker run command to initialize our container:

docker run --rm --network my-net --name rediscache -d redis

As usual, the first time we execute this command Docker will pull the image from Docker Hub. Notice how, also in this case (like we did for the Web API in the previous post), we aren't exposing the service on the host machine through a port. The reason is that the Redis cache will be leveraged only by the Web API, so we need it to be accessible only from the other containers. We have also set the name of the container (rediscache) and we have connected it to the bridge called my-net, which we created in the previous post. Feel free to run the docker ps command to check that everything is up & running:

PS C:\Users\mpagani\Source\Samples\NetCoreApi\WebSample> docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1475ccd3ec0a redis "docker-entrypoint.s…" 4 seconds ago Up 2 seconds 6379/tcp rediscache

Now we can start tweaking our Web API to use the Redis cache. Open with Visual Studio Code the Web API project we previously built and move to the NewsController.cs file. The easiest way to leverage a Redis cache in a .NET Core application is using a NuGet package called StackExchange.Redis.
To add it to your project, open the terminal and run:

dotnet add package StackExchange.Redis

Once the package has been installed, we can start changing the code of the Get() method declared in the NewsController class in the following way:

[HttpGet]
public async Task<ActionResult<IEnumerable<string>>> Get()
{
    ConnectionMultiplexer connection = await ConnectionMultiplexer.ConnectAsync("rediscache");
    var db = connection.GetDatabase();
    List<string> news = new List<string>();
    string rss = await db.StringGetAsync("feedRss");
    if (string.IsNullOrEmpty(rss))
    {
        HttpClient client = new HttpClient();
        rss = await client.GetStringAsync("");
        await db.StringSetAsync("feedRss", rss);
    }
    else
    {
        news.Add("The RSS has been returned from the Redis cache");
    }

    // ... parse the rss string and add its items to the news list
    // (parsing code omitted here) ...
    return news;
}

In order to let this code compile, you will need to add the following namespace at the top of the class:

using StackExchange.Redis;

In the first line we're using the ConnectionMultiplexer class to connect to the instance of the Redis cache we have created. Since we have connected the container to a custom network, we can reference it using its name, rediscache, which will be resolved by the internal DNS, as we learned in the previous post. Then we get a reference to the database by calling the GetDatabase() method. Once we have a database, using it is really simple, since it offers a set of get and set methods for the various data types supported by Redis. In our scenario we need to store the RSS feed, which is a string, so we use the StringGetAsync() and StringSetAsync() methods. First, we try to retrieve from the cache a value identified by the key feedRss. If it's null, it means that the cache is empty, so we need to download the RSS first. Once we have downloaded it, we store it in the cache with the same key. Just for testing purposes, in case the RSS is coming from the cache and not from the web, we add an extra item to the returned list.
This way, it will be easier for us to determine if our caching implementation is working. Now that we have finished our work, we can build an updated image as usual, by right-clicking on the Dockerfile in Visual Studio Code and choosing Build Image, or by running the following command:

docker build -t qmatteoq/testwebapi .

We don't need, instead, to change the code of the web application: the Redis cache will be completely transparent to it. We are ready to run our containers again. Also in this case, we're going to connect them to the same bridge as the Redis cache and we're going to assign a fixed name:

docker run --rm -p 8080:80 --name webapp --network my-net -d qmatteoq/testwebapp
docker run --rm --name newsfeed --network my-net -d qmatteoq/testwebapi

The first command launches the container with the website (thus, port 80 is exposed as port 8080 on the host machine), while the second one launches the Web API (which doesn't need to be exposed to the host, since it will be consumed only by the web application). Now open your browser and point it to. The first time it should take a few seconds to start, because we're performing the request against the online RSS feed. You should see, in the main page, just the list of posts from this blog: Now refresh the page. This time the operation should be much faster and, as the first post in the list, you should see the test item we added in code when the feed is retrieved from the Redis cache: Congratulations! You have added a new service to your multi-container application!

Deploy your application

If you think about using Docker containers in production or with a really complex application, you can easily understand all the limitations of the approach we have used so far. When you deploy an application, you need all the containers to start as soon as possible. Manually launching docker run for each of them isn't a practical solution. Let's introduce Docker Compose!
It's another command-line tool, included in Docker for Windows, which can be used to compose multi-container applications. In a YAML file, you describe the various services you need to run to boot your application. Then, using a simple command, Docker is able to automatically run or stop all the required containers. In this second part of the post we're going to use Docker Compose to automatically start all the containers required by our application: the web app, the Web API and the Redis cache. The first step is to create a new file called docker-compose.yml inside a folder. It doesn't have to be a folder which contains a specific project; it can be any folder, since Docker Compose works with existing images that you should have already built. Let's see the content of our YAML file:

version: '3'
services:
  web:
    image: qmatteoq/testwebapp
    ports:
      - "8080:80"
    container_name: webapp
  newsfeed:
    image: qmatteoq/webapitest
    container_name: newsfeed
  redis:
    image: redis
    container_name: rediscache

As you can see, the various commands are pretty easy to understand. version is used to set the version of the Docker Compose file format we want to use; in this case, we're using the latest one. Then we create a section called services, which specifies each service we need to run for our multi-container application. For each service, we can specify different parameters, based on the configuration we want to achieve. The relevant ones we use are:

- image, to define the image we want to use for this container.
- ports, to define which ports we want to expose to the host. It's the equivalent of the -p parameter of the docker run command.
- container_name, to define the name we want to assign to the container. It's the equivalent of the --name parameter of the docker run command.
Once we have built our Docker Compose file, we can run it by opening a terminal in the same folder and launching the following command:

PS C:\Users\mpagani\Source\Samples\NetCoreApi> docker-compose -f "docker-compose.yml" up -d --build
Creating network "netcoreapi_default" with the default driver
Creating newsfeed ... done
Creating webapp ... done
Creating rediscache ... done

Did you see what just happened? Docker Compose has automatically created a new network bridge called netcoreapi_default for us and it has attached all the containers to it. If, in fact, you execute the docker network ls command, you will find this new bridge in the list:

7d245b95b7c4 netcoreapi_default bridge local
a29b7408ba22 none null local

Thanks to this approach, we didn't have to do anything special to create a custom network and assign all the containers to it, as we instead did in the previous post. If you remember, we had to manually create a new network and then, with the docker run command, add some additional parameters to make sure that the containers were connected to it instead of the default Docker bridge. This way, we don't have to worry about the DNS. As long as the container names we have specified inside the Docker Compose file are the same ones we use in our applications, we're good to go! Docker Compose, under the hood, uses the standard Docker commands. As such, if you run docker ps, you will simply see the 3 containers described in the YAML file up & running:

PS C:\Users\mpagani\Source\Samples\NetCoreApi> docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ce786b5fc5f9 qmatteoq/testwebapp "dotnet TestWebApp.d…" 11 minutes ago Up 11 minutes 0.0.0.0:8080->80/tcp webapp
2ae1e86f8ca4 redis "docker-entrypoint.s…" 11 minutes ago Up 11 minutes 6379/tcp rediscache
2e2f08219f68 qmatteoq/webapitest "dotnet TestWebApi.d…" 11 minutes ago Up 11 minutes 80/tcp newsfeed

We can notice how the configuration we have specified has been respected.
All the containers have the expected names, and only the one which hosts the web application exposes port 80 on the host. Now just open the browser again and point it to to see the usual website: If you want to stop your multi-container application, you can just use the docker-compose down command, instead of having to stop all the containers one by one:

PS C:\Users\mpagani\Source\Samples\NetCoreApi> docker-compose down
Stopping webapp ... done
Stopping rediscache ... done
Stopping newsfeed ... done
Removing webapp ... done
Removing rediscache ... done
Removing newsfeed ... done
Removing network netcoreapi_default

As you can see, Docker Compose leaves the Docker ecosystem in a clean state. Other than stopping the containers, it also takes care of removing them and of removing the network bridge. If you run the docker network ls command again, in fact, you will no longer find the one called netcoreapi_default:

a29b7408ba22 none null local

In a similar way to what we have seen with Dockerfiles and building images, also in this case Visual Studio Code makes it easier to work with Docker Compose files. Other than providing IntelliSense and real-time documentation, we can right-click on the docker-compose.yml file in the File Explorer and have direct access to the basic commands:

Wrapping up

In this post we have concluded the development of our multi-container application. We have added a new layer, a Redis cache, and learned how to consume an existing image as it is. Then we have learned how, thanks to Docker Compose, it's easy to deploy a multi-container application and make sure that all the services are instantiated with the right configuration. You can find the complete sample project used during the various posts on GitHub. Happy coding!
https://blogs.msdn.microsoft.com/appconsult/2018/09/14/introduction-to-docker-deploy-a-multi-container-application/
I am a very beginner. Tell me how to compile a very simple code of Java??

Type: Posts; User: alialomran

Where are the errors?

public class Q3_StudentID {
    public static void main(String[] args) {
        int x, y, my Num = 0;
        double d = 10 + x;
        x = 3;
        y = d + 2;
        if (! x>5)
            System.out.println('Result1 = '...

If I want the answer, where can I place the code?

a. Write a Java class (program) that reads from the user the temperature in degrees Celsius as a real number. Then calculates and prints the equivalent temperature in degrees Fahrenheit (rounded to 1...

Hi All, I am a very beginner to Java. I would like to ask if it is possible to ask very elementary questions about Java?
Thanks & Regards,
http://www.javaprogrammingforums.com/search.php?s=f9caf85280b4c70793aa73fc6f9ef140&searchid=1273580
Upbeat about Updates

More noteworthy items from the Linux audio world, including news about some long-awaited releases.

MusE 1.0rc2

MusE has had an uneven development history. The project first took shape as an audio/MIDI sequencer with notation capabilities until Werner Schweer (MusE's original designer) extracted the notation parts and turned them into the excellent MuseScore (MScore). Alas, directed work on MusE slowed for a while, but now we have a revived and revivified MusE 1.0 rc2 (Figure 1). As the rc2 indicates, this version is the second release candidate, so if no further egregious bugs or annoyances are found, this version will serve as the 1.0 public release. The MusE developers encourage users to stress test this candidate as hard as possible. MusE follows a fairly typical design for a modern audio/MIDI sequencer. Its Linux-specific features include support for the ALSA and JACK audio/MIDI systems (including the recent JackMIDI), LADSPA and other plugin architectures, and the LASH session handler. As Figure 1 illustrates, audio data is represented by amplitude waveform displays, while MIDI data is ordered in a common piano-roll display. Multichannel I/O is supported for audio and MIDI streams, and device synchronization can be handled with JACK, MIDI clock or MMC (MIDI Machine Control) messages. Unfortunately, an in-depth review will have to wait. Meanwhile, the release candidate is ready for testers, so visit the Web site, download and install the package, then put it under some stress. And, be sure to file your reports to the developers; you'll sleep better knowing you've done the right thing.

SuperCollider 3.3

The latest release of James McCartney's SuperCollider3 is luring me back into the fold. Version 3.3 is filled with new features and bug fixes, but its strongest attraction for me is its use of the SwingOSC software to provide a graphics component common to all SuperCollider-friendly platforms.
This development is a work-in-progress with the eventual goal of supporting widget sets currently available only to Mac users, and when that goal is reached, it should be possible to run graphics-enabled SuperCollider code on any supported platform. Figure 2 shows an example GUI project with sliders for controlling the reverb, resonance and volume of a synthesizer design. The interface is built from Java graphics primitives and wears the familiar GTK look and feel (well, it's what Java thinks is a GTK look and feel). Communication between the GUI and the synthesizer is handled by OSC messages, and the audio output is immediately responsive to the slider movements. It's all very cool stuff, especially when one considers the large body of SuperCollider3 code that includes integral GUIs. And, I hope to see full support for more widget sets, such as the excellent ixiQuarks, a set of GUI components designed for live coding and other improvisational audio work. Other neat features include coding environment enhancements for emacs and vi/vim, MIDI I/O, better support for 64-bit environments and some new and/or improved ugens (SC3-speak for unit generators). For the complete list of improvements in SuperCollider3, see the Changelog in the source code's build directory. Speaking of source code, I built version 3.3 on an OpenSUSE 10.2 system with no troubles. SuperCollider3 utilizes the scons build system, so I simply ran this command to compile the program:

scons QUARKS=1 PREFIX=/home/dlphilp/

Quarks are addons that provide a wide range of audio and graphics functions; you definitely want the build to include the quarks. The PREFIX has been set to my local home, although, of course, you can install SuperCollider anywhere you prefer. However, be sure you read the documentation regarding expected paths and locations, or you may find yourself with a complete but non-working installation.
If you prefer a binary installation, you can download DEB and RPM packages from the SuperCollider3 Web site, or check your distribution's software repositories to see if they've updated to the latest and greatest version. I set up emacs to recognize and support SuperCollider code, established my JACK and SwingOSC settings in ~/.sclang.sc, opened a test file, evaluated it and got the results shown in Figure 2. Alas, I wasn't always so fortunate, and there still are many SC3 projects that don't run so easily in Linux. However, I'll cheerfully admit that I'm still learning how to use SuperCollider, and the remaining difficulties may well be due to this problematic user. There are a lot of neat features to explore in this release, so if you've considered getting into SuperCollider3, now is the time.

Jackbeat

The simple appearance of Olivier Guilyardi's Jackbeat (Figure 3) might deceive you. It seems to be a small-scale rhythm programmer, when, in fact, it's a powerful pattern sequencer. As its name implies, the program relies on JACK for its audio engine, but it also supports PortAudio, ALSA and CoreAudio (on OS X). Using Jackbeat is essentially the same as using a typical software drum machine, with the added attraction of full JACK transport capability. Thanks to that feature, Jackbeat can be used either as a standalone virtual rhythm box, or it can be integrated into a complex environment of other JACK-savvy and/or OSC-aware applications. Jackbeat requires a contemporary Linux distribution to meet its dependencies, but most modern systems should compile the program without complaint. However, if you've replaced JACK with your own build, be aware that Jackbeat's build process may fail at the linking stage with an error message about a missing libjack.la. Apparently, the waf build manager doesn't install (or even build?) the static components needed by the public Jackbeat source package.
I'm happy to report that Olivier Guilyardi has fixed the problem in the latest SVN code, and I was able to compile and install Jackbeat 0.7.1 on my 64 Studio 3 beta box. As advertised, the program is very easy to use. Soundfiles are loaded into tracks, and the track boxes are checked (or not) to create a rhythm. Click the Play transport control, and the beat goes on with rock-steady performance. If you need a lightweight rhythm machine or soundfile sequencer, Jackbeat is definitely recommended for your Linux audio arsenal.

Rationale

Composers who write music in just intonation often are dissatisfied with the bias toward 12-tone equal temperament shown in most modern sequencers. Rising to the challenge, developer Chuckk Hubbard mixed together some Python, some wxWidgets bindings and a few hooks into the Csound API to create Rationale (Figure 4), a unique Csound-based audio sequencer for research and composition in just intonation. Figure 4 shows Rationale's use of ratios to represent justly tuned pitch sequences. This notation describes the ratio difference between the selected pitch level and the 1/1 baseline pitch referent. Ratio notation is preferred by many composers working with just intonation. The Rationale Web site has complete instructions for downloading, building and configuring the program. Its dependencies are lightweight, and I had no problems compiling Rationale from its source code. However, I received this show-stopping error message when I first started the program:

dlphilp@64studio:~/rationale-0.2$ python rationale.py
Traceback (most recent call last):
  File "rationale.py", line 40, in <module>
    import csnd
  File "/usr/local/lib/python2.5/site-packages/csnd.py", line 7, in <module>
    import _csnd
ImportError: dynamic module does not define init function (init_csnd)
The csnd.py module wasn't finding the _csnd.so object, and the solution was a simple PATH export : export PYTHONPATH=$PYTHONPATH:/home/dlphilp/src/Csound5.10.1/:/usr/lib/ I added that line to my $HOME/.bashrc file, refreshed the bash shell (. .bashrc), and Rationale started without complaint. From that point, I experimented with the program, studied the useful docs and had a lot of fun. Making music in just intonation imposes certain constraints on chord progressions and harmony, but the purity of justly tuned intervals has attracted the interest of many composers, particularly the more experimental-minded ones. Famous names in this field of endeavor include Harry Partch, LaMonte Young, Lou Harrison and Terry Riley. Thanks to Chuckk's software, you too may be able to join that exalted company, or if you've ever simply wondered what's the big deal about just intonation, you can fire up Rationale and find out for yourself. SLV2 A note to developers and users of LV2 plugins: Dave Robillard has updated his SLV2 (Simple LV2) library to version 0.6.4. The Changelog for this release indicates that its urgency is rated low. There's no need to rush to install this version, but it might be a good idea to stay up to date with Dave's latest new features and bug fixes. By the way, if you decide to build the new SLV2, you will need to update the Waf build manager to version 1.5.6. midish 0.4.0 I love UNIX-style command-line programs and utilities. They're typically lightweight, have no GUI and work incredibly fast. However, the downside is that they also can be so packed with powerful options that usage can be rather complicated, even with man pages and other on-line help. Midish is one of those utilities that finds a balance between power and usability. Its Web page describes the program as ... a MIDI sequencer/filter driven by a command-line interpreter (like a shell). Once midish [is] started, the interpreter prompts for commands. 
Then, it can be used to configure MIDI devices, create tracks, define channel/controller mappings, route events from one device to another, play/record a song and so on. A look at the midish manual clarifies the extent of this little program's abilities. Again, stealing from its Web page, here's the laundry list of midish's notable features:

- Real-time MIDI filtering/routing (controller mapping, keyboard splitting and so on).
- Track record (with optional metronome).
- Track editing (insert, copy, delete and so on).
- Progressive track quantization.
- Handles multiple MIDI devices.
- Synchronization to external MIDI devices.
- Import/export standard MIDI files.
- Tempo and time-signature changes.
- Handles system-exclusive messages.

The last-listed feature is of special interest to users who need to automate patch parameter changes within a MIDI data stream. Not all MIDI software supports sys-ex messages, so midish may well be your salvation if you need a reliable way to send bulk dumps or individual parameter updates to an external synthesizer or other MIDI device. Midish can be incorporated easily into scripts for controlling your MIDI devices during live shows and other real-time applications. It also can be used for non-real-time purposes, although its true strengths appear to be in the fields of live performance and recording. Alas, I've no time to explore midish further, so I'll just recommend it to my readers and let them discover its extensive capabilities. And, if you're already working with a Linux MIDI setup, you should try midish to see what it can do for your studio. Midish is proof that sometimes big things do arrive in small packages.

LAC2009 On-line

On April 23 Jörn Nettingsmeier sent a message to the LA* mail lists to announce the availability of video footage from the 7th annual Linux Audio Conference held this year at La Casa della Musica in Parma, Italy.
In my last article, I mentioned that the videos are now on-line and are of excellent quality. Since then, the organizers have added more relevant material for all presentations, so with this wealth of documentation I decided to become a virtual attendee of LAC2009. The complete set of materials for each presentation is on-line at the page titled Linux Audio Conference 2009 - Slides, Papers, And More. The "And More" part is of particular interest to the virtual visitor, as it refers to the wonderful videos produced by the A/V team. I must emphasize the superb job done with these videos. They are not merely cropped versions of the original streams; they've been edited for crossfades, title sequences and other niceties associated with desktop video creation. High marks go to the production team for the superb results. I baked a couple of excellent pizzas to lend some Italian flavor to the experience, but alas, I had to do without the Lambrusco and prosciutto parmeggiano. Nevertheless, I had a fine time and learned many new things. Deep gratitude goes out to Frank Neumann, Jörn Nettingsmeier, Marije Baalman, Robin Gareus, the staff of La Casa della Musica, and most of all, to the master of ceremonies Fons Adriaensen. Well done, sir, well done. The future of Linux audio is full of exciting projects carried on by a most talented and imaginative crew of developers. If you doubt it, take the time (anytime, anywhere) to attend LAC2009.

Outro.

LAC2009 Vids

Many thanks for the link to the LAC2009 videos. I've been perusing them to my enjoyment. I particularly enjoyed the keynote by John ffitch; I thought that some very pertinent questions were raised.

midish, pd, wxWindows

Thanks for the updates! I looked at midish a while ago - indeed it looks powerful, but takes some getting used to. Nowadays, I'm using Pure Data (PD) for simple MIDI manipulation: setting 'audiobuf' and 'sleepgrain' to low values (experiment a bit) makes for better latencies.
Oh, one thing: wxWindows has been renamed to wxWidgets for some time now :). Author's reply Hey raboof, thanks for reading and for the correction. I've changed the wxWindows reference to wxWidgets and added its URL to make a link. Best, dp Similis sum folio de quo ludunt venti.
http://www.linuxjournal.com/content/upbeat-about-updates?quicktabs_1=2
#include <iostream>
#include <string>
#include <windows.h>

using std::cout;
using std::cin;
using namespace std;

int main()
{
    unsigned long n;
    char Answer;
    string mystr;
    do {
        cout << " What is your name? ";
        getline (cin, mystr);
        cout << "Hello " << mystr << ".\n";
        cout << "What is one of your hobbies? ";
        getline (cin, mystr);
        cout << "Cool, I also like " << mystr << "!\n";
        cout << "Can you guess my name? ";
        getline (cin, mystr);
        cout << "Haha, good guess " << mystr << " that is funny, trick question I do not have a name! \n";
        cout << "Do you like parkour?(y/n)? ";
        cin >> Answer;
        if(Answer == 'y' || Answer == 'Y')
            cout << "\nYou have been promoted to cool.\n";
        if(Answer == 'n' || Answer == 'N')
            cout << "\nDo you know what it is?(y/n)? ";
        cin >> Answer;
        if(Answer == 'y' || Answer == 'Y')
            cout << "\nGood.";
        if(Answer == 'n' || Answer == 'N')
            cout << "\nGo and Google it, then talk to me again.";
        cout << " I am tired, I will go to sleep now. ";
        cout << "Do you want to do it again? (y/n)";
    } while ( Answer = n);
    return 0;
}

I am writing a program; this is my first program not copied out of a book. I am trying to loop from the while back up to the do. I do not know if this is the correct form of a loop to use because I am a noob. All help and pointers will help :)
*edit* When I run it and get to the end it meshes together and asks 2 questions at the same time
https://www.daniweb.com/programming/software-development/threads/380291/i-need-help-learning-how-to-do-a-loop-
rich:autocomplete - is it desperately missing itemValue= ? Brendan Healey Feb 21, 2012 9:30 AM

I'm really having problems getting rich:autocomplete to work with what I consider a typical use case. I think part of the problem is that I can't find an example, whether in the showcase or in the practical richfaces 2 book, where the value selected by the user is used in any way. Nowhere will you find the value= attribute used, or the selected item used in any other way (i.e. in a value change listener). The basic problem is that everything seems to have been thought about, except storing a reference to what you've selected. A familiar scenario is using f:selectItems like this:

@Entity
public class Team {
    @Id
    long id;
    String name;
}
...
public Team selectedTeam;
public List<Team> teamList = new ArrayList<>();
...
teamList.add(new Team(1, "Liverpool"));
teamList.add(new Team(2, "Chelsea"));
teamList.add(new Team(3, "Manchester United"));
...
<h:selectOneMenu value="#{bean.selectedTeam}">
    <f:selectItems value="#{bean.teamList}" var="team" itemLabel="#{team.name}" itemValue="#{team}"/>
</h:selectOneMenu>

You have a List<Team>, selectItems displays the itemLabel, in this case the team name, and on selection you store a reference to the Team object in selectedTeam. If you wanted to you could easily store the primary key id instead. All pretty straightforward, although a converter is required. A rich:autocomplete version of the above example could look like this:

<rich:autocomplete value="#{bean.selectedTeam}" autocompleteMethod="#{bean.teamSearch}" var="var" fetchValue="#{var.name}">
    <rich:column>
        <h:outputText value="#{var.name}"/>
    </rich:column>
    <a4j:ajax event="selectitem"/>
</rich:autocomplete>

the teamSearch() method returns a List<Team> where the characters typed into the autocomplete input match in some way the team name. If I type in "Liver" we get one row returned. What is supposed to happen is that the value of fetchValue="#{var.name}" replaces any characters typed into the input, so "Liverpool". But fetchValue is also used for the value attribute, so we'll get a ClassCastException trying to cast a String to a Team (selectedTeam).
If you add a converter, the getAsObject method will be called with fetchValue (the name) as the third parameter, which is probably not what you want. So the problem is that you want to internally reference the object by the primary key, but display friendly text to the user, but this seems impossible with the current offering. Just adding the itemValue attribute would solve the immediate problem, so that there's a difference between the text displayed in the input box when you make a selection (fetchValue), and the component value (itemValue). Any thoughts? I've got to say this is totally unusable for me right now, unless I've missed something really obvious. Regards, Brendan. 1. Re: rich:autocomplete - is it desperately missing itemValue= ?ibstmt Feb 22, 2012 1:50 PM (in response to Brendan Healey) This is an example from the demo page. I think "autocompleteList" is what you need. < rich:autocomplete 2. Re: rich:autocomplete - is it desperately missing itemValue= ?Brendan Healey Feb 22, 2012 4:23 PM (in response to ibstmt) Hi, I don't see how, it just accesses a list directly rather than calling a method that returns a list. The fundamental problem remains as far as I can see. Regards, Brendan. 3. Re: rich:autocomplete - is it desperately missing itemValue= ?ibstmt Feb 23, 2012 8:29 AM (in response to Brendan Healey) My apologies -- I use ajax to get the selected item, but I am using RF 3.3. Anyway, there are older threads about the problem you're having: 4. Re: rich:autocomplete - is it desperately missing itemValue= ?Christian Peter Feb 23, 2012 9:46 AM (in response to ibstmt) Hmmm, you'll have to write your own converter since the rich:autocomplete component only handles strings properly. 5. Re: rich:autocomplete - is it desperately missing itemValue= ?Brendan Healey Feb 23, 2012 1:32 PM (in response to Christian Peter) Thanks for the pointers, I spent hours going through the forum reading everything I could find on autocomplete before using it. 
I've still not seen anything that helps me do what I want. I use a converter but it doesn't solve the problem, as its getAsObject is passed fetchValue, which has to be meaningful to the user. The functionality exists to display composite data in the popup with rich:column, say Country & Telephone Number. The information is stored in the Numbers table with an auto-generated primary key and a unique composite index on both columns. In the UI you can select the country from a selectOneMenu before typing in the telephone number - to reduce server load. I type in a few characters in the autocomplete, get a result list and select number 12345 6789012 - which exists in 15 countries. This has got to be what is displayed back to the user in the input field after selection - the phone number. I could include the country, and then re-search the database to get the primary key, but we've already done this lookup. So I still don't get how to get the id of the selected entry, and I'm pretty sure you can't do it without a load of javascript hacks, but if I've misunderstood this I'll happily eat my words. Regards, Brendan.

6. Re: rich:autocomplete - is it desperately missing itemValue= ? Christian Peter Feb 23, 2012 1:45 PM (in response to Brendan Healey) That's what I observed in my evaluation of RF4.x (migrating from RF3.3.3): it seems not possible to submit the contents (object or some attribute) of the iterator variable ("var") of the autocomplete's inner table. By using some javascript hacks (setting an invisible input value or a specific styleClass to the ID or UUID of my object / entity) I was able to retrieve the corresponding object / entity by looking it up again, but this was not a very *nice* approach.

7. Re: rich:autocomplete - is it desperately missing itemValue= ? Brendan Healey Feb 23, 2012 2:39 PM (in response to Christian Peter) Thanks Christian, I was starting to think I'd missed something really obvious. I will take a look at the jira, and vote for it of course.
I wrote an autocomplete custom component, but it took a lot of doing, and there's no rich:column child-tags type functionality. Whilst it was a useful educational exercise I'd rather get back to my core responsibilities. The other option is using a jQuery autocomplete with a servlet back end for retrieving the data, I suppose, but I didn't have a lot of success finding a really good one, and the evaluation process is pretty time consuming. Regards, Brendan.

8. Re: rich:autocomplete - is it desperately missing itemValue= ? Christian Peter Feb 23, 2012 3:23 PM (in response to Brendan Healey) It's very time consuming, indeed. I've also spent almost two workdays searching for a replacement for the rich:suggestionBox. Loading everything to the client (like 10k articles or customers) is not an option. Maybe I'll try the servlet approach as a new prototype.

9. Re: rich:autocomplete - is it desperately missing itemValue= ? Brendan Healey Feb 27, 2012 10:25 AM (in response to Christian Peter) Christian, the thought occurred to me that you could just use a4j:jsFunction with the data= attribute to return JSON encoded data to a jQuery autocomplete component, rather than use a servlet. Just an idea...

10. Re: rich:autocomplete - is it desperately missing itemValue= ? Felix G. Aug 22, 2012 4:19 AM (in response to Brendan Healey) Is there any chance that richfaces becomes useful in the next ten years? The showcase suggests a lot of things that I wanted to do, but in reality none of this works. The examples in the showcases are trivial, because they are all about String. There is no usage of more complex objects, which would be the real use case. I am doomed with richfaces on this project; it is like flying to the moon with a paper plane. The customer says "I want this, and this, and it has to look like this" and my answer is "Well, we can show text. And buttons, great!". Application development like 50 years ago... 11.
Re: rich:autocomplete - is it desperately missing itemValue= ?ibstmt Aug 28, 2012 2:52 PM (in response to Felix G.) The problem is that some functionality was discarded in the switch to 4.0. The suggestion box used to work with objects just fine. Now it doesn't. In our shop, migrating to 4.0 isn't an option until certain show-stoppers (like this one) are fixed. Nobody has time to create new solutions to problems that didn't exist in 3.3. 12. Re: rich:autocomplete - is it desperately missing itemValue= ?Brendan Healey Aug 28, 2012 8:41 PM (in response to ibstmt) Ok, I'll take a look at what needs to be done to implement this myself, it's apparent that the Jboss team are encouraging this kind of thing to happen. I guess it begins with the contributor getting started guide, I hope I can make sense of it. 13. Re: rich:autocomplete - is it desperately missing itemValue= ?ibstmt Aug 29, 2012 8:47 AM (in response to Brendan Healey) That is much appreciated! And it is not like we are asking for a new feature -- we are asking for something that used to work prior to 4.0. 14. Re: rich:autocomplete - is it desperately missing itemValue= ?Brendan Healey Aug 30, 2012 6:55 AM (in response to ibstmt) >The suggestion box used to work with objects just fine. ibstmt - I had a look at the RF3 showcase example for rich:suggestionBox, and as I said in the first paragraph of my OP, the value representing #{result} is never passed to a backing bean. Could you tell me how you do it now please? does fetchValue="#{result}" get written into value="#{bean.selectedItem}"?;jsessionid=E21D8B7185D25B4E3791F7FFF1051548?c=suggestionBox&tab=usage Thanks, Brendan. p.s. I see the JIRA linked by Christian now has 13 votes, but please vote for it if you haven't already.
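Several replies above come back to writing a converter keyed on the primary key. Stripped of the JSF Converter interface (omitted here so the sketch compiles on its own; the class and method names are hypothetical), the core bookkeeping is just a two-way lookup between a stable string key and the entity:

```java
import java.util.HashMap;
import java.util.Map;

// Two-way lookup underlying an entity converter: getAsString emits a
// stable key (the primary key), getAsObject resolves it back to the
// entity. A real JSF converter would implement javax.faces.convert.Converter
// and look the entity up in a repository rather than a map.
final class TeamConverter {
    private final Map<String, Team> byId = new HashMap<>();

    void register(Team t) {
        byId.put(Long.toString(t.id), t);
    }

    String getAsString(Team t) {
        return Long.toString(t.id);
    }

    Team getAsObject(String id) {
        return byId.get(id); // null when the key is unknown
    }
}

final class Team {
    final long id;
    final String name;

    Team(long id, String name) {
        this.id = id;
        this.name = name;
    }
}
```

The thread's complaint still stands, though: this only helps once the component actually submits the key rather than the fetchValue display text.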
https://developer.jboss.org/message/718894?tstart=0
OpenTelemetry Java: All you need to know by Ted Young

Hi all, tedsuo here, back with our third installment of All you need to know. Today we’re going to go over Java, and how to instrument it.

Getting started with OpenTelemetry Java: TL;DR

All you need to know is:

- Initialization: How to attach the OpenTelemetry Java Agent.
- Tracer methods: getTracer, currentSpan, startSpan, and setCurrent.
- Span methods: setAttribute, addEvent, recordException, setStatus, and end.

Seriously, that’s it. If you want to try it out, follow the guide below. A heavily commented version of the finished tutorial can be found at. Consider walking through this tutorial with the code open.

Hello, world

For this tutorial, we’re going to make a very, very simple application: a web servlet that responds to /hello with “Hello World.” Let’s make the application. The hello world server has two pieces. The Jetty handler looks like this (the class and import lines here are restored from context; only the method body survived extraction):

package com.lightstep.examples.server;

import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.eclipse.jetty.servlet.ServletContextHandler;
import org.eclipse.jetty.servlet.ServletHolder;

public class ApiContextHandler extends ServletContextHandler {
  public ApiContextHandler() {
    addServlet(new ServletHolder(new ApiServlet()), "/hello");
  }

  public static final class ApiServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse res)
        throws IOException {
      // pretend to do work
      try {
        Thread.sleep(500);
      } catch (InterruptedException e) {
      }
      // respond
      try (PrintWriter writer = res.getWriter()) {
        writer.write("Hello World");
      }
    }
  }
}

Set up and run your jetty server:

package com.lightstep.examples.server;

import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.Handler;
import org.eclipse.jetty.server.handler.ContextHandlerCollection;

public class App {
  public static void main( String[] args ) throws Exception {
    ContextHandlerCollection handlers = new ContextHandlerCollection();
    handlers.setHandlers(new Handler[] { new ApiContextHandler(), });
    Server server = new Server(9000);
    server.setHandler(handlers);
    server.start();
    server.dumpStdErr();
    server.join();
  }
}

To talk to this server, create a simple client that makes 5 calls to /hello.
package com.lightstep.examples.client;

import java.io.IOException;
import okhttp3.OkHttpClient;
import okhttp3.Request;
import okhttp3.Response;

public class App {
  public static void main( String[] args ) throws Exception {
    for (int i = 0; i < 5; i++) {
      makeRequest();
    }
  }

  static void makeRequest() {
    OkHttpClient client = new OkHttpClient();
    Request req = new Request.Builder()
        .url("")
        .build();
    try (Response res = client.newCall(req).execute()) {
      System.out.println("make request");
    } catch (Exception e) {
      System.out.println(String.format("Request failed: %s", e));
    }
  }
}

Boot up the server and check that it works, then try running the client against it.

Java Agent

Automatic instrumentation is handled by attaching the Java agent via the launcher. For now, just download the launcher jar file from the latest release of the launcher.

Run your application with OpenTelemetry

export LS_ACCESS_TOKEN=my-access-token-etc

java -javaagent:lightstep-opentelemetry-javaagent-0.11.0.jar \
  -Dls.service.name=hello-server \
  -Dotel.propagators=tracecontext,b3 \
  -Dotel.resource.attributes="something=else,container.name=my-container" \
  -Dotel.bsp.schedule.delay.millis=200 \
  -cp server/target/server-1.0-SNAPSHOT.jar \
  com.lightstep.examples.server.App

java -javaagent:lightstep-opentelemetry-javaagent-0.11.0.jar \
  -Dls.service.name=hello-client \
  -Dotel.propagators=tracecontext,b3 \
  -Dotel.bsp.schedule.delay.millis=200 \
  -cp client/target/client-1.0-SNAPSHOT.jar \
  com.lightstep.examples.client.App

Switch over to Lightstep, or your backend of choice, and confirm the spans were received: Yup, we see spans. Click through and look at a trace: Notice that we see a client span from hello-client, a server span from hello-server, and several internal spans representing the HTTP client and server operations.

Java API

Ok, so the out-of-the-box experience will get you a long way, but of course, you will eventually want to add additional application data. Spans should ideally be managed by your application framework; in this case, the servlet instrumentation manages them.
Import the OpenTelemetry API itself, then get a new tracer in your ApiContextHandler:

import io.opentelemetry.api.trace.Tracer;

The name of the tracer appears on every span as the instrumentation.name attribute. This is useful for investigating instrumentation issues. We can then get our current span from the context by calling Span.current(), as seen in the server code below - attributes and events can be added to whatever the “current” span is after you have a handle on the span in context. Run your server and client again, and you will see these new attributes and events show up on the same spans.

package com.lightstep.examples.server;

import io.opentelemetry.api.OpenTelemetry;
import io.opentelemetry.api.common.AttributeKey;
import io.opentelemetry.api.common.Attributes;
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.StatusCode;
import io.opentelemetry.api.trace.Tracer;
import io.opentelemetry.context.Scope;

// inside the servlet's doGet:
// access the current span that has automatically been created by
// the servlet instrumentation
Span span = Span.current();
// define the route name using semantic conventions
span.setAttribute("http.route", "hello");
// pretend to do work
try {
  Thread.sleep(500);
} catch (InterruptedException e) {
}
try (PrintWriter writer = res.getWriter()) {
  // events are structured logs, contextualized by the trace.
  span.addEvent("writing response",
      Attributes.of(AttributeKey.stringKey("content"), "hello world"));
  writer.write("Hello World");
}

Creating your own spans

You can also create your own spans. These spans will automatically become children of the current span and will be added to the trace. Span management involves three steps: starting the span, setting it as the current span, and ending the span. When you create a new span in Java, OpenTelemetry will create it as a child of the current span, if one exists. Call the spanBuilder method on your tracer to start a new one.
Name the span after the operation you are measuring. Advice on naming can be found in the tracing specification.

*IMPORTANT: make sure to end the span when your operation finishes, or you will have a leak!*

After creating a new span, use a Scope to create a new block of code where the child span is the current one. Any calls to Span.current() inside this scope will return your child span, rather than the parent for the request. Other methods continue to work as normal. Once you’re finished, be sure to close your span by calling the end() method on it. After span.end() is called, spans are queued up to be exported in the next flush. Calls to setAttribute and addEvent become no-ops after span.end() is called.

// start a child span
Span childSpan = tracer.spanBuilder("my-server-span").startSpan();
try (Scope scope = childSpan.makeCurrent()) {
  // inside the new scope, Span.current() returns childSpan.
  // note that span methods can be chained.
  Span.current().setAttribute("ProjectId", "456");
} finally {
  // make sure to close the span
  childSpan.end();
}

Should you create new spans in this way? It depends on the organization and size of your service, primarily. If you have a significant amount of work that needs to be measured independently of the overall work being performed, then adding child spans to your code can be useful. However, it can often be easier and more beneficial to simply add more events or attributes to a single span rather than creating many smaller ones per-service.

Error Handling

There is one final type of event that deserves special attention: exceptions. In OpenTelemetry, exceptions are recorded as events. But, to ensure that the exception is properly formatted, the span.recordException(error) method should be used instead of addEvent.

childSpan.recordException(new RuntimeException("oops"));
childSpan.setStatus(StatusCode.ERROR);

Hopefully, that was pretty straightforward and clears up any mysteries about how to use OpenTelemetry.
If you stick with the above patterns, you can get a great deal of visibility with very little work. Of course, there are many more details and options; you can check out the.
https://lightstep.com/blog/opentelemetry-java/
Say you are making a library app and you have created a database. You don’t want to add books manually in the database, and you don’t want to update or delete by going to a database table and pressing delete there. So you want to make a platform where you can create entries for books, like a form. You will enter everything about a book: the book name, ISBN number and book author. After creating, you want to view all the books and their details. While viewing, you see a book name is wrongly typed, so you update the book name. At last, if any book is outdated, you have to delete that book, so you delete the book. In this Django tutorial, I will be making a Django CRUD application.

Django CRUD Application: I am not making class-based views but function-based views, so everything in this article will be function-based views. First things first. Create a project; you can name it whatever you like. I name it sms (School Management System).

django-admin startproject sms

Now let's create an app named student, because I am creating a database for students. All the details of students will be there.

python manage.py startapp student

Let's start with the Create view.

Django CRUD App: Create View and Read View

Go to models.py in your student app and add the following lines.

from django.db import models
from django.urls import reverse

# Create your models here.
class Student(models.Model):
    student_name = models.CharField(max_length=100)
    father_name = models.CharField(max_length=100)
    date_of_admission = models.DateTimeField(auto_now_add=True)

    def __str__(self):
        return self.student_name

    def get_absolute_url(self):
        return reverse('student-detail', kwargs={'id': self.id})

Now migrate.

python manage.py makemigrations
python manage.py migrate

You must be thinking: what is the function get_absolute_url(self)? It's the function to go to the detail of a student, and 'student-detail' is the name of the URL path in urls.py. Go to admin.py of your app folder and add the following lines:

# Register your models here.
from .models import Student

admin.site.register(Student)

Now add the following lines to your project's urls.py. We have to configure the urls.py file; it is very important.

sms/urls.py (this is the main project's urls.py):

from django.contrib import admin
from django.urls import path, include

urlpatterns = [
    path('admin/', admin.site.urls),
    path('student/', include('student.urls')),  # add this
]

Now create urls.py in your student app and add the following.

student/urls.py (this is the app's urls.py):

from django.urls import path
from . import views

urlpatterns = [
    path('', views.student_list, name='student-list'),
    path('<int:id>/', views.student_detail, name='student-detail'),
    path('create_student/', views.student_form, name='student-form'),
    path('update_student/<int:id>/', views.update_form, name='update-student'),
    path('delete_student/<int:id>/', views.delete_student, name='delete-student'),
]

Don't worry about all the scary looking paths. You will understand everything once I finish this tutorial. Now go to the views.py file in your app student and add the following code.

from django.shortcuts import render, get_object_or_404, redirect
from .models import Student

def student_list(request):
    stu = Student.objects.all()
    context = {
        'stu': stu
    }
    return render(request, 'student/student_list.html', context)

This is the detail view:

def student_detail(request, id):
    student_query = get_object_or_404(Student, id=id)
    context = {
        'student_query': student_query
    }
    return render(request, 'student/student_detail.html', context)

Now add a templates folder in your student app. After creating the templates folder, create a student folder inside it. Now create student_list.html and student_detail.html. Go to student_list.html and add the following lines inside the <body> </body> tag. The above code will display all the students available in the database. Now go to student_detail.html and add the following lines.
These lines will be under <body> </body>.

Create View: We have to make a form for the create view, just like the one in the admin panel, but we can't give access to the admin panel to everyone. We need to make another page where other users can enter information into the database, and view, update and delete from there. Create forms.py in your app's folder.

student/forms.py — add the following lines.

from django import forms
from .models import Student

class StudentForm(forms.ModelForm):
    class Meta:
        model = Student
        fields = '__all__'

Now go to views.py and add the following lines:

from .forms import StudentForm  # you must import this from forms.py

def student_form(request):
    form = StudentForm(request.POST)
    if form.is_valid():
        form.save()
        return redirect('/student/')
    context = {
        'form': form
    }
    return render(request, 'student/create_student.html', context)

Now you must create an html file inside your app. Go to templates, then go to student, and create an html file named create_student.html. Inside <body> </body> add the form markup. Then add the update and delete views to views.py:

def update_form(request, id):
    student = Student.objects.get(id=id)
    form = StudentForm(instance=student)
    if request.method == 'POST':
        form = StudentForm(request.POST, instance=student)
        form.save()
        return redirect('/student/')
    context = {
        'form': form
    }
    return render(request, 'student/create_student.html', context)

def delete_student(request, id):
    student = Student.objects.get(id=id)
    if request.method == 'POST':
        student.delete()
        return redirect('/student/')
    context = {
        'del_student': student
    }
    return render(request, 'student/student_delete.html', context)

Conclusion: Django CRUD Application

I have not done any styling in this article; you should do some styling using Bootstrap and CSS. That's up to you. I have shown you how to create a Django CRUD application. I hope you have learnt this and will apply it to your project. This is just a demo app.
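As an appendix: the templates referenced above (student_list.html, create_student.html, student_delete.html) are not shown in full. Minimal sketches consistent with the context variables used in the view code — hypothetical markup, not the author's original files — could look like this:

```html
<!-- student_list.html: link each student to its detail page -->
{% for student in stu %}
  <p><a href="{{ student.get_absolute_url }}">{{ student.student_name }}</a></p>
{% endfor %}
<a href="{% url 'student-form' %}">Add a student</a>

<!-- create_student.html: shared by the create and update views -->
<form method="POST">
  {% csrf_token %}
  {{ form.as_p }}
  <button type="submit">Save</button>
</form>

<!-- student_delete.html: confirm before deleting -->
<form method="POST">
  {% csrf_token %}
  <p>Delete {{ del_student }}?</p>
  <button type="submit">Yes, delete</button>
</form>
```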
Also read: Make Password generator with Django 3.0
Also read: Python facts you must know about
http://www.geekyshadow.com/2020/07/01/django-crud-application-django-3/
Aaron, When you mention "default" namespace, are you instead trying to use a "no namespace" arena? XML documents and fragments with no target namespace prefix are allocated to the "no namespace" arena. A namespace_name can be associated with more than one prefix. The use of any of the prefixes can then validate to a proper namespace. Namespaces are bound to prefixes by the XML Parser. A Stylesheet can have its own namespace prefix bindings that are emitted by the XML output method. Nested namespace (xmlns:prefix='namespace_name') assignments are allowed. Parent namespaces that share a prefix that has been redefined in the child element are not available within the scope of the child element. Any namespace (xmlns:prefix) mappings in the parent are inherited by the child, but can be overwritten by the child context. Xalan now has the ability to associate parsed XML documents with a top-level parameter name. Documents are parsed using the Xerces-C XML Parser library. The way a resolver works is to map a URL or system file pathname to a resource. The resolver then pushes a reader onto the XML Reader stack to start reading from the new object (XML Document or fragment). The XPath document() function returns a parsed nodeset object. It uses the installed entity resolver that performs the above mentioned operation by submitting a bytestream of data to an XML parser so that a parsed nodeset object can be returned by document(). Sincerely, Steven J. Hathaway > Hi, > > I need to get some values from a number of xml sources that have default > namespaces. I've implemented the code from the sample SimplXPathAPI and > got > this to work if I removed the default namespaces but, if I leave them in, > selectSingleNode returns NULL. I was able to remove the namespaces for my > test data (or add prefixes), but unfortunately I will not be able to do > this > with the real data. 
> > I've done an exhaustive search of the xalan-c forums and the internet in > general for the past several days and I've come up with a few things but I > don't think I'm heading in the right direction any longer. I've > implemented > a PrefixResolver derivative (myPrefixResolver) and I'm passing an instance > of that into the XPathEvaluator, so the XPath implementation can map the > namespace prefix to the proper namespace URI. The problem is that I'm > having trouble mapping the prefix to the namespace. > > Optimally I would like to have a function in my PrefixResolver derivative > that takes a prefix and a uri and adds it to my namespace. > > I've come up with the following (and many other variations) but it crashes > when I try to set the node value. > > void > myPrefixResolver::setPrefix(const XalanDOMString& prefix, const > XalanDOMString& uri) > { > AttributeVectorType myVector(XalanMemMgrs::getDefaultXercesMemMgr(), 1); > myVector.front()->setNodeValue(uri); > > m_namespaces.insert(&prefix, myVector); > } > > I've tried many things including tying to clone an existing XalanNode and > creating my vector this way. I haven't deviated much from the way > XalanDocumentPrefixResolver derived from PrefixResolver. I am a bit > desperate at this point because I have a very tight schedule and I thought > this part would be the easy part. > > Any help that can be proffered would be extremely appreciated! > > Thanks, > Aaron > > > > > > > > -- > View this message in context: > > Sent from the Xalan - C - Users mailing list archive at Nabble.com. > >
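Setting the Xalan types aside, the bookkeeping the poster's setPrefix() method needs is just a prefix-to-URI map. A plain-C++ sketch, with std::string standing in for XalanDOMString (the class and method names here are illustrative; adapt the types before wiring this into a real PrefixResolver subclass):

```cpp
#include <map>
#include <string>

// Minimal prefix-to-namespace-URI table. A real Xalan PrefixResolver
// returns pointers to XalanDOMString, but the underlying data structure
// and lookup logic are the same.
class SimplePrefixResolver {
public:
    // Bind `prefix` to `uri`, replacing any earlier binding.
    void setPrefix(const std::string& prefix, const std::string& uri) {
        m_namespaces[prefix] = uri;
    }

    // Return the URI bound to `prefix`, or an empty string if unbound.
    std::string getNamespaceForPrefix(const std::string& prefix) const {
        std::map<std::string, std::string>::const_iterator it =
            m_namespaces.find(prefix);
        return it == m_namespaces.end() ? std::string() : it->second;
    }

private:
    std::map<std::string, std::string> m_namespaces;
};
```

Owning the strings by value this way also sidesteps the crash in the posted code, which stored a pointer to a caller-owned prefix.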
http://mail-archives.apache.org/mod_mbox/xalan-c-users/201309.mbox/%3C28553.159.121.180.80.1380136614.squirrel@webmail.iinet.com%3E
The QNetworkAccessManager class allows the application to send network requests and receive replies. More...

#include <QNetworkAccessManager>

This class is not part of the Qt GUI Framework Edition. Inherits QObject.

cache(): Returns the cache that is used to store data obtained from the network. This function was introduced in Qt 4.5. See also setCache().

head(): Posts a request to obtain the network headers for request and returns a new QNetworkReply object which will contain such headers. The function is named after the HTTP request associated (HEAD). See also get(), post(), put(), and deleteResource().

put() (overload): This is an overloaded function. Sends the contents of the data byte array to the destination specified by request. See also get() and post().

post() (overload): This is an overloaded function. Sends the contents of the data byte array to the destination specified by request.

setCookieJar(): QNetworkAccessManager will set the parent of the cookieJar passed to itself, so that the cookie jar is deleted together with this object. See also cookieJar().
https://doc.qt.io/archives/4.6/qnetworkaccessmanager.html
Welcome to the fifth installment of "Twisted Web in 60 seconds". In the previous installment, I demonstrated how a Twisted Web server can decide how to respond to requests based on dynamic inspection of the request URL. In this installment, I'll show you how to extend such dynamic dispatch to return a 404 (not found) response when a client requests a non-existent URL. As in the previous installments, we'll start with Site, Resource, and reactor imports (see the first and second installments for explanations of these):

from twisted.web.server import Site
from twisted.web.resource import Resource
from twisted.internet import reactor

Next, we'll add one more import. NoResource is one of the pre-defined error resources provided by Twisted Web. It generates the necessary 404 response code and renders a simple html page telling the client there is no such resource.

from twisted.web.error import NoResource

With the getChild method of the calendar resource from the previous installment returning a NoResource instance whenever the URL segment can't be parsed as a year, the only other thing left to do is the normal Site and reactor setup. Here's the complete code for this example (the resource classes are restored here from the previous installment's description; only the import and setup lines survived extraction):

from twisted.web.server import Site
from twisted.web.resource import Resource
from twisted.internet import reactor
from twisted.web.error import NoResource
from calendar import calendar

class YearPage(Resource):
    def __init__(self, year):
        Resource.__init__(self)
        self.year = year

    def render_GET(self, request):
        return "<html><body><pre>%s</pre></body></html>" % (calendar(self.year),)

class Calendar(Resource):
    def getChild(self, name, request):
        try:
            year = int(name)
        except ValueError:
            return NoResource()
        return YearPage(year)

root = Calendar()
factory = Site(root)
reactor.listenTCP(8880, factory)
reactor.run()

This server hands out the same calendar views as the one from the previous installment, but it will also hand out a nice error page with a 404 response when a request is made for a URL which cannot be interpreted as a year. Next time I'll show you how you can define resources like NoResource yourself.
I'm catching up on the articles and it's been filling in a bunch of little gaps in my knowledge, so many thanks. A couple of suggestions for other related topics to blog on: * Understanding deferreds * Authentication * Perspective Broker In any case, on to my question. I'm noticing a trend with your tutorials, as well as other folks' examples, that the general approach seems to be to create one Resource for the logic of deciding what page to hand out, and a different resource for each actual page (or related pages). Is this the "common" approach? Why not incorporate the render_* methods into the logic Resource? Thanks again for the series.
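For readers who want to see the article's dispatch-or-404 logic in isolation, here is a minimal sketch as a plain function using only the standard library; the function name and HTML strings are illustrative, not from the article, and a real server would wrap this in Twisted's Resource classes as shown above:

```python
from calendar import calendar

def dispatch(path):
    """Map a request path like '/2020' to a (status, body) pair.

    Mirrors the article's logic: a path segment that parses as a
    year yields a calendar page; anything else yields a 404 page,
    the equivalent of returning NoResource() in Twisted.
    """
    name = path.strip("/")
    try:
        year = int(name)
    except ValueError:
        return 404, "<html><body>No Such Resource</body></html>"
    return 200, "<html><body><pre>%s</pre></body></html>" % (calendar(year),)
```

Calling dispatch("/2020") returns a 200 status with the year's calendar, while dispatch("/not-a-year") returns the 404 page.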
https://as.ynchrono.us/2009/09/twisted-web-in-60-seconds-error_22.html
Padre::Plugin::Moose - Moose, Mouse and MooseX::Declare support for Padre. Install it with cpan Padre::Plugin::Moose, then use it via Padre, The Perl IDE. Press F8. Once you enable this plugin under Padre, you'll get a brand new menu with the following options: Opens up a user-friendly dialog where you can add classes, roles and their members. The dialog contains a tree view of the created class and role elements and a preview of the generated Perl code. It also contains links to Moose online references. Provides the ability to change the operation type (Moose, Mouse or MooseX::Declare) and toggle the usage of namespace::clean, comments and sample usage code generation. Moose/Mouse and MooseX::Declare keywords are highlighted automatically in any Perl document; the operation type determines what to highlight. Please report any bugs or feature requests to bug-padre-plugin-moose.
http://search.cpan.org/dist/Padre-Plugin-Moose/lib/Padre/Plugin/Moose.pm
You are in an environment where you do not have access to a built-in, cryptographically strong pseudo-random number generator. You have obtained enough entropy to seed a pseudo-random generator, but you lack a generator. For general-purpose use, we recommend a pseudo-random number generator based on the AES encryption algorithm run in counter (CTR) mode (see Recipe 5.9). This generator has the best theoretical security assurance, assuming that the underlying cryptographic primitive is secure. If you would prefer a generator based on a hash function, you can run HMAC-SHA1 (see Recipe 6.10) in counter mode. In addition, the keystream of a secure stream cipher can be used as a pseudo-random number generator. Stream ciphers are actually cryptographic pseudo-random number generators. One major practical differentiator between the two terms is whether you are using the output of the generator to perform encryption. If you are, it is a stream cipher; otherwise, it is a cryptographic pseudo-random number generator. Another difference is that, when you are using a stream cipher to encrypt data, you need to be able to reproduce the same stream of output to decrypt the encrypted data. With a cryptographic PRNG, there is generally no need to be able to reproduce a data stream. Therefore, the generator can be reseeded at any time to help protect against internal state guessing attacks, which is analogous to rekeying a stream cipher. The primary concern with a good cryptographic PRNG at the application level is internal state compromise, which would allow an attacker to predict its output. As long as the cryptographic algorithms used by a PRNG are not broken and the generator is not used to produce more output than it is designed to support, state compromise is generally not feasible by simply looking at the generator's output. The number of outputs supported by a generator varies based on the best attacks possible for whatever cryptographic algorithms are in use. 
The risk of state compromise is generally not a big deal when dealing with something like /dev/random, where the generator is in the kernel. The only way to compromise the state is to be inside the kernel. If that's possible, there are much bigger problems than /dev/urandom or CryptGenRandom( ) producing data that an attacker can guess. In the user space, state compromise may be more of an issue, though. You need to work through the threats about which you are worried. Threats are likely to come only from code on the local machine, but what code? Are you worried about malicious applications running with the same permissions being able to somehow peer inside the current process to get the internal state? If so, perhaps you should have a separate process that only provides entropy and runs with a set of permissions where only itself and the superuser would be a concern (this is the recommended approach for using the EGADS package discussed in Recipe 11.8). If state compromise is a potential issue, you might have to worry about more than an attacker guessing future outputs. You might also have to worry about an attacker backtracking, which means compromising previous outputs the generator made. Reseeding the generator periodically, as discussed in Recipe 11.6, can solve this problem. At best, an attacker should only be able to backtrack to the last reseeding (you can reseed without new entropy to mix in). In practice, few people should have to worry very much about state compromise of their cryptographic PRNG. As was the case at the operating system level, if such attacks are a realistic threat, you will usually have far bigger threats, and mitigating those threats will help mitigate this one as well. There is a lot that can go wrong when using a pseudo-random number generator. Coming up with a good construct turns out to be the easy part. 
Here are some things you should closely consider: Pseudo-random number generators need to be seeded with an adequate amount of entropy; otherwise, they are still potentially predictable. We recommend at least 80 bits. See the various recipes elsewhere in this chapter for information on collecting entropy. Be careful to pay attention to the maximum number of outputs a generator can produce before it will need to be reseeded with new entropy. At some point, generators start to leak information and will generally fall into a cycle. Note, though, that for the configurations we present, you will probably never need to worry about the limit in practice. For example, the generator based on AES-128 leaks a bit of information after 2^64 16-byte blocks of output, and cycles after 2^128 such blocks. When adding entropy to a system, it is best to collect a lot of entropy and seed all at once, instead of seeding a little bit at a time. We will illustrate why by example. Suppose that you seed a generator with one bit of entropy. An attacker has only one bit to guess, which can be done accurately after two outputs. If the attacker completely compromises the state after two outputs, and we then add another bit of entropy, he can once again guess the state easily. If we add one bit 128 times, there is still very little security overall if the generator state is compromised. However, if you add 128 bits of entropy to the generator all at once, an attack should essentially be infeasible. If an attacker can somehow compromise the internal state of a pseudo-random number generator, then it might be possible to launch a backtracking attack, where old generator outputs can be recovered. Such attacks are easy to thwart; see Recipe 11.6. In the following three subsections, we will look at three different techniques for pseudo-random number generators: using a block cipher such as AES, using a stream cipher directly, and using a cryptographic hash function such as SHA1.
If you are in an environment where you have use of a good block cipher such as AES, you have the makings of a cryptographically strong pseudo-random number generator. Many of the encryption modes for turning a block cipher into a stream cipher are useful for this task, but CTR mode has the nicest properties. Essentially, you create random outputs one block at a time by encrypting a counter that is incremented after every encryption operation. The seed should be at least as large as the key size of the cipher, because it will be used to key a block cipher. In addition, it is useful to have additional seed data that sets the first plaintext (counter) value. Our implementation is based on the code in Recipe 5.5 and has two exported routines. The first initializes a random number generator: void spc_bcprng_init(SPC_BCPRNG_CTX *prng, unsigned char *key, int kl, unsigned char *x, int xl); This function has the following arguments: prng: Pointer to a context object that holds the state for a block cipher-based PRNG. The caller may allocate the context object either dynamically or statically; this function will initialize it. key: Buffer that should contain entropic data. This data is used to key the block cipher, and it is the required portion of the seed to the generator. kl: Length of the key buffer in bytes; must be a valid value for the algorithm in use. x: Buffer that may contain extra seed data, which we recommend you use if you have available entropy. If the specified size of this buffer is zero, this argument will be ignored. Note that if the buffer is larger than SPC_BLOCK_LEN (see Recipe 5.5), any additional data in the buffer will be ignored. Therefore, if you have sparse amounts of entropy, compress it to the right length before calling this function, as discussed in Recipe 11.16. xl: Length of the extra seed buffer in bytes. It may be specified as zero to indicate that there is no extra seed data.
Once you have an instantiated generator, you can get cryptographically strong pseudo-random data from it with the following function: unsigned char *spc_bcprng_rand(SPC_BCPRNG_CTX *prng, unsigned char *buf, size_t l); This function has the following arguments: prng: Pointer to the generator's context object. buf: Buffer into which the random data will be written. l: Number of bytes that should be placed into the output buffer. This function never fails (save for a catastrophic error in encryption), and it returns the address of the output buffer. Here is an implementation of this generator API, which makes use of the block cipher interface we developed in Recipe 5.5: /* NOTE: This code should be augmented to reseed after each request * for pseudo-random data, as discussed in Recipe 11.6. */ #ifndef WIN32 #include <string.h> #include <pthread.h> #else #include <windows.h> #endif /* If encryption operations fail, you passed in a bad key size or are using a * hardware API that failed. In that case, be sure to perform error checking. 
*/ typedef struct { SPC_KEY_SCHED ks; unsigned char ctr[SPC_BLOCK_SZ]; unsigned char lo[SPC_BLOCK_SZ]; /* Leftover block of output */ int ix; /* index into lo */ int kl; /* The length of key used to key the cipher */ } SPC_BCPRNG_CTX; #ifndef WIN32 static pthread_mutex_t spc_bcprng_mutex = PTHREAD_MUTEX_INITIALIZER; #define SPC_BCPRNG_LOCK( ) pthread_mutex_lock(&spc_bcprng_mutex); #define SPC_BCPRNG_UNLOCK( ) pthread_mutex_unlock(&spc_bcprng_mutex); #else static HANDLE hSpcBCPRNGMutex; #define SPC_BCPRNG_LOCK( ) WaitForSingleObject(hSpcBCPRNGMutex, INFINITE) #define SPC_BCPRNG_UNLOCK( ) ReleaseMutex(hSpcBCPRNGMutex) #endif static void spc_increment_counter(SPC_BCPRNG_CTX *prng) { int i = SPC_BLOCK_SZ; while (i--) if (++prng->ctr[i]) return; } void spc_bcprng_init(SPC_BCPRNG_CTX *prng, unsigned char *key, int kl, unsigned char *x, int xl) { int i = 0; SPC_BCPRNG_LOCK( ); SPC_ENCRYPT_INIT(&(prng->ks), key, kl); memset(prng->ctr, 0, SPC_BLOCK_SZ); while (xl-- && i < SPC_BLOCK_SZ) prng->ctr[i++] = *x++; prng->ix = 0; prng->kl = kl; SPC_BCPRNG_UNLOCK( ); } unsigned char *spc_bcprng_rand(SPC_BCPRNG_CTX *prng, unsigned char *buf, size_t l) { unsigned char *p; SPC_BCPRNG_LOCK( ); for (p = buf; prng->ix && l; l--) { *p++ = prng->lo[prng->ix++]; prng->ix %= SPC_BLOCK_SZ; } while (l >= SPC_BLOCK_SZ) { SPC_DO_ENCRYPT(&(prng->ks), prng->ctr, p); spc_increment_counter(prng); p += SPC_BLOCK_SZ; l -= SPC_BLOCK_SZ; } if (l) { SPC_DO_ENCRYPT(&(prng->ks), prng->ctr, prng->lo); spc_increment_counter(prng); prng->ix = l; while (l--) p[l] = prng->lo[l]; } SPC_BCPRNG_UNLOCK( ); return buf; } If your block cipher has 64-bit blocks and has no practical weaknesses, do not use this generator for more than 2^35 bytes of output (2^32 block cipher calls). If the cipher has 128-bit blocks, do not exceed 2^68 bytes of output (2^64 block cipher calls). 
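The counter update in spc_increment_counter treats the counter block as one big-endian integer, carrying from the last byte upward and wrapping to all zeros on overflow. As a cross-check, the same logic can be sketched in Python (the function name is illustrative, not from the book):

```python
def increment_counter(ctr):
    """Increment a counter held as a mutable list of byte values,
    big-endian, wrapping to all zeros on overflow. Mirrors the
    C loop: start at the last byte, stop as soon as a byte does
    not carry."""
    i = len(ctr)
    while i:
        i -= 1
        ctr[i] = (ctr[i] + 1) & 0xFF
        if ctr[i]:        # no carry needed; done
            return ctr
    return ctr            # every byte carried: wrapped to zeros
```

For example, incrementing [0x00, 0x00, 0xFF] yields [0x00, 0x01, 0x00], and incrementing an all-0xFF counter wraps it to all zeros.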
If using a 128-bit block cipher, it is generally acceptable not to check for this condition, as you generally would not reasonably expect to ever use that many bytes of output. To bind this cryptographic PRNG to the API in Recipe 11.2, you can use a single global generator context that you seed in spc_rand_init( ), requiring you to get a secure seed. Note that you should check that the generator is seeded before calling spc_bcprng_rand( ). As we mentioned, stream ciphers are themselves pseudo-random number generators, where the key (and the initialization vector, if appropriate) constitutes the seed. If you are planning to use such a cipher, we strongly recommend the SNOW 2.0 cipher, discussed in Recipe 5.2. Because of the popularity of the RC4 cipher, we expect that people will prefer to use RC4, even though it does not look as good as SNOW. The RC4 stream cipher does make an acceptable pseudo-random number generator, and it is incredibly fast if you do not rekey frequently (that is particularly useful if you expect to need a heck of a lot of numbers). If you do rekey frequently to avoid backtracking attacks, a block cipher-based approach may be faster; time it to make sure. RC4 requires a little bit of work to use properly, given a standard API. First, most APIs want you to pass in data to encrypt. Because you want only the raw keystream, you must always pass in zeros. Second, be sure to use RC4 in a secure manner, as discussed in Recipe 5.23. If your RC4 implementation has the API discussed in Recipe 5.23, seeding it as a pseudo-random number generator is the same as keying the algorithm. RC4 can accept keys up to 256 bytes in length. After encrypting 256 bytes and throwing the results away, you can then, given an RC4 context, get random data by encrypting zeros. 
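To make the "encrypt zeros to get the keystream" idea concrete, here is a from-scratch Python sketch of RC4; the key-scheduling and output loops are the standard published algorithm, but the function names are my own, this is not the Recipe 5.23 API, and RC4 should not be used in new designs:

```python
def rc4_init(key):
    """Key-scheduling algorithm (KSA): returns the 256-byte state
    permutation derived from the key (bytes, up to 256 bytes)."""
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    return S

def rc4_stream(S, n):
    """Pseudo-random generation algorithm (PRGA): produce n keystream
    bytes, mutating state S. Using these bytes directly is the PRNG
    use; XORing them with plaintext is the cipher use, so encrypting
    a buffer of zeros yields exactly this keystream."""
    i = j = 0
    out = bytearray()
    for _ in range(n):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)
```

Note this sketch does not discard the first 256 output bytes; as the text advises, a careful user would generate and throw away that much keystream after keying before trusting the output.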
Assuming the RC4 API from Recipe 5.23 and assuming you have a context statically allocated in a global variable named spc_prng, here's a binding of RC4 to the spc_rand( ) function that we introduced in Recipe 11.2: /* NOTE: This code should be augmented to reseed after each request * for pseudo-random data, as discussed in Recipe 11.6. */ #ifndef WIN32 #include <pthread.h> static pthread_mutex_t spc_rc4rng_mutex = PTHREAD_MUTEX_INITIALIZER; #define SPC_RC4RNG_LOCK( ) pthread_mutex_lock(&spc_rc4rng_mutex) #define SPC_RC4RNG_UNLOCK( ) pthread_mutex_unlock(&spc_rc4rng_mutex) #else #include <windows.h> static HANDLE hSpcRC4RNGMutex; #define SPC_RC4RNG_LOCK( ) WaitForSingleObject(hSpcRC4RNGMutex, INFINITE) #define SPC_RC4RNG_UNLOCK( ) ReleaseMutex(hSpcRC4RNGMutex) #endif #define SPC_ARBITRARY_SIZE 16 unsigned char *spc_rand(unsigned char *buf, size_t l) { static unsigned char zeros[SPC_ARBITRARY_SIZE] = {0,}; unsigned char *p = buf; #ifdef WIN32 if (!hSpcRC4RNGMutex) hSpcRC4RNGMutex = CreateMutex(0, FALSE, 0); #endif SPC_RC4RNG_LOCK( ); while (l >= SPC_ARBITRARY_SIZE) { RC4(&spc_prng, SPC_ARBITRARY_SIZE, zeros, p); l -= SPC_ARBITRARY_SIZE; p += SPC_ARBITRARY_SIZE; } if (l) RC4(&spc_prng, l, zeros, p); SPC_RC4RNG_UNLOCK( ); return buf; } Note that, although we don't show it in this code, you should ensure that the generator is initialized before giving output. Because using this RC4 API requires encrypting zero bytes to get the keystream output, in order to be able to generate data of arbitrary sizes, you must either dynamically allocate and zero out memory every time or iteratively call RC4 in chunks of up to a fixed size using a static buffer filled with zeros. We opt for the latter approach. The most common mistake made when trying to use a hash function as a cryptographic pseudo-random number generator is to continually hash a piece of data. Such an approach gives away the generator's internal state with every output. 
For example, suppose that your internal state is some value X, and you generate and output Y by hashing X. The next time you need random data, rehashing X will give the same results, and any attacker who knows the last outputs from the generator can figure out the next outputs if you generate them by hashing Y. One very safe way to use a cryptographic hash function in a cryptographic pseudo-random number generator is to use HMAC in counter mode, as discussed in Recipe 6.10. Here we implement a generator based on the HMAC-SHA1 implementation from Recipe 6.10. You should be able to adapt this code easily to any HMAC implementation you want to use. /* NOTE: This code should be augmented to reseed after each request * for pseudo-random data, as discussed in Recipe 11.6. */ #ifndef WIN32 #include <string.h> #include <pthread.h> #else #include <windows.h> #endif /* If MAC operations fail, you passed in a bad key size or you are using a hardware * API that failed. In that case, be sure to perform error checking. */ #define MAC_OUT_SZ 20 typedef struct { SPC_HMAC_CTX ctx; unsigned char ctr[MAC_OUT_SZ]; unsigned char lo[MAC_OUT_SZ]; /* Leftover block of output */ int ix; /* index into lo. 
*/ } SPC_MPRNG_CTX; #ifndef WIN32 static pthread_mutex_t spc_mprng_mutex = PTHREAD_MUTEX_INITIALIZER; #define SPC_MPRNG_LOCK( ) pthread_mutex_lock(&spc_mprng_mutex) #define SPC_MPRNG_UNLOCK( ) pthread_mutex_unlock(&spc_mprng_mutex) #else static HANDLE hSpcMPRNGMutex; #define SPC_MPRNG_LOCK( ) WaitForSingleObject(hSpcMPRNGMutex, INFINITE) #define SPC_MPRNG_UNLOCK( ) ReleaseMutex(hSpcMPRNGMutex) #endif static void spc_increment_mcounter(SPC_MPRNG_CTX *prng) { int i = MAC_OUT_SZ; while (i--) if (++prng->ctr[i]) return; } void spc_mprng_init(SPC_MPRNG_CTX *prng, unsigned char *seed, int l) { SPC_MPRNG_LOCK( ); SPC_HMAC_Init(&(prng->ctx), seed, l); memset(prng->ctr, 0, MAC_OUT_SZ); prng->ix = 0; SPC_MPRNG_UNLOCK( ); } unsigned char *spc_mprng_rand(SPC_MPRNG_CTX *prng, unsigned char *buf, size_t l) { unsigned char *p; SPC_MPRNG_LOCK( ); for (p = buf; prng->ix && l; l--) { *p++ = prng->lo[prng->ix++]; prng->ix %= MAC_OUT_SZ; } while (l >= MAC_OUT_SZ) { SPC_HMAC_Reset(&(prng->ctx)); SPC_HMAC_Update(&(prng->ctx), prng->ctr, sizeof(prng->ctr)); SPC_HMAC_Final(p, &(prng->ctx)); spc_increment_mcounter(prng); p += MAC_OUT_SZ; l -= MAC_OUT_SZ; } if (l) { SPC_HMAC_Reset(&(prng->ctx)); SPC_HMAC_Update(&(prng->ctx), prng->ctr, sizeof(prng->ctr)); SPC_HMAC_Final(prng->lo, &(prng->ctx)); spc_increment_mcounter(prng); prng->ix = l; while (l--) p[l] = prng->lo[l]; } SPC_MPRNG_UNLOCK( ); return buf; } This implementation has two publicly exported functions. The first initializes the generator: void spc_mprng_init(SPC_MPRNG_CTX *prng, unsigned char *seed, int l); This function has the following arguments: prng: Context object used to hold the state for a MAC-based PRNG. seed: Buffer containing data that should be filled with entropy (the seed). This data is used to key the MAC. l: Length of the seed buffer in bytes. 
The second function actually produces random data: unsigned char *spc_mprng_rand(SPC_MPRNG_CTX *prng, unsigned char *buf, size_t l); This function has the following arguments: prng: Context object used to hold the state for a MAC-based PRNG. buf: Buffer into which the random data will be placed. l: Number of random bytes to be placed into the output buffer. If your hash function produces n-bit outputs and has no practical weaknesses, do not use the generator after you run the MAC more than 2^(n/2) times. For example, with SHA1, this generator should not be a problem for at least 2^80 x 20 bytes. In practice, you probably will not have to worry about this issue. To bind this cryptographic pseudo-random number generator to the API in Recipe 11.2, you can use a single global generator context that you seed in spc_rand_init( ), requiring you to get a secure seed. Note that, although we don't show it in the previous code, you should ensure that the generator is initialized before giving output. See also: Recipe 5.2, Recipe 5.5, Recipe 5.9, Recipe 5.23, Recipe 6.10, Recipe 11.2, Recipe 11.6, Recipe 11.8, Recipe 11.16
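As a cross-check of the HMAC-in-counter-mode construction, here is a minimal Python sketch using only the standard library. The class name and buffering details are my own, the seed keys the MAC exactly as in spc_mprng_init, and a production version would also need the reseeding logic from Recipe 11.6:

```python
import hmac
import hashlib

class HmacCtrPrng:
    """HMAC-SHA1 run in counter mode: each 20-byte output block is
    HMAC(seed, counter), with the counter incremented per block,
    mirroring the spc_mprng_* design above."""

    def __init__(self, seed):
        self.key = seed        # the seed keys the MAC
        self.ctr = 0           # counter value that gets MACed
        self.leftover = b""    # unused bytes from the last block

    def rand(self, n):
        # Serve buffered bytes first, then generate whole blocks.
        out = bytearray(self.leftover[:n])
        self.leftover = self.leftover[len(out):]
        while len(out) < n:
            block = hmac.new(self.key, self.ctr.to_bytes(20, "big"),
                             hashlib.sha1).digest()
            self.ctr += 1
            take = min(n - len(out), len(block))
            out += block[:take]
            self.leftover = block[take:]
        return bytes(out)
```

Because the output depends only on the seed and the counter, two generators with the same seed produce the same stream regardless of how the requests are chunked, just as the C version does.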
http://etutorials.org/Programming/secure+programming/Chapter+11.+Random+Numbers/11.5+Using+an+Application-Level+Generator/