Being experienced users of other GPS loggers on the market, we realized that one of the most annoying things is forgetting to turn the logger off once you reach your destination. This wastes memory, because the device keeps saving almost the same position over and over. For this reason we decided to implement a system that stops recording when the logger is still. This feature is obtained thanks to an accelerometer which, in addition to stopping the data recording when the tracker is stationary, also allows the GPS module to be turned off to reduce power consumption, which is useful since this kind of system normally runs on batteries.

Hardware

As mentioned, the logger consists of a shield, on which sit the GPS receiver, the SD card reader and the accelerometer, plus an Arduino UNO board. The GPS receiver is connected through two connectors (each an alternative to the other), so you can actually place it off the shield. This can be useful if you plan to mount more shields above the circuit, which would inevitably disturb the reception of the satellite signals. Let's look at the circuit of the shield which, by virtue of its reduced power consumption, is powered directly from the Arduino. Our measurements, when powering it from the Arduino's 5 V rail, indicate a consumption of around 35 mA with the GPS on, reduced to about 5 mA by turning the GPS off via pin 7. On top of that there are the 40 mA of the Arduino UNO itself, for a total of 75 mA in LOG mode (i.e. during the acquisition and recording of coordinates) and 45 mA in standby mode. The main part of the shield is obviously the GlobalSat EM406 GPS module, with a built-in antenna (on top of it), which communicates with the Arduino via the NMEA 0183 protocol. Communication is serial (4800 bps) and uses TX and RX lines at TTL level. This signal is available on the serial and GPS connector through the GPSRX and GPSTX jumpers and can be routed to pins 0 and 1 (RX and TX of the hardware serial port) of the Arduino.
Alternatively you can use pins 5 and 6. In this regard it should be noted that lines 0 and 1 constitute the Arduino's physical UART: if this is not available because it is already used by other applications (for example because you mounted another shield), you can, via the SoftwareSerial library, set your board to handle lines 5 and 6 as a virtual UART. The GPS module is activated by setting pin 7 low: from that moment it will cyclically transmit NMEA strings on the serial port until it is disabled by setting the pin high again. As for the microSD slot, the shield includes a 74HC4050D level converter to allow communication over SPI at 0/3.3 V levels (SD and microSD cards use the Serial Peripheral Interface bus for external communication). The signals are connected to the ICSP header so as to allow compatibility with the Arduino MEGA. You can access a FAT16 or FAT32 formatted microSD card by using the SD Arduino library available in the development environment, taking care to call SD.begin(4) to configure the non-standard Chip Select pin. The accelerometer, dubbed ACC in the wiring diagram, is the MMA7361, a three-axis device produced by Freescale: here we use the version produced by SparkFun, already mounted on a board with a 9-pin single-in-line header at 2.54 mm pitch. This module is powered at 3.3 V by the Arduino board using the 3.3V and GND lines, which reach the VCC and GND pins of the MMA7361 respectively. The accelerometer outputs three analog signals with the acceleration detected on each axis; the signals are available at pin 4 for the X axis, 3 for the Y axis and 2 for the Z axis. Via the JREF jumper it is possible to connect the Arduino's VREF to 3.3 V, so that the 1.65 V accelerometer output that corresponds to no acceleration (0 g) roughly coincides with the value 512 at the output of the Arduino's 10-bit A/D converter, i.e. in the middle of its range (the A/D reads from 0 to 1023).
The JGSEL jumper allows you to select one of the two operating modes of the accelerometer: when connected to GND, it has a full scale of 1.5 g and a sensitivity of 800 mV/g, corresponding to an analogRead value of about 248 per g; when connected to VCC, the full scale becomes 6 g and the sensitivity 206 mV/g, corresponding to about 64 counts per g. The default configuration of our logger requires that the central jumpers of the shield, i.e. GPSRX and GPSTX, are closed on the right-hand pins (i.e. D5 and D6, respectively), while the JGSEL jumper, related to the accelerometer, is closed towards GND. In addition, the JREF jumper must be closed as well.

The firmware

For the sake of simplicity, we decided to save the NMEA strings exactly as they are received from the GPS module; this also saves space compared to the most popular formats such as GPX and KML, which can in any case be obtained from the raw data with suitable programs. SoftwareSerial handles the communication between the GPS module and the board; if the microSD card is not present, the GPS data is sent to the Arduino serial port instead, so it can be forwarded directly to a computer for debugging and advanced applications. The accelerometer is polled roughly every second: if a movement is detected (in any direction), the logger stays active or is woken up. If, on the other hand, the movement stays below a certain threshold (M_THRESH) for one minute (the time period is configurable via the STOP value, in milliseconds), the GPS is turned off. The values in the sketch have been checked empirically to provide acceptable results in typical usage scenarios. For your convenience, when we stop saving data, we write a proprietary $PXXXX string to the NMEA file for debugging purposes; this will be properly ignored by the conversion programs. The same string is used to report any SoftwareSerial overflow, which could be a sign of problems.
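To make the jumper arithmetic concrete, here is a small sketch (my own, not from the original article) that derives those per-g ADC counts from the sensitivities quoted above, assuming the 3.3 V reference selected via JREF and the 10-bit converter mentioned earlier:

```python
V_REF = 3.3        # ADC reference selected via the JREF jumper (volts)
ADC_STEPS = 1024   # 10-bit converter

def counts_per_g(sensitivity_mv_per_g):
    """ADC counts produced by 1 g of acceleration at a given sensitivity."""
    return sensitivity_mv_per_g / 1000.0 / V_REF * ADC_STEPS

print(round(counts_per_g(800)))  # JGSEL to GND: 1.5 g full scale -> 248 counts/g
print(round(counts_per_g(206)))  # JGSEL to VCC: 6 g full scale   -> 64 counts/g
```

The same arithmetic explains the value 512 for 0 g: 1.65 V out of 3.3 V is exactly half of the converter's range.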
The SD file (data.log in our example) is closed frequently, so that at any time it is possible to cut off the power, move the microSD to a card reader and retrieve the track recorded so far. With an easy customization, the logger could integrate some environmental sensors and add their readings to our travel log, so as to obtain geo-referenced data. In future posts we will discuss how to interpret the NMEA sentences on the Arduino, so that you can create more complex projects than just saving data. The complete sketch follows:

```cpp
#include <SD.h>
#include <SoftwareSerial.h>

/* ****** Settings ****** */
char log_filename[13] = "data.log"; // keep this FAT friendly
// how long should we wait when not moving before we stop logging
#define STOP 60000
/* ****** End of user settings ****** */

#define PIN_GPS_ENABLE 7
SoftwareSerial gpsSerial(5, 6);

#define PIN_SD_SS 10
File log_file;
boolean sd_available = false;

boolean moving = true;
unsigned long last_move = 0;
#define ZERO_X 512
#define ZERO_Y 512
#define ZERO_Z 512
#define M_THRESH 60000
#define PIN_X 4
#define PIN_Y 3
#define PIN_Z 2

void setup() {
  Serial.begin(9600);
  // GPS setup
  gpsSerial.begin(4800);
  pinMode(PIN_GPS_ENABLE, OUTPUT);
  digitalWrite(PIN_GPS_ENABLE, HIGH);
  // SD setup (the SS pin must be an output for the SD library to work)
  pinMode(PIN_SD_SS, OUTPUT);
  start_sd();
  start_gps();
}

void loop() {
  if (moving) {
    if (sd_available) {
      log_file = SD.open(log_filename, FILE_WRITE);
      if (!log_file) {
        sd_available = false;
      }
    }
    if (gpsSerial.overflow()) {
      if (sd_available) {
        log_file.write("\r\n$PXXXX,SoftwareSerial overflow!!!");
      } else {
        Serial.println("\r\n$PXXXX,SoftwareSerial overflow!!!");
      }
    }
    while (gpsSerial.available()) {
      int r = gpsSerial.read();
      if (sd_available) {
        log_file.write(r);
      } else {
        Serial.write(r);
      }
    }
    check_movement();
    if (sd_available) {
      log_file.close();
    }
  } else {
    delay(1000);
    check_movement();
  }
}

void start_gps() {
  gpsSerial.listen();
  digitalWrite(PIN_GPS_ENABLE, LOW);
}

void stop_gps() {
  digitalWrite(PIN_GPS_ENABLE, HIGH);
}

void start_sd() {
  if (!SD.begin(PIN_SD_SS)) {
    Serial.println("Card failed, or not present");
  } else {
    log_file = SD.open(log_filename, FILE_WRITE);
    if (log_file) {
      log_file.close();
      sd_available = true;
    } else {
      Serial.print("Can't open file <");
      Serial.print(log_filename);
      Serial.println("> for writing.");
    }
  }
}

void check_movement() {
  int x = analogRead(PIN_X) - ZERO_X;
  int y = analogRead(PIN_Y) - ZERO_Y;
  int z = analogRead(PIN_Z) - ZERO_Z;
  long acc = (long)x * x + (long)y * y + (long)z * z;
  if (acc > M_THRESH) {
    last_move = millis();
    if (!moving) {
      start_gps();
      start_sd();
      // Ignore data that has overflowed while we were idle
      gpsSerial.overflow();
    }
    moving = true;
  } else {
    if (moving) {
      unsigned long now = millis();
      // TODO: check for the overflow condition of millis()
      // (or reboot the Arduino every 49 days :) )
      if (now - last_move > STOP) {
        moving = false;
        stop_gps();
        if (sd_available) {
          log_file.write("\r\n$PXXXX,Device is idle, stop logging\r\n");
        } else {
          Serial.println("\r\n$PXXXX,Device is idle, stop logging\r\n");
        }
      }
    }
  }
}
```

Managing the NMEA files

The NMEA files saved by our logger can be read directly only by a limited number of programs. Fortunately GPSBabel, a free program, allows you to convert to and from most of the formats used by GPS systems. GPSBabel for Windows and OS X can be downloaded from the official website; Linux users can find it in the repositories of their distribution. Using it is simple: just select the input format (in our case NMEA), the name of the file to be read (data.log on the microSD), the format to be generated (e.g. GPX for most programs, or KML for Google Earth) and the name of the output file. GPSBabel offers innumerable options to fine-tune the conversion, but the defaults are generally suitable for most uses. Another useful program is GpsPrune: free and multi-platform (Java), it lets you convert and manipulate GPS tracks using the free maps of the OpenStreetMap project as a background, and it supports the NMEA format directly.
GpsPrune can be downloaded from the official website and is likewise available for most Linux distributions via their repositories. Note that to open the files with it you will need to rename them with a .nmea extension.
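For command-line use, the GPSBabel conversion described above can be done in one line each (the output file names are just examples; this assumes the gpsbabel binary is on your PATH):

```shell
# NMEA log from the microSD -> GPX track readable by most GPS software
gpsbabel -i nmea -f data.log -o gpx -F track.gpx

# ...or a KML file for Google Earth
gpsbabel -i nmea -f data.log -o kml -F track.kml
```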
http://www.open-electronics.org/an-arduino-powerer-easily-extendable-gps-datalogger/
derelict-alure 2.0.0-beta.1

A dynamic binding to the ALURE library. To use this package, run the following command in your project's root directory:

DerelictALURE

A dynamic binding to version 1.2 of the ALURE (AL Utilities REtooled) library for the D Programming Language. Please see the sections on Compiling and Linking and The Derelict Loader, in the Derelict documentation, for information on how to build DerelictALURE and load ALURE at run time. In the meantime, here's some sample code.

```d
import derelict.alure.alure;

void main() {
    // Load the ALURE library.
    DerelictALURE.load();

    // Now ALURE functions can be called.
    ...
}
```

- Registered by Mike Parker
- 2.0.0-beta.1 released 3 years ago
- DerelictOrg/DerelictALURE
- github.com/DerelictOrg/DerelictALURE
- License: Boost
- Authors: -
- Dependencies: derelict-al
- Versions: Show all 8 versions
- Download Stats: 0 downloads today, 0 downloads this week, 0 downloads this month, 1340 downloads total
- Score: 0.5
- Short URL: derelict-alure.dub.pm
https://code.dlang.org/packages/derelict-alure
It is possible to receive the following error when creating a new WebAii test in a solution that does not already have one, or when opening/modifying a solution that already has a WebAii test in it. This error is usually seen in the VS plugin, and is caused when a Windows user account that is not an administrator attempts to use Test Studio. It can occur in the VS plugin even when you are using an administrative user. This type of issue is caused by the computer not assigning to the non-admin account the rights to the namespace you are creating (within Visual Studio) for the project. These permissions are reserved for the sys-admin (the "root" user) and must be explicitly assigned if another user will be using the namespace. This information is also covered by Microsoft at the link listed in the error, but we will cover it below as well.

Windows Server 2003 users can skip to the next section (using httpcfg.exe), but Windows XP users will first need to install the necessary support tools. This is a free download from Microsoft, and a manual one (meaning it is not included in other software updates; it is "optional"). After you have the tool downloaded, run the installer for the software and proceed to Adding a Namespace Reservation:

httpcfg set urlacl /u {} /a ACL

Once we get the information from 4a and 4b we can review your Namespace Reservation string and help modify it as necessary.

Windows Vista and 7 users have it easier, as you can use the already installed netsh to add a Namespace Reservation:

netsh http add urlacl url= user=DOMAIN\user

If you go through the above netsh process and find that, after you quit and re-open Visual Studio (or restart the computer), the same error message reports a new namespace URL each time, we have found this symptom to be caused by having UAC on and active for the account. To fix this symptom, turn off UAC by doing the following. If you were already at "never notify", you will need to do the following instead. After you complete the above, you should no longer get a different namespace error each time the project is launched.
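As a concrete illustration of the netsh form shown above (the URL and account below are made-up placeholders, not values from this article; substitute the namespace URL reported in your error message and your own domain account):

```shell
# Hypothetical example: reserve http://+:8080/ for the account MYDOMAIN\tester
netsh http add urlacl url=http://+:8080/ user=MYDOMAIN\tester

# Existing reservations can be listed, and a reservation removed, with:
netsh http show urlacl
netsh http delete urlacl url=http://+:8080/
```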
http://docs.telerik.com/teststudio/user-guide/troubleshooting_guide/recording-problems/namespace-reservation.aspx
This article is more than one year old. Older articles may contain outdated content. Check that the information in the page has not become incorrect since its publication.

Kong Ingress Controller and Service Mesh: Setting up Ingress to Istio on Kubernetes

Author: Kevin Chen, Kong

Kubernetes has become the de facto way to orchestrate containers and the services that run within them. But how do we give services outside our cluster access to what is within? Kubernetes comes with the Ingress API object, which manages external access to services within a cluster. Ingress is a group of rules that will proxy inbound connections to endpoints defined by a backend. However, Kubernetes does not know what to do with Ingress resources without an Ingress controller, which is where an open source controller can come into play. In this post, we are going to use one option for this: the Kong Ingress Controller.

The Kong Ingress Controller was open-sourced a year ago and recently reached one million downloads. In the recent 0.7 release, service mesh support was also added. Other features of this release include:

- Built-in Kubernetes Admission Controller, which validates Custom Resource Definitions (CRDs) as they are created or updated and rejects any invalid configurations.
- In-memory mode - each pod's controller actively configures the Kong container in its pod, which limits the blast radius of a failure of a single Kong container or controller container to that pod only.
- Native gRPC routing - gRPC traffic can now be routed via Kong Ingress Controller natively, with support for method-based routing.

If you would like a deeper dive into Kong Ingress Controller 0.7, please check out the GitHub repository. But let's get back to the service mesh support, since that will be the main focal point of this blog post.
Service mesh allows organizations to address microservices challenges related to security, reliability, and observability by abstracting inter-service communication into a mesh layer. But what if our mesh layer sits within Kubernetes and we still need to expose certain services beyond our cluster? Then you need an Ingress controller such as the Kong Ingress Controller. In this blog post, we'll cover how to deploy the Kong Ingress Controller as your Ingress layer to an Istio mesh. Let's dive right in:

Part 0: Set up Istio on Kubernetes

This blog assumes you have Istio set up on Kubernetes. If you need to catch up to this point, please check out the Istio documentation; it will walk you through setting up Istio on Kubernetes.

1. Install the Bookinfo Application

First, we need to label the namespaces that will host our application and the Kong proxy. To label our default namespace, where the bookinfo app sits, run this command:

$ kubectl label namespace default istio-injection=enabled
namespace/default labeled

Then create a new namespace that will host our Kong gateway and the Ingress controller:

$ kubectl create namespace kong
namespace/kong created

Because Kong will be sitting outside the default namespace, be sure you also label the Kong namespace with istio-injection enabled as well:

$ kubectl label namespace kong istio-injection=enabled
namespace/kong labeled

Having both namespaces labeled istio-injection=enabled is necessary; otherwise the default configuration will not inject a sidecar container into the pods of those namespaces.
Now deploy your BookInfo application with the following command:

$ kubectl apply -f

Let's double-check our Services and Pods to make sure that we have it all set up correctly:

```
$ kubectl get services
NAME          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
details       ClusterIP   10.97.125.254    <none>        9080/TCP   29s
kubernetes    ClusterIP   10.96.0.1        <none>        443/TCP    29h
productpage   ClusterIP   10.97.62.68      <none>        9080/TCP   28s
ratings       ClusterIP   10.96.15.180     <none>        9080/TCP   28s
reviews       ClusterIP   10.104.207.136   <none>        9080/TCP   28s
```

You should see four new services: details, productpage, ratings, and reviews. None of them have an external IP, so we will use the Kong gateway to expose the necessary services. To check the pods, run the following command:

```
$ kubectl get pods
NAME                              READY   STATUS    RESTARTS   AGE
details-v1-c5b5f496d-9wm29        2/2     Running   0          101s
productpage-v1-7d6cfb7dfd-5mc96   2/2     Running   0          100s
ratings-v1-f745cf57b-hmkwf        2/2     Running   0          101s
reviews-v1-85c474d9b8-kqcpt       2/2     Running   0          101s
reviews-v2-ccffdd984-9jnsj        2/2     Running   0          101s
reviews-v3-98dc67b68-nzw97        2/2     Running   0          101s
```

This command outputs useful data, so let's take a second to understand it. If you examine the READY column, each pod has two containers running: the service and an Envoy sidecar injected alongside it. Another thing to highlight is that there are three reviews pods but only one reviews service. The Envoy sidecar will load balance the traffic to the three different reviews pods, which contain different versions, giving us the ability to A/B test our changes.

We have one more step before we can access the deployed application: we need to add an additional annotation to the productpage service. To do so, run:

$ kubectl annotate service productpage ingress.kubernetes.io/service-upstream=true
service/productpage annotated

Both the API gateway (Kong) and the service mesh (Istio) can handle the load-balancing.
Without the additional ingress.kubernetes.io/service-upstream: "true" annotation, Kong would try to load-balance by selecting its own endpoint/target from the productpage service. This would cause Envoy to receive that pod's IP as the upstream local address, instead of the service's cluster IP. But we want the service's cluster IP so that Envoy can properly load balance.

With that added, you should now be able to access your product page!

```
$ kubectl exec -it $(kubectl get pod -l app=ratings -o jsonpath='{.items[0].metadata.name}') -c ratings -- curl productpage:9080/productpage | grep -o "<title>.*</title>"
<title>Simple Bookstore App</title>
```

2. Kong Kubernetes Ingress Controller Without Database

To expose your services to the world, we will deploy Kong as the north-south traffic gateway. Kong 1.1 was released with declarative configuration and DB-less mode. Declarative configuration allows you to specify the desired system state through a YAML or JSON file instead of a sequence of API calls. Using declarative config provides several key benefits: reduced complexity, increased automation and enhanced system performance. And with the Kong Ingress Controller, any Ingress rules you apply to the cluster will automatically be configured on the Kong proxy.

Let's set up the Kong Ingress Controller and the actual Kong proxy first, like this:

$ kubectl apply -f
namespace/kong configured

To check if the Kong pod is up and running, run:

```
$ kubectl get pods -n kong
NAME                               READY   STATUS    RESTARTS   AGE
pod/ingress-kong-8b44c9856-9s42v   3/3     Running   0          2m26s
```

There will be three containers within this pod. The first container is the Kong gateway, which will be the Ingress point to your cluster. The second container is the Ingress controller; it watches Ingress resources and updates the proxy to follow the rules defined in them. And lastly, the third container is the Envoy proxy injected by Istio. Kong will route traffic through the Envoy sidecar proxy to the appropriate service.
To send requests into the cluster via our newly deployed Kong gateway, set up an environment variable with a URL based on the IP address at which Kong is accessible:

$ export PROXY_URL="$(minikube service -n kong kong-proxy --url | head -1)"
$ echo $PROXY_URL

Next, we need to change some configuration so that the sidecar Envoy process can route the request correctly based on the host/authority header of the request. Run the following to stop the route from preserving the host:

```
$ echo "
apiVersion: configuration.konghq.com/v1
kind: KongIngress
metadata:
  name: do-not-preserve-host
route:
  preserve_host: false
upstream:
  host_header: productpage.default.svc
" | kubectl apply -f -
kongingress.configuration.konghq.com/do-not-preserve-host created
```

And annotate the existing productpage service to set service-upstream to true:

$ kubectl annotate svc productpage ingress.kubernetes.io/service-upstream="true"
service/productpage annotated

Now that we have everything set up, we can look at how to use the Ingress resource to route external traffic to the services within your Istio mesh. We'll create an Ingress rule that routes all traffic with the path / to our productpage service:

```
$ echo "
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: productpage
  annotations:
    configuration.konghq.com: do-not-preserve-host
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: productpage
          servicePort: 9080
" | kubectl apply -f -
ingress.extensions/productpage created
```

And just like that, the Kong Ingress Controller understands the rules you defined in the Ingress resource and routes traffic to the productpage service! To view the product page service's GUI, go to $PROXY_URL/productpage in your browser. Or to test it from the command line, try:

$ curl $PROXY_URL/productpage

That is all I have for this walk-through. If you enjoyed the technologies used in this post, please check out their repositories, since they are all open source and would love to have more contributors!
Here are their links for your convenience: - Kong: [GitHub] [Twitter] - Kubernetes: [GitHub] [Twitter] - Istio: [GitHub] [Twitter] - Envoy: [GitHub] [Twitter] Thank you for following along!
https://kubernetes.io/blog/2020/03/18/kong-ingress-controller-and-istio-service-mesh/
I am currently trying to take ten different text files (file2_0.txt, file2_1.txt, file2_2.txt, …), each containing one column and one hundred million rows of random integers, and add the text files row by row. I want to add every row from all ten files together and generate a new text file (total_file.txt) with the sum of each row. Below is an example of what I am trying to do using two of the files added together to create total_file.txt.

file2_0.txt
5
19
51
10
756

file2_1.txt
11
43
845
43
156

total_file.txt
16
62
896
53
912

Since these files are rather large I am not trying to read them into memory, and instead want to use concurrency. I found sample code from another Stack Overflow question (Python: Sum of numbers in different files) that I was trying before doing all of the files at one time. The problem I am having is that the output (total_file.txt) only contains the numbers from the second text file (file2_1.txt) with nothing added. I am not sure why this is. I am new to Stack Overflow and coding in general and wanted to ask about this on the linked post; however, I read online that is not good practice. Below is the code I worked on.

```python
import shutil

# Files to add
filenames = ['file2_0.txt', 'file2_1.txt']

sums = []
with open('file2_0.txt') as file:
    for row in file:
        sums.append(row.split())

# Create output file
with open('total_file.txt', 'wb') as wfd:
    for file in filenames:
        with open(file) as open_file:
            for i, row in enumerate(open_file):
                sums[i] = sums[i] + row.split()
        with open(file, 'rb') as fd:
            shutil.copyfileobj(fd, wfd)
```

Just for background, I am working with these large files to test processing speeds. Once I get an understanding of what I am doing wrong I will be working on parallel processing, specifically multithreading, to test the various process speeds. Please let me know what further information you might need from me.
Answer

I'd use generators so you don't have to load all of the files into memory at once (in case they're large). Then just pull the next value from each generator, sum them, write the sum and carry on. When you hit the end of a file you'll get a StopIteration exception and be done.

```python
def read_file(file):
    with open(file, "r") as inFile:
        for row in inFile:
            yield row

file_list = ["file1.txt", "file2.txt", ..., "file10.txt"]
file_generators = [read_file(path) for path in file_list]

with open("totals.txt", "w+") as outFile:
    while True:
        try:
            outFile.write(f"{sum([int(next(gen)) for gen in file_generators])}\n")
        except StopIteration:
            break
```
https://www.tutorialguruji.com/python/how-can-i-sum-integers-from-multiple-text-files-into-a-new-text-file-using-python/
I happen to find that torch.jit.trace() will call forward() three times. Run the code below:

```python
import torch

class MyCell(torch.nn.Module):
    def __init__(self):
        super(MyCell, self).__init__()
        self.linear = torch.nn.Linear(4, 4)

    def forward(self, x, h):
        print("execute forward")
        new_h = torch.tanh(self.linear(x) + h)
        return new_h, new_h

my_cell = MyCell()
x = torch.rand(3, 4)
h = torch.rand(3, 4)
traced_cell = torch.jit.trace(my_cell, (x, h))
```

The output is:

execute forward
execute forward
execute forward

I have also tested other models; they all call forward() three times. So why does torch.jit.trace() need to call forward() three times, and what is the purpose of each call? Thanks!
https://discuss.pytorch.org/t/torch-jit-trace-call-forward-three-times/146145
Author: Jeffrey Sambells with Aaron Gustafson
Publisher: Friends of ED, 2007
Pages: 570
ISBN: 978-1590598566
Aimed at: Web developers
Rating: 5
Pros: Detailed, standards-based approach
Cons: Not a lot
Reviewed by: Dave Wheeler

I liked this book. A lot. Writing good client-side JavaScript is non-trivial, and Sambells takes the trouble to explain how to do it in detail. The book starts quickly, leaping in to subjects such as closures, namespaces and creating your own object types. The concept of script libraries, and thus how you should structure code, is presented early on and reinforced thereafter. Throughout the book you get the feeling that you are building on what has been presented before.

As you would expect, topics such as event handling and DOM programming are examined in detail. And although this is not a book on AJAX, the relatively short chapter on AJAX covers the topic thoroughly and in detail.

The book is technically very detailed, although comfortingly the explanations are clear and well written. Non-programmers might struggle a little, but anyone with basic coding skills should be able to follow along easily enough.

The book offers sensible and practical guidance on how and where scripting can add real value to a Web application. I particularly enjoyed the balance offered by the explanatory case studies to help alleviate what might otherwise have been very dry material; they added context to the code. Throughout, the book takes a no-fuss approach to writing browser-neutral script and to handling the no-script scenario gracefully.

The one nagging question in my mind is whether you actually need this book in the face of the many abstraction libraries that exist today. For example, many ASP.NET developers will be happy utilising pre-canned scripting provided by features such as the validation controls, or with using the UpdatePanel to add AJAX functionality to their Web sites.
However, if you want to really know how to crank out client-side script, then I would highly recommend this book.
http://i-programmer.info/bookreviews/12-web-design-and-development-/125-advanced-dom-scripting.html
Welcome to my Proxy Design Pattern tutorial. A proxy is a gatekeeper that blocks access to another Object. I demonstrate how the proxy pattern works using some code from my State Design Pattern tutorial, so you may want to check that tutorial out before proceeding. If you like videos like this, it helps to tell Google by clicking here. Share it if you like.

Code from the Video

ATMMachine.java

```java
// Only the parts added for this tutorial are shown here; the rest of
// the class comes from the State Design Pattern tutorial.
public class ATMMachine implements GetATMData {

    // ...

    // NEW STUFF
    public ATMState getATMState() {
        return atmState;
    }

    public int getCashInMachine() {
        return cashInMachine;
    }
}
```

GetATMData.java

```java
// This interface will contain just those methods
// that you want the proxy to provide access to
public interface GetATMData {
    public ATMState getATMState();
    public int getCashInMachine();
}
```

ATMProxy.java

```java
// In this situation the proxy both creates and destroys
// an ATMMachine Object
public class ATMProxy implements GetATMData {

    // Allows the user to access getATMState in the
    // Object ATMMachine
    public ATMState getATMState() {
        ATMMachine realATMMachine = new ATMMachine();
        return realATMMachine.getATMState();
    }

    // Allows the user to access getCashInMachine
    // in the Object ATMMachine
    public int getCashInMachine() {
        ATMMachine realATMMachine = new ATMMachine();
        return realATMMachine.getCashInMachine();
    }
}
```

The relevant fragment of the test code:

```java
// NEW STUFF : Proxy Design Pattern Code
// The interface limits access to just the methods you want
// made accessible
GetATMData realATMMachine = new ATMMachine();
GetATMData atmProxy = new ATMProxy();

System.out.println("\nCurrent ATM State " + atmProxy.getATMState());
System.out.println("\nCash in ATM Machine $" + atmProxy.getCashInMachine());

// The user can't perform this action because ATMProxy doesn't
// have access to that potentially harmful method
// atmProxy.setCashInMachine(10000);
```

ATMState.java

```java
public interface ATMState {
    void insertCard();
    void ejectCard();
    void insertPin(int pinEntered);
    void requestCash(int cashToWithdraw);
}
```

The best place on the internet that I found to teach you Design Patterns. Thank you very much!
You’re very welcome 🙂 I thought this topic needed some attention Hi, Thanks for providing the best ever tutorial on design patterns. Do you have something in Java concurrency also? Regards. ashg You’re very welcome 🙂 Sorry, but I haven’t covered concurrency yet, but I’ll definitely cover it as soon as possible Excellent site to learn Design Pattern.THanks a ton for your effort. Request you to kindly add some tutorials for Java Concurrency. Thanks Thank you 🙂 I will definitely cover that topic. I won’t stop making java tutorials until I cover everything Thanks for ur reply. these video lectures helped me a lot. thanx sir 🙂 You are very welcome 🙂 Nice tutorials, but you forget to include “ATMState.java” code. Thank you 🙂 I updated the page and added the code. thank you for telling me about that Hi Derek, The code you provided here is incomplete. There are a lot of staff missing here, including some classes from your “State Design Pattern” code, where some classes should be refactored to suit “Proxy Design Pattern” needs. Thanks Just update, classes from “state dp” should be included, but not refactored. It’s “ATMMachine.java” class is incomplete Hello sir, I have seen almost all of your videos about design patterns. This help me a lot in polishing my java skills. And yes Of’cousre You have the Most innovative method of teaching. God Bless You. Keep Rocking !! Thank you very much 🙂 I did my best to teach in a new way and I’m glad that it seems to help some people. May God bless you as well. Your videos are simply awesome. Keep it up Thank you very much 🙂 Very well explained. Is there any way we can get access to the slide and UML diagram shown at the beginning of the video? This goes for all videos on design patterns. Thank you 🙂 I have every presentation available as a big PDF. It is a bit messy because I never meant to make it available to the public, but here is the link Thank you so much for sharing the link. Really appreciate it. 
You're very welcome 🙂

Hi Derek, I have a confusion. Don't the methods in the proxy return the initial state of a fresh object every time, since a new object is allocated there and returned? Regards, Abhinav

Hi Abhinav, yes, you are correct. I dramatically simplified the ATM part here. I'm assuming that it would actually be connected to a database and that the ATM object would just provide already created and stored data. I should have made it clear in the tutorial that I wasn't fully developing the ATM because I thought that would distract from the pattern. Sorry about that.

Awesome example and great explanation. Hats off to you for such nice tutorials on design patterns. Thank you so much. Please keep up the good work. God bless you.

Thank you very much 🙂 May God bless you and your loved ones as well.
https://www.newthinktank.com/2012/10/proxy-design-pattern-tutorial/
Package:
Severity: wishlist

[ full quote of the -project post at for the bug report records ]

On Wed, Jun 27, 2012 at 02:22:00AM +0100, Stuart Prescott wrote:
> [...]

I wouldn't worry too much about "underwhelming" feedback; maybe people were just busy, not around, or, legitimately, have different views on how to implement this. But this is also why it is important to document the state of this in a more stable place, and why I'm now submitting a bug.

I think we should have an official place that both documents what the *.debian.net namespace is for (from the user POV) and provides an automatically generated list of entries, together with the respective contact points. AFAICT your implementation does exactly that: many thanks for that!

Stuart, can you please follow up to the bug log, attaching your code as a patch?

About where to host the index, I think it belongs to some page and the -www team is best suited to suggest where. I'm not sure how they handle pages that should be periodically re-generated like this one, but I suspect they have a way to do so. Any hint?

Then, I've suggested in the past to have host such a page, instead of the current redirection to. I'm not sure what the DSA/-www preferences on this matter are, hence my Cc. Either way, having the page somewhere under would be a first step. If we also want to host it at as I suggest, we can simply make that a redirect to the specific
https://lists.debian.org/debian-project/2012/07/msg00004.html
SYNOPSIS

#include <tracefs.h>

int tracefs_synth_create(struct tracefs_synth *synth);
int tracefs_synth_destroy(struct tracefs_synth *synth);
bool tracefs_synth_complete(struct tracefs_synth *synth);
int tracefs_synth_trace(struct tracefs_synth *synth, enum tracefs_synth_handler type, const char *var);
int tracefs_synth_snapshot(struct tracefs_synth *synth, enum tracefs_synth_handler type, const char *var);
int tracefs_synth_save(struct tracefs_synth *synth, enum tracefs_synth_handler type, const char *var, char **save_fields);

DESCRIPTION

tracefs_synth_create() creates the synthetic event in the system. Synthetic events apply across all instances. A synthetic event must be allocated with tracefs_synth_alloc(3) before it can be created.

tracefs_synth_destroy() destroys the synthetic event. It will attempt to stop it running in its instance (top by default), but if it is running in another instance this may fail as busy.

tracefs_synth_complete() returns true if the synthetic event synth has both a starting and an ending event.

tracefs_synth_trace() does not just trace on a match of the start and end events; it applies the given type handler: TRACEFS_SYNTH_HANDLE_MAX traces when the given variable var hits a new max for the matching keys, and TRACEFS_SYNTH_HANDLE_CHANGE traces when var changes. var must be one of the name elements used in tracefs_synth_add_end_field(3).

tracefs_synth_snapshot() takes a "snapshot" of the buffer when the given variable var either hits a new max (handler TRACEFS_SYNTH_HANDLE_MAX) or simply changes (TRACEFS_SYNTH_HANDLE_CHANGE). The snapshot moves the normal "trace" buffer into a "snapshot" buffer, which can be accessed via the "snapshot" file in the top-level tracefs directory or in one of the instances. var must be one of the name elements used in tracefs_synth_add_end_field(3).

tracefs_synth_save() saves the given save_fields list when the given variable var either hits a new max (handler TRACEFS_SYNTH_HANDLE_MAX) or simply changes (TRACEFS_SYNTH_HANDLE_CHANGE). The fields are stored in the histogram "hist" file of the event, which can be retrieved with tracefs_event_file_read(3). var must be one of the name elements used in tracefs_synth_add_end_field(3).

RETURN VALUE

All these functions return zero on success or -1 on error.
https://trace-cmd.org/Documentation/libtracefs/libtracefs-synth2.html
An Introduction to Functional Programming in Java 8 (Part 3): Streams

Streams are an important functional approach that can impact performance via parallelism, augment and convert data structures, and add new tools to your kit.

In the last part, we learned about the Optional type and how to use it correctly. Today, we will learn about Streams, which you use as a functional alternative to working with Collections. Some methods were already covered when we used Optionals, so be sure to check out the part about Optionals.

Where Do We Use Streams?

You might ask what's wrong with the current way of storing multiple objects. Why shouldn't you use Lists, Sets, and so on anymore? I want to point out: nothing is wrong with them. But when you want to work functionally (which you hopefully want after the last parts of this blog), you should consider using Streams. The standard workflow is to convert your data structure into a Stream, work on it in a functional manner, and, in the end, transform it back into the data structure of your choice. And that's the reason we will learn to transform the most common data structures into streams.

Why Do We Use Streams?

Streams are a wonderful new way to work with data collections. They were introduced in Java 8. One of the many reasons you should use them is the cascade pattern that Streams use. This basically means that almost every Stream method returns the Stream again, so you can continue to work with it. In the next sections, you will see how this works and that it makes the code nicer. Streams are also immutable, so every time you manipulate one, you create a new Stream. Another nice thing about them is that they respect the properties of FP (functional programming).
If you convert a data structure into a Stream and work on it, the original data structure won't be changed. So no side effects here!

How to Convert Data Structures Into Streams

Convert Multiple Objects Into a Stream

If you want to make a Stream out of some objects, you can use the method Stream.of():

public void convertObjects() {
    Stream<String> objectStream = Stream.of("Hello", "World");
}

Converting Collections (Lists, Sets, etc.) and Arrays

Luckily, Oracle has thought through the implementation of Streams in Java 8. Every class that implements java.util.Collection<T> has a new method called stream(), which converts the collection into a Stream. Arrays can also be converted easily with Arrays.stream(array). It's as easy as it gets.

public void convertStuff() {
    String[] array = {"apple", "banana"};
    Set<String> emptySet = new HashSet<>();
    List<Integer> emptyList = new LinkedList<>();

    Stream<String> arrayStream = Arrays.stream(array);
    Stream<String> setStream = emptySet.stream();
    Stream<Integer> listStream = emptyList.stream();
}

However, normally you won't store a Stream in an object. You just work with Streams and convert them back into your desired data structure.

Working With Streams

As I already said, Streams are the functional way to work with data structures. Now we will learn about the most common methods to use. As a side note: in the next sections, I will use T as the type of the objects in the Stream.

Methods We Already Know

You can also use some methods with Streams that we already heard about when we learned about Optionals.

Map

This is pretty straightforward. Instead of manipulating one item, which might be in the Optional, we manipulate all items in a stream. So if you have a function that squares a number, you can use map to apply this function over multiple numbers without writing a new function for lists.
public void showMap() { Stream.of(1, 2, 3) .map(num -> num * num) .forEach(System.out::println); // 1 4 9 } flatMap Like with Optionals, we use flatMap to go, for example, from a Stream<List<Integer>> to Stream<Integer>. If you want to know more about them, look into part two. Here, we want to concat multiple Lists into one big List. public void showFlatMapLists() { List<Integer> numbers1 = Arrays.asList(1, 2, 3); List<Integer> numbers2 = Arrays.asList(4, 5, 6); Stream.of(numbers1, numbers2) //Stream<List<Integer>> .flatMap(List::stream) //Stream<Integer> .forEach(System.out::println); // 1 2 3 4 5 6 } And in these examples, we already saw another Stream method, forEach(), which I will describe now. Common Stream Methods forEach The forEach method is like the ifPresent method from Optionals, so you use it when you have side effects. As already shown, you use it to, for example, print all objects in a stream. forEach is one of the few Stream methods that doesn’t return the Stream, so you use it as the last method of a Stream and only once. You should be careful when using forEach because it causes side effects, which we don’t want to have. So think twice if you could replace it with another method without side effects. public void showForEach() { Stream.of(0, 1, 2, 3) .forEach(System.out::println); // 0 1 2 3 } Filter Filter is a really basic method. It takes a ‘test’ function that takes a value and returns boolean. So it tests every object in the Stream. If it passes the test, it will stay in the Stream. Otherwise, it will be taken out. This ‘test’ function has the type Function<T, Boolean>. In the JavaDoc, you will see that the test function really is of the type Predicate<T>. But this is just a short form for every function that takes one parameter and returns a boolean. 
public void showFilter() {
    Stream.of(0, 1, 2, 3)
          .filter(num -> num < 2)
          .forEach(System.out::println); // 0 1
}

Functions that can make your life way easier when creating 'test' functions are Predicate.negate() and Objects.nonNull(). The first one basically negates the test: every object that doesn't pass the original test will pass the negated test, and vice versa. The second one can be used as a method reference to get rid of every null object in the Stream. This helps you prevent NullPointerExceptions when, for example, mapping functions.

public void negateFilter() {
    Predicate<Integer> small = num -> num < 2;

    Stream.of(0, 1, 2, 3)
          .filter(small.negate()) // Now every big number passes
          .forEach(System.out::println); // 2 3
}

public void filterNull() {
    Stream.of(0, 1, null, 3)
          .filter(Objects::nonNull)
          .map(num -> num * 2) // without the filter, you would've gotten a NullPointerException
          .forEach(System.out::println); // 0 2 6
}

Collect

As I already said, you usually want to transform your stream back into another data structure, and that is what you use collect for. Most of the time, you convert it into a List or a Set.

public void showCollect() {
    List<Integer> filtered = Stream.of(0, 1, 2, 3)
                                   .filter(num -> num < 2)
                                   .collect(Collectors.toList());
}

But you can use collect for much more. For example, you can join Strings. That way, you don't get the nasty delimiter at the end of the string.

public void showJoining() {
    String sentence = Stream.of("Who", "are", "you?")
                            .collect(Collectors.joining(" "));
    System.out.println(sentence); // Who are you?
}

Shortcuts

These are methods that you could mostly emulate by using map, filter, and collect, but these shortcut methods are meant to be used because they declutter your code.

Reduce

Reduce is a very cool function. It takes a start parameter of type T and a function of type BiFunction<T, T, T>. For a BiFunction where all types are the same, BinaryOperator<T> is a shortcut.
It basically combines all objects in the stream into one object. You can concat all Strings into one String, sum all numbers, and so on. There, your start parameter would be the empty String or zero. This function helps make your code more readable if you know how to use it.

public void showReduceSum() {
    Integer sum = Stream.of(1, 2, 3)
                        .reduce(0, Integer::sum);
    System.out.println(sum); // 6
}

Now I will give a little bit more information on how reduce works. Here, it sums the start parameter with the first number, then that result with the second number, and then that result with the third. As you can see, this produces a long chain of function applications: in the end, we have sum(sum(sum(0, 1), 2), 3), which a sequential stream evaluates from the inside out. This is also the reason we need a start parameter: otherwise, the chain wouldn't have a point where it could start.

Sorted

You can also use Streams to sort your data structures. The class type of the objects in the Stream doesn't even have to implement Comparable<T>, because you can write your own Comparator<T>. This is basically a BiFunction<T, T, Integer>, but Comparator is a shortcut for all functions that take two arguments of the same type and return an int. And this int, like in the compareTo() function, shows us whether the first object is "smaller" than the second one (int < 0), as big as the second (int == 0), or bigger than the second one (int > 0). The sorted function of the Stream will interpret these ints and sort the elements with their help.

public void showSort() {
    Stream.of(3, 2, 4, 0)
          .sorted((c1, c2) -> c1 - c2)
          .forEach(System.out::println); // 0 2 3 4
}

(Note that c1 - c2 can overflow for values of very large magnitude; Integer::compare is the safer choice in general.)

Other Kinds of Streams

There are also special types of Streams that only contain numbers. With these new Streams, you also get a new set of methods. Here, I will introduce IntStream and sum, but there are also LongStream, DoubleStream, etc. You can read more about them in the JavaDoc.
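As a quick aside (my sketch, not part of the article): beyond sum, the primitive streams also offer one-shot summary statistics, which is worth knowing before we even get to conversion.

```java
import java.util.IntSummaryStatistics;
import java.util.stream.IntStream;

class StatsDemo {
    // One pass over the primitive stream yields count, sum, min, max, and average
    static IntSummaryStatistics stats() {
        return IntStream.of(0, 1, 2, 3).summaryStatistics();
    }
}
```

Note that summaryStatistics() exists only on IntStream, LongStream, and DoubleStream, not on Stream<T>.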
To convert a normal Stream into an IntStream, you have to use mapToInt. It does exactly the same as the normal map, but you get an IntStream back. Of course, you have to give the mapToInt function another function that returns an int. In the example, I will show you how to sum numbers without reduce, but with an IntStream.

public void sumWithIntStream() {
    Integer sum = Stream.of(0, 1, 2, 3)
                        .mapToInt(num -> num)
                        .sum();
}

Use Streams for Tests

Streams can also be used to test your methods. I will use the method anyMatch here, but count, max, and so on can help you too. If something goes wrong in your program, use peek to log data. It's like forEach, but it also returns the stream. As always, look into the JavaDoc to find other cool methods.

anyMatch is a little bit like filter, but it tells you whether anything passes the filter. You can use this in assertTrue() tests, where you just want to check if at least one object has a specific property. In the next example, I will test whether a specific name was stored in the DB.

@Test
public void testIfNameIsStored() {
    String testName = "Albert Einstein";
    Database db = new Database();
    db.drop();
    db.put(testName);

    assertTrue(db.getData()
                 .stream()
                 .anyMatch(name -> name.equals(testName)));
}

Shortcuts of Shortcuts

Now that I've shown you some shortcut methods, I want to tell you that there are many more. There are even shortcuts of shortcuts! One example would be forEachOrdered, which is like forEach but respects the stream's encounter order (which matters for parallel streams). If you are interested in other helpful methods, look into the JavaDoc. I'm sure you are prepared to understand it and find the methods that you need. Always remember: if your code looks ugly, there's probably a better method to use.

A Bigger Example

In this example, we want to send a message to every user whose birthday is today.

The User Class

A user is defined by their username and birthday. The birthdays will be in the format "day.month.year", but we won't do much checking of this in today's example.
public class User {
    private String username;
    private String birthday;

    public User(String username, String birthday) {
        this.username = username;
        this.birthday = birthday;
    }

    public String getUsername() { return username; }

    public String getBirthday() { return birthday; }
}

To store all users, we will use a List here. In a real program, you might want to switch to a DB.

public class MainClass {
    public static void main(String[] args) {
        List<User> users = new LinkedList<>();
        User birthdayChild = new User("peter", "20.02.1990");
        User otherUser = new User("kid", "23.02.2008");
        User birthdayChild2 = new User("bruce", "20.02.1980");
        users.addAll(Arrays.asList(birthdayChild, otherUser, birthdayChild2));

        greetAllBirthdayChildren(users);
    }

    private static void greetAllBirthdayChildren(List<User> users) {
        // Next Section
    }
}

The Greeting

Now, we want to greet the birthday boys and girls. So first off, we have to filter out all users whose birthday is today. After this, we have to message them. So let's do this. I won't implement sendMessage(String message, User receiver) here; it just sends a message to a given user.

private static void greetAllBirthdayChildren(List<User> users) {
    String today = "20.02"; // Just to make the example easier. In production, you would use LocalDateTime or so.

    users.stream()
         .filter(user -> user.getBirthday().startsWith(today))
         .forEach(user -> sendMessage("Happy birthday, ".concat(user.getUsername()).concat("!"), user));
}

private static void sendMessage(String message, User receiver) {
    //...
}

And now we can send greetings to the users. How nice and easy was that?!

Parallelism

Streams can also be executed in parallel. By default, a Stream isn't parallel, but you can call parallelStream() on a collection (or parallel() on an existing Stream) to make it parallel. Although it can be cool to use this to make your program faster, you should be careful with it. As shown on this site, things like sorting can be messed up by parallelism.
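To make the ordering caveat concrete, here is a small sketch of mine (not from the article): collect preserves the stream's encounter order even when the work happens in parallel, whereas forEach on a parallel stream may visit elements in any order (forEachOrdered restores it).

```java
import java.util.List;
import java.util.stream.Collectors;

class ParallelOrder {
    // The mapping may run on many threads, but collect reassembles
    // the results in the original encounter order.
    static List<Integer> squaresInOrder(List<Integer> in) {
        return in.parallelStream()
                 .map(n -> n * n)
                 .collect(Collectors.toList());
    }
}
```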
So be prepared to run into nasty bugs with parallel Streams, although they can make your program significantly faster.

Conclusion

That's it for today! We have learned a lot about Streams in Java. We learned how to convert a data structure into a Stream, how to work with a Stream, and how to convert it back into a data structure. I have introduced the most common methods and when you should use them. In the end, we tested our knowledge with a bigger example where we greeted all birthday children. In the next part of this series, we will have a big example where we are going to use Streams. But I won't tell you the example yet, so hopefully, you'll be surprised. Are there any Stream methods that you miss in the post? Please let me know in the comments. I'd love to hear your feedback, too!
https://dzone.com/articles/an-introduction-to-functional-programming-in-java?fromrel=true
Data type to manage parsing of patches.

#include <svn_diff.h>

API users should not allocate structures of this type directly. Definition at line 1219 of file svn_diff.h.

An array containing an svn_diff_hunk_t * for each hunk parsed from the patch. Definition at line 1230 of file svn_diff.h.

Mergeinfo parsed from svn:mergeinfo diff data, with one entry for forward merges and one for reverse merges. Either entry can be NULL if no such merges are part of the diff. Definition at line 1250 of file svn_diff.h.

The old and new file names as retrieved from the patch file. These paths are UTF-8 encoded and canonicalized, but otherwise left unchanged from how they appeared in the patch file. Definition at line 1224 of file svn_diff.h.

Represents the operation performed on the file. Definition at line 1239 of file svn_diff.h.

A hash table keyed by property names containing an svn_prop_patch_t object for each property parsed from the patch. Definition at line 1235 of file svn_diff.h.

Indicates whether the patch is being interpreted in reverse. Definition at line 1243 of file svn_diff.h.
https://subversion.apache.org/docs/api/latest/structsvn__patch__t.html
On 2009-04-15 19:59, P.J. Eby wrote: > At 06:15 PM 4/15/2009 +0200, M.-A. Lemburg wrote: >> The much more common use case is that of wanting to have a base package >> installation which optional add-ons that live in the same logical >> package namespace. > > Please see the large number of Zope and PEAK distributions on PyPI as > minimal examples that disprove this being the common use case. I expect > you will find a fair number of others, as well. > > In these cases, there is NO "base package"... the entire point of using > namespace packages for these distributions is that a "base package" is > neither necessary nor desirable. > > In other words, the "base package" scenario is the exception these days, > not the rule. I actually know specifically of only one other such > package besides your mx.* case, the logilab ll.* package. So now you're arguing against having base packages... at least you've dropped the strange idea of using Linux distribution maintainers as central use case ;-) Think of base namespace packages (the ones providing the __init__.py file) as defining the namespace. They setup ownership and the basic infrastructure needed by add-ons. If you take Zope as example, the Products/ package dir is a good example: the __init__.py file in that directory is provided by the Zope installation (generated during Zope instance creation), so Zope "owns" the package. With the proposal, Zope could declare this package dir a namespace base package by adding a __pkg__.py file to it. Zope add-ons could then be installed somewhere else on sys.path and include a Products/ dir as well, only this time it doesn't have the __init__.py file, but only a __pkg__.py file. Python would then take care of integrating the add-on Products/ dir Python module/package contents with the base package. -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, Apr 15
http://mail.python.org/pipermail/python-list/2009-April/533200.html
Hi, how do I call a Meteor method on the server side? I am using mdg:validated-method to define my Meteor methods. How do I call a validated Meteor method on the server side? Thanks,

Scroll down the page and you'll see it talking about methods on the client and on the server. You call them exactly the same way as you do on the client. However, it's good practice to only use methods for client->server requests, and to extract the shared part to another function or module that can be called directly on the server.

Meteor Tuts explains it better than me:

So Meteor.call is not good practice?

It's good for use on the client, but not best practice on the server. Think of a traditional REST app: would you get the server to make HTTP requests to itself in order to call a method? Or would you just import and call the function?

Yes, my question is how to bypass the client call and directly execute the server code?

Define the function:

export function doTheThing(args) {
  // ...
}

Call the function:

import { doTheThing } from '/imports/api/thingDoer';

doTheThing(/* some args */);

Use the function in a method:

import { ValidatedMethod } from 'meteor/mdg:validated-method';
import { doTheThing } from '/imports/api/thingDoer';

export const doThingMethod = new ValidatedMethod({
  name: 'do.the.thing',
  validate(args) {
    new SimpleSchema({ /* some schema */ }).validate(args);
  },
  run(args) {
    return doTheThing(args);
  },
});

I had the same troubles when I started. Thanks @coagmano for the great explanation. I think a lot of beginners in Meteor have a hard time understanding that the server side is just a simple app and we can do the same things we would do with a simple Express app. Maybe we should also consider improving the documentation about this, to demystify the "magic".

True.
When I tried Meteor for the first time I fell in love with it. I thought you could prototype an app really fast with it, which I still think is true. Also, I don't see the point of moving a Meteor prototype to another framework. The only thing is that Meteor beginners fall into many common pitfalls because Meteor and its concepts are unique. Docs should be improved. Luckily the forum helps, but docs should be improved. Also, I don't see the point of using Meteor with Angular, React, or Vue. I like Blaze. I use Angular and React at work, but for my personal side project I chose Meteor.

Yes, I think we should all play a role. I will see how I can do my part on the documentation, in line with the new roadmap.

> Also, I don't see the point of using Meteor with Angular, React, or Vue. I like Blaze. I use Angular and React at work, but for my personal side project I chose Meteor.

Agree to disagree on this part. I am a big React fan and I would never go back. Yes, Blaze is great. But React has so many libraries, ready-made components, etc., and is continuously improving. Also, using a popular library like React makes it easier for people to join the Meteor community. They already know React; they just have to start the Meteor app and they have a ready-to-use backend and database that they can combine with their React knowledge. I think that is very important.
https://forums.meteor.com/t/solved-how-do-i-call-a-meteor-method-on-the-server-side/51638
Last time I gave a list of five commonly-used monadic types [1. These five types are the ones that immediately come to my mind; I am probably missing some. If you have an example of a commonly-used C# type that is monadic in nature, please leave a comment]:

Nullable<T> represents a T that could be null [2. As I've discussed before, null in a value type is typically interpreted as "the thing has a value but I don't know what it is". That is, there is a decimal that is the net profits for December, I just don't know what that decimal is right now, so I'll say "null". It can also be interpreted as "the thing doesn't even have a value". It's not that we don't know the height of the king of France right now, it's that there is no king of France in the first place, so the height of the king of France is null. The exact semantics are not particularly relevant to our discussion of monadic types, however.]

So, what do these types have in common? The most obvious thing is that they are generic types with exactly one type parameter. Moreover, these types are embarrassingly generic. With the exception of Nullable<T>, all of these types work equally well with any T whatsoever; they are totally "agnostic" as to the semantics of their underlying type. And even Nullable<T> is only restricted to non-nullable value types [3. Nullable<T> could have been implemented to work on any type, and reference types would then be non-nullable by default. We could have a type system where Nullable<string> was the only legal way to represent "a string that can be null". Keep this in mind the next time you design a new type system!].

Another way to look at these generic types is that they are "amplifiers" [4. I am indebted to my erstwhile colleague Wes Dyer for this interpretation of monads; his article on monads was a crucial step in my understanding of the concept.] that increase the representational power of their "underlying" type. A byte can be one of 256 values; that's very useful but also very simple.
By using generic types we can represent “an asynchronously-computed sequence of nullable bytes” very easily; that adds a huge amount of power to the “byte” type without changing its fundamental “byte-ish” nature. So is a monad simply an embarrassingly generic type of one parameter that conceptually “adds power” to its underlying type? Not quite; there are a couple more things we need in order to have an implementation of the “monad pattern”. Next time on FAIC we’ll try to perform some operations on these five types and see if we can suss out any other commonality. I don’t think it’s as easy to imagine a completely-generic Nullable as you imply in note 3. Even in a world where reference types are non-nullable by default, Nullable might not work on itself – that is, Nullable<Nullable<int>> would be invalid. That said, Haskell has “Maybe Maybe Int”, so it’s not completely out of the question. That’s because Nullable by itself isn’t a type. It’s a type constructor, you have to give it a parameter for T before it’s a type. Using the Haskell example you’d have Nullable<Nullable> You could go with something similar to Collections.Generic.List and Collections.List, where Nullable (No T) would be the same as Nullable. Joey’s comment — and yours — have fallen victim to WordPress stripping out things that look like HTML tags. Interestingly enough in the original concept of nullable types a nested nullable type was legal, and there is still gear in the compiler to deal with the situation. Also, the original name of the type was going to be “Optional”, not “Nullable”. In this series I’ll be making the simplifying assumption that nested nullable types are legal, and covering what to do with nested monadic types in a few episodes from now. Optional – sounds so VBish Having the reference types be non-nullable by default would have been so awesome. “string?” is not that difficult to write. Sigh, hindsight. 
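The post's examples are C#; as a cross-language aside of mine (not Eric's), Java's Optional plays the same "amplifier" role as Nullable<T>: it adds a "might be missing" state to any underlying type while staying completely agnostic about that type. A tiny sketch:

```java
import java.util.Optional;

class Amplifier {
    // Optional<Integer> is "int amplified with a missing state":
    // halving only succeeds for even numbers.
    static Optional<Integer> half(int n) {
        if (n % 2 == 0) {
            return Optional.of(n / 2);
        }
        return Optional.empty();
    }
}
```

Nothing about Optional cares what its type argument means, which is exactly the "embarrassingly generic" property described above.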
(although there are probably some non-obvious complexities regarding non-nullable reference types… one that comes to mind is fields without explicit initialization – maybe this would have to be illegal unless the type has a parameterless constructor)

Indeed, attempts to bolt on non-nullable reference types post hoc have run into problems with initialization. You want a type system to document invariants, and it is difficult to have the invariant that a field of reference type is *never* observed to be null. For example:

The question is: can the destructor throw? Sure. If there is a thread abort exception between this.s1 = x; and this.s2 = y; then the dtor observes the uninitialized state of s2. C++ manages to handle that case by just not running the destructors of partially-constructed objects, but obviously that's only an option with RAII.

The obvious ugly solution would be to just allow non-null reference types to be null in the constructor and destructor. To some extent the rarity of useful applications of destructors in C# makes this less ugly… but OTOH that also means that programmers could get away with not knowing that references can be null in destructors for a very long time and write a lot of subtly wrong code. I would love to have non-nullable reference types even with a bunch of subtle issues, though. The subtle issues almost certainly wouldn't be as annoying to deal with as null is.

@Thomas Goyne, the lifetime of an object in C++ starts when the initialization is completed (ISO C++ Standard – N3376 [basic.life]). In C++, an object can be called an "object" just as it begins its lifetime. There is no object if the initialization is incomplete. Then, in the absence of the object, there is nothing to destroy, so the destructor is not called. What do you mean by "but obviously that's only an option with RAII"? The object lifetime notion in C++ is independent of RAII.

In C#, per Ecma-334, I can't find a definition of "object".
At the risk of being wrong, I think it is poorly specified.

The C# specification does not seek to be either an academic paper or a tutorial; it assumes that the reader has a working knowledge of common terms. You'll notice that "type" is nowhere defined in the specification either.

@Eric: My apologies, I misspoke. I didn't mean that the "object" definition as in OOP should be included in the Standard. I meant that the Standard does not specify the meaning of "object" for C#, or rather, what the lifetime of an object is: when a piece of raw memory is considered an "object" (something that satisfies the invariant) and when an object becomes raw memory again.

The lifetime of an object is an implementation detail of the garbage collector and therefore not in the C# specification. The specification however does have rather a lot to say on the subject of when a local variable is a root of the garbage collector, which obviously impacts lifetime.

@Eric: thanks for your answer. What about the beginning of the object's lifetime? Is it defined? I think it should be, regardless of the implementation.

In C# the lifetime of an object begins the moment the garbage collector allocates it. This has some interesting implications. For example, suppose you have a ctor that fills in two fields. If the thread is aborted between the assignments then we have a living, orphaned object with half its fields filled in. This might come as a surprise to the destructor! A destructor must be written to assume that *nothing* succeeded in the ctor. This differs from C++, where, as you note, an object is never destructed if it was never constructed fully in the first place.

Thanks Eric, very explanatory. It is exactly what I wanted to know. Accustomed to C++, I like to see this kind of definition in standards. I hope in the future the C# Standard is updated.

Erik Meijer recently mentioned (in passing) on Twitter that Task is actually a Comonad.
This caused my head to explode, and now I’m trying to figure out what the distinction is. So far the information I’ve found on Comonads doesn’t help much. Is a Comonad a specialization of a Monad or a different thing altogether? Does anyone know?

A comonad is the dual of a monad; this is analogous to how a covector is the dual of a vector in graphics (think how a plane is the dual of a line!). Monads generate effects, and comonads evaluate effects.

We are rapidly getting out of my depth here, but briefly, a comonad is like a monad that “goes backwards”. As we’ll see over the next few episodes, what characterizes a monad is that (1) you can take any value of type V and make a monadic value of type M<V>, and (2) you can take a function from V to M<W> and turn it into a function from M<V> to M<W>. What characterizes a comonad is that (3) you can run (1) backwards: you can take any value of type M<V> and extract a value of type V, and (4) you can run (2) backwards: you can take a function from M<V> to W and turn it into a function from M<V> to M<W>. Because you can do all four with Task<T>, it is both a monad and a comonad. (Do you see how to do all four things with Task<T>? If not, we’ll cover (1) and (2) in the next few episodes.)

Actually that makes a startling amount of sense… sorry for the diversion. I’m looking forward to the future episodes.

I think the best way to understand comonads is to think of them as structures that store a value together with some context. The “counit” operation “C<T> -> T” gives you a way to extract the current value, and the “cobind” operation “(C<T> -> R) -> C<T> -> C<R>” gives you a way to propagate the context: given a computation that can turn a T in context into a value R, we can build a computation that takes a T in context, calculates R and wraps it into the same context.
Together with a colleague, we’ve been working on things related to comonads in programming languages recently, so here are some resources that you might find interesting (though they are quite theoretical): * *

Strictly speaking, there are not that many interesting comonads. A comonad can store a T together with some state. It can also keep a non-empty list of T values (it has to be non-empty, because you need to always be able to extract the value!). Treating Task<T> as a comonad is interesting, but I think it might actually be a bit misleading. The problem is that the Value property (takes a Task<T> and gives you a T) is not actually a _pure_ computation – it does not always return the value immediately, but sometimes blocks. This means that if you treat Task<T> as a comonad, you have to ignore blocking (and all timing of the computations). Since blocking is a key aspect of tasks, this feels a bit like cheating…

Thanks for the links, I’ll check them out. Another interesting aspect of tasks is that of course they need not return a value at all; they can throw an exception when asked for their value if the task failed or was cancelled. Of course, it is also the case that a non-void-returning method can throw. I agree that your concern is valid, but I say that if Erik Meijer is comfortable calling them comonads then I am too. :-)

The fact that Task is a comonad can be proved using “proof by eminent authority” :-) (see). More seriously – I think Eric’s point was that many monads are not, strictly speaking, monads, because they do not obey the monad laws if we take into account non-termination (and that is perhaps similar to ignoring exceptions or cancellation when talking about Task as a comonad).
This is definitely an interesting perspective, and I think some people see lazy evaluation as another comonadic property, which might be related… But if you ignore exceptions and cancellation, then the “ContinueWith” method is really more like “Select” (with the only difference that it gives you a task that is always completed – and thus has a value – rather than directly the value). This also means that if you have a structure with “map”, “return” and the dual of return (“coreturn: C<T> -> T”), then you can always define “cobind” (just by using “map” and “return”), and you get something that looks like a comonad; but it is a question whether the “cobind” operation (representing context propagation in our work) can give you something useful when it is derived from other primitives. The “counit” operation certainly adds value, so there is something interesting there…

If one is allowed to create arrays of arbitrary types, then reference types pretty much have to be nullable, since there is no sensible default value for them other than null. On the other hand, if one were allowed to define value types which had custom conversions to/from `Object`, one could create value-type wrappers for immutable reference types which would behave as non-nullable versions of those types. Such a design might have been good for `String`, since it would have allowed the default value of strings to behave consistently as an empty string, as used to be the case in COM. Also, I for one am in the camp that thinks that either `Nullable(Of T)` should not have constrained T, or else there should have been a type which behaved somewhat similarly but without the constraint on T, with the unusual boxing behavior (unboxing behavior could be as for `Nullable(Of T)`) replaced by a special AsBoxed property.
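The comonad shape discussed in the thread above can be sketched concretely. This is a deliberately simplified toy in Python, not System.Threading.Tasks.Task: an always-completed "task" that ignores blocking, exceptions, and cancellation, exactly the simplifications the commenters mention. It shows counit (the Value property), map (ContinueWith-as-Select), and how cobind can be derived mechanically from the other primitives.

```python
class Task:
    """Toy stand-in for an already-completed Task<T> (illustrative only;
    it ignores blocking, exceptions and cancellation, as discussed above)."""

    def __init__(self, value):
        self._value = value

    @property
    def value(self):
        # counit / coreturn: C<T> -> T — extract the value from context.
        return self._value

    def map(self, f):
        # ContinueWith-as-Select: (T -> R) applied inside the context.
        return Task(f(self._value))

    def cobind(self, f):
        # cobind: (C<T> -> R) -> C<T> -> C<R>, derived from the primitives:
        # apply f to the whole task, then wrap the result back up.
        return Task(f(self))


doubled = Task(21).cobind(lambda t: t.value * 2)
print(doubled.value)  # 42
```

Whether this derived cobind "adds value" is exactly the question raised above: here it is fully determined by map and counit, so it propagates no interesting context of its own.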
I don’t see any semantic difficulty with nested nullables; if I have a Dictionary(Of String, Nullable(int)) and a TryGetValue method that returns a Nullable(Nullable(int)), then if the return value of that method reports false for HasValue it means that there was no entry for the requested string; if it returns true, but Value.HasValue returns false, that means the string is associated with a null value. Nothing complicated.

Tuple is also a monad type.

Good point; it is essentially the identity monad.

A 1-tuple is. A 2-tuple (partially applied) is the Writer monad, collecting state as you move along the computation. You also need some way to combine the other component in the tuple, though. In Haskell this is done with a Monoid constraint, providing you with an ‘empty’ element (for return) and a binary operation to combine two elements (for bind).

My favourite .NET monad is the Reactive Framework `IObservable`. This is a very powerful library and I use it all the time. Indeed, it was designed by Erik Meijer, who is an expert on monads to say the least.

The problem with using NULL to express “I don’t know what this value is exactly, but it’s there” is that, well, it doesn’t work. If you make NULL unequal to itself you lose the ability to express “I don’t know the exact values of these two things, but I know that they’re equal”. Besides, a value which isn’t equal to itself breaks “=” so hard, it hurts my mathematician’s brain. And if you make NULL equal to itself, lo and behold, all unknown values are equal to each other. One way to resolve this is to use three buckets: “things with values I know”, “things with values I don’t know (but those values are there)”, and “things without the values”. Then, of course, the fourth bucket, “things about which I don’t even know whether they have those values I’m interested in”, appears, but at least you’re more or less prepared for it. And to sort all this out with one additional NULL marker? It’s plain impossible.
When you try to model not only the world, but also the state of your knowledge about the world, you have to distinguish those two things clearly. Okay, that got really off-topic. As for another monad: ParseNext<T>, which tries to extract a value of T from a stream, then moves forward. Hey, that’s a full-blown class with an inner state!

Great post, as usual. Monads are so elegant; working with them is always very intuitive. I feel that they are often the bread and butter of a well designed API. Seems like you’ve been incorporating Nitpicker’s Corner more and more. :)

Eric, big fan of your blog. Just out of curiosity, are there blogs that *you* read the same way we read your blog? Also, do you have any recommendations for blogs similar to yours (dealing with compilers and language constructs and languages in general)? Thanks for your help and can’t wait for the next post!!

By “the same way we read your blog” do you mean hitting “F5-F5-F5-F5…” until the new post shows up?

Yes, that is exactly what I mean! I am on here at least 5 times a day looking for a new post; good thing there is plenty of posts for me to catch up on.

Why not subscribe to the comments RSS feed then? :)

I haven’t found an RSS client that I enjoy… any suggestions for Chrome?

I’ve often wondered if the jQuery object is a Monad. The API feels monadic at times with its composability…

I’ve read many many articles on monads and never been able to get all the way to the end, but so far this is making perfect sense! Would it be safe to describe monads as being similar to decorators in the decorator pattern? Is one a subset of the other or do they just have some similarities?

I’m sure you know this, so I’m just clarifying one of your statements: “IEnumerable — represents an ordered, read-only sequence of zero or more Ts”. It’s not obvious, but nowhere is it said that IEnumerable implementations should guarantee order, so it should not be relied upon.
Parallel LINQ is one example where you cannot rely on the order of an IEnumerable.

IEnumerable is ordered in the sense that there is an implicit order given by the fact that you can say “first item returned, second item returned, …”. It is not ordered in the sense that you can re-run the enumerator and expect the same order. It isn’t even a guarantee that you’ll get the same set of values whatsoever, in any order.

I agree, that’s a very weak sense of order, and I prefer the notion of IOrderedEnumerable, to provide such guarantees. I’d also love IFiniteEnumerable, so that people don’t attempt to materialize a non-terminating sequence.
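The "weak order" described above can be sketched in Python (a hypothetical illustration, not .NET code): every enumeration has a first, second, third element, yet re-running the enumerator carries no guarantee of the same order.

```python
import random

def make_enumerable(items):
    """Hypothetical sketch of an IEnumerable-like source whose enumeration
    order is not a contract: each run yields the same items in *some*
    order, but re-running the enumerator may yield a different order."""
    def get_enumerator():
        shuffled = list(items)
        random.shuffle(shuffled)  # e.g. a parallel query completing out of order
        yield from shuffled
    return get_enumerator


source = make_enumerable([1, 2, 3, 4, 5])
run1, run2 = list(source()), list(source())
assert sorted(run1) == sorted(run2)  # same values each run...
# ...but run1 == run2 is not guaranteed, only incidental.
```

This is a stronger contract than IEnumerable actually requires (as noted above, even getting the same set of values on re-enumeration is not guaranteed), but it illustrates why consumers should not rely on order.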
http://ericlippert.com/2013/02/25/monads-part-two/
Wolfgang Rohdewald wrote:
> On Thursday 30 June 2005 16:53, Klaus Schmidinger wrote:
>
>> IIRC we already had this discussion some time ago.
>> The point is that you're not supposed to call any of the skin
>> functions from a thread. These functions are only supposed to be called
>> by VDR itself.
>
> how else can a plugin display a message or ask a question? since
> cInterface::Confirm() also calls Skins.Message(), I suppose Confirm()
> is also illegal for plugins?
>
> this is how muggle has always been doing this:
>
> #if VDRVERSNUM >= 10307
>   Skins.Message(mtInfo, buffer, duration);
>   Skins.Flush();
> #else
>   Interface->Status(buffer);
>   Interface->Flush();
> #endif
>
> and
>
> if (!Interface->Confirm(tr("Import items?")))
>   return false;

Sorry, I was a little too vague here. What I meant to say was that these functions shall only be called from the _foreground_ thread (the one that VDR's main loop runs in).

Klaus
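The constraint Klaus describes (skin functions may only be called from the foreground thread) is the classic pattern of marshalling UI work to a main loop. A language-neutral sketch in Python; the names ui_queue, show_message and main_loop_iteration are made up for illustration and are not part of the VDR API:

```python
import queue
import threading

displayed = []             # stands in for the on-screen display
ui_queue = queue.Queue()   # messages deferred to the main loop

def show_message(text):
    # Stand-in for Skins.Message(): only the foreground thread calls this.
    displayed.append(text)

def plugin_worker():
    # A plugin's background thread must not draw; it enqueues instead.
    ui_queue.put("Import items?")

def main_loop_iteration():
    # VDR-style main loop drains pending messages on its own thread.
    while not ui_queue.empty():
        show_message(ui_queue.get())

t = threading.Thread(target=plugin_worker)
t.start()
t.join()
main_loop_iteration()
assert displayed == ["Import items?"]
```

The design point is that the UI code never needs locking, because only one thread ever touches it; worker threads communicate through the thread-safe queue.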
http://www.linuxtv.org/pipermail/vdr/2005-June/003323.html
The Doc Dialer: IE 5.5's Replace Function

On this page we explain the enhanced replace() function, supported by Internet Explorer 5.5 and above. On the previous page we showed you what extractFileName() does:

function extractFileName(empName) {
  if (nameCode == 1 && versionCode >= 5.5) {
    var regExp = /(\w+)\s*(\w+)/g;
    return empName.replace(regExp, matchingFunction);
  }
  else {
    var blankPos = empName.indexOf(" ");
    var firstName = empName.substr(0, blankPos);
    var lastName = empName.substr(blankPos + 1, empName.length - blankPos);
    return firstName + lastName;
  }
}

The objective of this function is to generate a file name out of a president's first and last names. We first define a regular expression:

var regExp = /(\w+)\s*(\w+)/g;

The regular expression will match a string that includes a word, followed by zero or more whitespace characters, and ends with a word. The /g flag allows for more than one match of the above sequence. The replace() function takes two parameters: the regular expression and a handling function (called here matchingFunction()). Our matchingFunction() is defined as:

function matchingFunction(matchedString, subMatch1, subMatch2, matchPos, source) {
  return (subMatch1 + subMatch2);
}

A submatch is found according to the parentheses in the regular expression. The first pair defines subMatch1, the second pair defines subMatch2, and so on. According to the regular expression above, the first pair of parentheses encloses the first word and the second pair encloses the second word. For example, subMatch1 for Bill Clinton is Bill, while subMatch2 is Clinton. The parameter matchPos is the position within the source string at which the match was found. The parameter source is the source string. The number of pairs of parentheses in the regular expression must match the number of submatch parameters passed to matchingFunction() above. You can do anything inside the matching function. At the end you have to return a string.
In our case above, we return a concatenation of the submatches, yielding the president's first name concatenated with his last name. The replace() function takes the string returned by the matching function and substitutes it for the substring matched by the regular expression. In our case, the regular expression matches the whole string of the president's name, and the replace() function replaces it with a concatenation of the first and last names.

Just a reminder how to find the Internet Explorer version number:

function bVer() {
  // return version number (e.g., 4.03)
  var msieIndex = navigator.appVersion.indexOf("MSIE") + 5;
  return (parseFloat(navigator.appVersion.substr(msieIndex, 3)));
}

Extracting the version number is not that trivial. The number at the head of the navigator.appVersion string is still 5.0. We need to get deeper into the string and find the substring "MSIE". The version number appears just after it.

Produced by Yehuda Shiran and Tomer Shiran
Created: February 14, 2000
Revised: February 14, 2000
URL:
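For comparison (a sketch, not part of the original article): Python's re.sub accepts a replacement function in much the same way, except that the function receives a single match object rather than positional submatch arguments.

```python
import re

def extract_file_name(emp_name):
    """Python analogue of the article's extractFileName():
    collapse 'First Last' into 'FirstLast' via a replacement function."""
    reg_exp = re.compile(r"(\w+)\s*(\w+)")

    def matching_function(match):
        # match.group(1) and match.group(2) play the role of
        # subMatch1 and subMatch2 in the JavaScript version.
        return match.group(1) + match.group(2)

    return reg_exp.sub(matching_function, emp_name)


print(extract_file_name("Bill Clinton"))  # BillClinton
```

As in the IE 5.5 version, the match object also exposes the match position (match.start()) and the source string (match.string) if the replacement logic needs them.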
http://www.webreference.com/js/column57/9.html
Search - "dump"

Unbelievable... My company bought me a new laptop. It has 2 512 GB SSDs. Our IT set it up with windows 10. ON BOTH SSD. OM fucking G. How dump you have to be to install windows 10 two times in the same machine? What kind of mental illness is this?

Just saw an ad: "I learned to code in 2 months thanks to X School and now I'm working at Google!" Seems like now is the right time to dump your Google stocks.

Me(m) vs Apple(a)
m - hey apple!
a -
m - apple?
a - oh yeah, who are u?
m - umm, titan?
a - titan who?
m - titanlan- .. umm nevermind. hi, i am a developer :D
a - developer? hah.. get out.
m - but wait, I want to develop apps for you! I have been developing android apps for last one year and i love mobile dev! wanna talk more on this?
a - umm.. ugh ok. so you wanna develop apps?
m - yes!, i am doing great at java an-..
a - yeah wait. we don't have that in here. we use swift
m - Oh. no worries, the principles are the same, i will watch some free youtube vids and have a plugin for studio or vsco-..
a - yeah wait, you can't do that too. we don't have plugins
m - Really, no plugin? then where do people develop ios apps?
a - xcode
m - Oh, how stupid of me, an IDE of course. anyways i can simply install it in my windows or linux an-..
a - nope, you can't do that.
m - what? then where does it run?
a - macOS
m - Oh, then surely you might have some distro or-
a - nope, buy a mac. pass $3000
m - wha-? i just want to run your bloody IDE!
a - oh honey, your $3000 will be totally worth it, you will love it!
m - but i haven't even started making an app, leave alone publishing it.
a - oh, that will cost you another $100. plus if you wanna test your apps, make sure it runs in our latest, fragile iphones otherwise we won't publish it. that will cost another $1500
m - what? but I already have a fine, high tech laptop and a smartphone!
a - yeah you can dump that
FML. how the fuck is apple living and thriving?
lots of selfish motives and greeds i guess? because i don't see a single place where they are using the word "free" or "cheap".

"dump is taking forever" - things that sound strange outside of a dev environment. What are some other good ones?

PSA: Please don't dump 10GB of your personal photos on your company's shared drives. Especially dont have the photos include such things as nudes and pictures of your social security card. -- kthx

Everytime they force me to add and test stupid features. I usually end up making my own version, which they dump almost every time.

How to make a feature request:
1. dump Db table with 153 column to Excel
2. print!
3. circle column 47 on page 3, scribble feature description
4. scan! remember to use proprietary file format no one has
5. new e-mail, add "VERY URGENT!!!" to subject line
6. write "will call, discuss details monday"
6.a. attach proprietary-scanned-excel-dump-feature-description (optional)
7. postscript: deadline wednesday!!
8. wait for tuesday
9. send!
...

Almost everyone from one of my previous companies. I had a manager who likes to "break" people and will do almost anything to humiliate and make someone cry, especially if they're new. One time, she called us into the room while she's tearing down a new developer. No reason for us to be there aside from watch the poor man cry. It always has to be known that she is someone to be feared and things have to be done exactly her way, that she's only angry, controlling, and explosive because she cares so much. Abusive mother love intensifies.

The senior manager is obsessed with extra-curricular bullshit that she actually gets furious when new hires don't participate or win dancing competitions. Yes, dancing competitions. Also, costume parties. The only time they left me alone was when I made multi-colored cookies for the children visiting during a Halloween party. I would have poisoned them if those kids weren't there.
The way mentors are picked out for each new junior developer is by having them perform (sing, dance, act, whatever) and the old members choosing whose performance they like the most. Introverts and people who still have their senses that don't want to participate are immediately demonized as "not a team player".

The amazing pregnant HR who decided to hate me for no fucking reason but treated my colleague very well. Rushed me for requirements when they already rushed my start date and know I only had a weekend to process them and all government offices are closed. Gave zero directions and then blew up when I didn't manage to read her mind. Another HR had to chime into one of our email threads because the bitch is crazy. After she has given birth, she's all nice and sweet to me but all I see is a monster.

People dump their dirty dishes in the pantry's sink and sometimes leave the toilet floor wet where it shouldn't be (far off the bowl). Some motherfucker forgot his lunch in the locker, brought it in the workspace, and kept sniffing it minutes after people have complained that the shit's already expired.

Most members are obsessed with people's salaries. There was a time when I printed my papers for the training visa. We are all required to do this and the way our printers are setup at work is messed up so those documents have to be put in the shared drive first before getting printed. There's a small window between the time it's printed and the time I delete it but they still managed to peek into how much I earn. I always get that "you earn so much, more than us" thrown at my face like it's a bad or unfair thing and only in the third year did someone confess to looking at those documents without my knowledge.

Never-ending gossips and stalking employees' social media. Senior managers and managers join in the gossip and slander of their own employees.

That giant guy who likes to touch women's head hair (have to be specific) if he finds them attractive.
The discrimination, man. Touch everyone and touch yourself the same way your uncle touched you in all the right places, you maniac. People who are constantly bragging about overtimes and shaming those who leave on time. And many more. I will never forget them.

Like a bad relationship: Be really excited for the first month or so, then once the new car smell starts to fade, lose interest and dump it.

I'm afraid of getting dumped and it's not because i fear rejection or being alone, it's just because the stack trace will be HUGE!

+++ Microsoft switches to the open-source Chromium engine for the Edge browser +++
On December 6th, Microsoft announced that they will dump their own Edge engine and replace it with Chromium, an open-source browser engine developed by Google. This way they are promising the ~2% of global internet users who prefer Edge over other browsers a better web experience. The roughly 2% market share is one of the reasons Microsoft decided to stop developing their own engine. It's just not worth it. Joe Belfiore, corporate veep of Windows, said they also want to bring Edge to other platforms, like macOS, to target more audiences. Web developers, like myself, will most likely have the most to gain. Fewer browsers to target means fewer incompatibility issues. There are a lot of HTML5 features that the Edge engine doesn't support... The new Edge won't be a UWP app, in order to make it usable outside of Windows 10. Instead, it will be built in accordance with the Win32 API, so we can even expect support for older Windows versions, like Windows 7 and 8. A preview release is planned for early 2019. Because they are switching to Chromium and the Win32 API, Microsoft is hiring new developers! So if you always wanted to work at Microsoft, now is your chance! That's it! Thanks for reading! Source:

So we hired an intern and his first task was to change a few things in the email layout for our client, which is an investment bank.
I told one of my developers to make a dump of his local database and set up the project for the intern. When the intern completed the task, my developer thought that "Dow Jones index crashed" was a pretty funny title for a test. What he didn't think through enough is that he forgot to configure a fake SMTP server, and he had a production database dump with real email addresses. I had a really awkward 20-minute conversation with our client. Fuck my life.

Am I the only one who figures out solutions to complex issues only when peeing, bathing or taking a dump? 😂

Taking a dump and showering are my number one non dev activities. They help to clear your head, and when your head is empty you will get the best ideas and solutions.

... when you ask someone for their IP and you get a 10.x.x.x back ... followed by a dump of their ipconfig, showing this IP as their VirtualBox Host-Adapter ... and this someone is a developer for web-applications ...

When you write something cool but the problem can be solved in an easier way and you have to dump it 😭

Client: the app is slow
me: can you upload a thread dump to the ticket?
Client: here *uploads 2GB catalina.out.gz*

Coolest project? Well, one time I had to take a dump while I was coding so I took the computer to the toilet with me and that was pretty exciting

A new online store for custom PCs has opened in Switzerland. For overclocking they want CHF 10 (around 9.50 $) more for every MHz. I know that some people are gonna buy this shit and pay for that. If those people knew that you only need to access the BIOS/Afterburner for it 😂😂😂😂 #BestSpendMoney

Me, hacking the sunxi kernel to access gpio on my orange pi:
My friend: "oh, a raspberry, are you using python for that?"
Me, looking up from opcode dump: "you can use python for this?"

Uhm, alright, but how will you fix them then? (no, there seems to be no automatic crash dump or calling home)

Added a mysql-dump file by misstake in a git commit ...
250MB explains why it took so long to push it to the gitlab server ...

Feeling awesome after migrating everything from WordPress to Laravel. Kill WP. Fuckkkkkkkk youuuuuuuu. 😣

Testing hell. I'm working on a ticket that touches a lot of areas of the codebase, and impacts everything that creates a ... really common kind of object. This means changes throughout the codebase and lots of failing specs. Ofc sometimes the code needs changing, and sometimes the specs do. it's tedious. What makes this incredibly challenging is that different specs fail depending on how i run them. If I use Jenkins, i'm currently at 160 failing tests. If I run the same specs from the terminal, I get 132. If I run them from RubyMine... well, I can't run them all at once because RubyMine sucks, but I'm guessing it's around 90 failures based on spot-checking some of the files. But seriously, how can I determine what "fixed" even means if the issues arbitrarily pass or fail in different environments? I don't even know how cli and rubymine *can* differ, if I'm being honest. I asked my boss about this and he said he's never seen the issue in the ten years he's worked there. so now i'm doubly confused.

Update: I used a copy of his db (the same one Jenkins is using), and now rspec reports 137 failures from the terminal, and a similar ~90 (again, a guess) from rubymine based on more spot-checking. I am so confused. The db dump has the same structure, and rspec clears the actual data between tests, so wtf is even going on? Maybe the encoding differs? but the failing specs are mostly testing logic? none of this makes any sense. i'm so confused. It feels like i'm being asked to build a machine when the laws of physics change with locality. I can make it work here just fine, but it misbehaves a little at my neighbor's house, and outright explodes at the testing ground...

Being a developer in my country is great.
We have Sam Adams fountains instead of water fountains everywhere, triple-double bacon and duck fat fried cheeseburgers with Twinkie buns, massive desktops that burn coal and dump pure toxicity into the atmosphere. We sit on chairs made from the carcasses of soon to be extinct animals, and instead of rubber ducks, we have majestic bald eagles screeching their encouragement as we pound out our buggy ass code. But we have the best bugs, don't we folks

Friend of mine created a blog from scratch... You could create a post by just sending a POST request (no authentication required!).... As an additional bonus: you could dump full unfiltered HTML in a post, which was then executed... Please kill me

I was scrolling through a MySQL dump, at a point I forgot what I was looking for. Then it reminded me that 'I was just seeing blonde, brunette, redhead.'

#AndroidDev protip:
1. Dump Android Studio and use Sublime for a week
2. Realize you don't actually know how to write java anymore
3. Cry

$ mysql -uroot -p > file.sql
instead of
$ mysqldump -uroot -p > file.sql
And not checking the result file before reinstalling my server 😭😭

use a library and it gives me some strange error message. No problemo, just file an issue on GitHub asking the maintainer if I'm plain stupid or the lib actually has a flaw. As it was a question, I have not posted a dump and all the shit. Maintainer responds with a snarky comment about his crystal ball being broken and I have to submit a log, a dump, debug information and a bunch of other stuff. Well, what choice do I have; I collect all the requested information, create a wall of text comment, all nicely formatted. And the issue ends here. Myths say the maintainer got asked to join Elvis on Mars. I mean, why do you ask all the shit from me in an unprofessional manner just to stop answering? Just say "I have no clue why it behaves like this" and I know whats playin. But that's just ...
sad.

You thought real fear is deploying to production friday afternoon? Hah nope. Real fear is forgetting to flock(); a public toilet door while doing a dump();

Want maximum efficiency in python?
def say(text): print(text)
You save 2 keypresses everytime you print

Making a loop is like taking a dump: If you dont manage all the shit, you'll end up with an overflow.

Moved to Australia, because it's cooler over there. Pissing on outback stones right now. Who's your daddy?

Searching for random Linux bug.. finds Gist with exact match (thinking, WOW, thanks Google!).. It's a 2000+ line log dump. : |

Our relationship is like a diode, you take and never give (I demand sex as compensation). Btw, I really used this but with simpler words with two girls who wanted a ride to go and have fun with other boys in the clubs... My best friend gave them a ride for over a year. When we denied more, they started calling me... Wonder why. Took them one time, asked for gas money. Only called me one month later, didn't ask for gas but said I wanted some snug and fun in return... OK, you can have us both (OH YEAH!!!). At the club, they go for the muscular guys, leaving me alone. When I got tired, got to one of them and said, ask your friend for a ride home because I'm going now. (they ran for my ride since the guys they picked were all pricks and would probably dump them somewhere). Never called me again... Told what I did to my best friend; next time they called he demanded sex for both of us, never called him again. And that's how you fuck opportunistic people. Fuck them.

Writing down your thoughts makes your brain feel free because it doesn't have to keep them inside itself. It's kinda like taking a huge dump 😂

Worst part of being a developer is having to educate IT Admins on how to do their job without fucking up mine! Yes, just delete the Intrusion Detection System ...why not! lets burn the office down while we're at it!
here, wanna take a dump in my coffee?

Not only Windows can show these "strange" error messages: Today I got this beauty while importing an SQL dump. (Translation: "Error on import: error on statement #1: not an error. Execution will be aborted and the db will be reset.")

>Be client
>Have an issue with incredibly slow webpage load time
>Blame memcache issues
So... I look into the problem. Yes, the page either loads up fast, or times out. So, into the logs I go. Webserver is fine (except the timeout), PHP though... Error log is fine (just notices), but the slow log shows the issue is the database (of course... it's always the database... ugh). So, checking the database, there is one ugly query that seems to be an issue: 5 joins and a huge where condition. So I run EXPLAIN on the query and... proceed to bang my head against the wall. OF COURSE ITS SLOW YOU FU******, NONE OF YOUR TABLES HAVE ANY INDEXES. What do they expect when the database has to always go down the whole table and do everything in memory, until it runs out and has to dump it all on disk and work with it there. Ugh... Some clients...

Not really a rant, but I hear the word 'dump' quite often during the work day and am amazed at how few people giggle at that.

Has anyone here seen a mainframe error dump? It's an 8000 line wall of text with maybe 7995 lines of fucking jibberish hiding the cryptic fucking error message... Why the fuck can't they just put the interesting shit at the top of the file instead of hiding it in nonsense!?

>uni project
>6 people in group
>3 devs (including me)
I am in charge of electronics and software to control it, as well as the application that will use them. 2 other "devs" in charge of a simple website. Literally, static pages, a login/registration, and a dump of data when users are logged in. Took on writing the api for the data as well, since I didn't fully trust the other 2. Finished api, soldered all electronics, 3d printed models. Check on the website.
Ugly af, badly written html and css. No function working yet. Project is due next week Thursday. Guess who's not having a weekend and gonna be pulling 2 all nighters2 - - CEO tells us to delete every Trello boards, and make a big smoking dump of shit of a Trello board out of them, by adding every project for every plaform there, because he is a control freak, and he NEEDS to see every project all in one, because he’s “tired” of switching between boards if he wants to see the completion of a project. Like duh, this is the job of our 2 PM’s, but whatever you dumb fuck. Chaos is coming.....i’m done4 - One good thing about taking a dump at work is that no matter how senior or junior you are, the sounds are pretty much the same - - - Co-Worker: How can I see what's linked to x variable in the database for this website? [we can't see the actual back end] Me: Do a var dump... Co-Worker: but what var do I dump? - When coworkers have a var dump on a page in production.-_- I aint saying shit because everytime I mention something they do wrong I get assigned with fixing it. -_- - "First of all, Pascal is the best. Everything you learnt beside Pascal, dump it" - Wise words from ComSci teacher Ok. I'm good.6 -... - !dev_related Finally hit chapter 6 of my book's rough draft! Feels good to be making good progress, had to do a bit of an info dump on the readers but still need to expand everything. Might even think about publishing in the future :-D.2 - - When you put your hands together and start chanting almighty compiler gods to have mercy on your soul. - By taking one 30 minute dump in the toilet per day to relax and read other people's code on my phone - - - Remember when the documents folder on windows was called "my documents"? Why do developers think they have the right to dump their shit all over the place?6 -* -. - Why do popular media paint "programming" as easy... this is a very big deciet. please let stop this lies. 
Programming is not for everyone; not everybody can code. And please dump the f**king "Girls can code" slogan; there is no need for the hype. - - That shitty moment when you are reverse engineering an app (LINE), but can't find any useful hints. Web analysis didn't help. Decompiling the Windows executable also didn't help. Testing the app's behaviour with Python scripts didn't help. Analysing the Android app on Windows with the jadx decompiler and other decompilers didn't help that much. BUT today it worked. I used a paid "Dex dump" Android application. I found some methods that the app receives from the servers with a Thrift protocol. Now I just need to find the right parameters to finally be able to make a bot. Hehehe. It was a hard road, but it paid off. I learned so many things. It took me like a whole year. - . - . - Today someone took a shit and didn't flush... Normally you have your typical candidates: people pissing standing in the stalls although there are free urinals, people who don't wash their hands after pissing or just splash like 3 fingers with water. Even not washing hands after taking a dump, which is pretty disgusting... But today? Some dude in the stall next to me took a shit, wiped his butt... and went away... No flushing, no washing hands... Wtf is wrong with people? - - - . - Yesterday was a horrible day... First of all, as we are short a few devs, I was assigned production bugs... A few applications from the mobile app were getting fucked up. All fields in the db were empty: no customer name, email, mobile number, etc. I started investigating, took a dump from the db, analyzed the created_at timestamps. Installed the app, tried to reproduce the bug, everything worked. Tried the API calls from Postman, again everything worked. There were no error emails either. So I asked for the server access logs; devops took 4 hrs just to give me the log. Went through 4 million lines and found 500 errors on the mobile APIs. Went to the file: no error handling in place. So I have a bug to fix which occurs in 1 in 100 cases, no stack trace, no idea what is failing. Fuck my job. - - - Dear webdevs who are able to use Bootstrap and CSS properly: I appreciate your work, but hell fucking shitfuck, I can't get my mind to work with it. FUCK this piece of container-loving, table-ruining, un-alignable dump - Copies the project from one PC to another; Android Studio be like: I should dump all the errors unnecessarily even though there are none 😒 - - Life has been draining me for the past few years... suggestions? Move somewhere new, possibly get rid of most of what I have and start over? Dump all the money I have left into investments, which I see as a random lotto, like a business, real estate, or literally the lottery? Bail and become a vagabond? Alternative - Learnt Python fundamentals while taking a dump... It might have been due to being in a vulnerable state at the time, but I am kind of enjoying the simpletons' language.. Must go to a doctor for a brain examination... 🤨 I should be concerned - - Have a 4 GB micro SD card for my filesystem project. Every search I do on the hex dump takes 5 minutes (literally). Exported the hex dump to text. Now have two 9 GB text files. Gonna try to import it into MySQL for faster querying, wish me luck - - Took my laptop into the toilet to listen to Zoom lectures and took such a huge and loud dump that besides my mic being unmuted, they could also smell it - - - So we are migrating between different hosts, so I write a nice script to move two pieces of encrypted data between the two: one over SSH, the other over HTTPS, to two separate endpoints. One boss says I can't do that, as it is insecure because they come from the same script! Another boss objected that I wrote a script to dump databases in bash rather than, like his, in PHP, even though all his PHP does is run the same bash commands; I just took out the middleman and made it faster. #baddayintheoffice #anyonelookingforaseniordev - The best solutions to programming problems come to me mainly when I am away from the computer... especially when taking a dump 🙈 - . - - When you start your internship with zero knowledge of web development but then become responsible for back-end web development. #learningthehardway #mybossCCgonnakillme - This afternoon I had my first close encounter with a core dump, while working on my C++ simulation. It was brutal, and I probably opted for the less efficient solution to avoid the problem, after hours of fighting. But hey, I'm alive, and that is what matters most. - - Personally I like to use very obscene phrases as passwords, just in case someone saves them in plain text. When they read the "dump file", maybe they will be like: well, we will not be fucking with this one, that person is just sick. - ! ... - Who the fuck invented the glorified pile of shit people call Laravel? Is this actually used in PROD for anything other than load testing a monitoring server by creating loads of error messages? OOP exists for a reason, not to create bazillions of classes with static methods. Dump that shit, ffs! - . :( - - ... - - I spent an entire morning trying to figure out why the development branch of my web app was taking a dump on itself after I rolled it back to production. Only to realize that a config file wasn't in the folder. So I threw away all my changes for nothing. - . - - Señor Zuckerberg's Twitter and Pinterest password was 'dadada' 😂 jajajaja 😂 - Started a new job as a software developer in a financial institution... Have to learn C#; any C# devs here with good tips?? - - The pain of creating a data pipeline in AWS to dump all your DynamoDB tables into S3 is something I don't want anyone else to go through.. Why don't you have a copy button.. 😭😭 - I hate when programming books have shit code examples.
Just came across these, in a single example app in a Go book: - inconsistent casing of names - ignoring godoc conventions about how comments should look - failing to provide comments beyond captain-obvious-level ones - some essential functionality delegated to a "utils" file, where it should not be (the whole file should not exist in such a small project; if you already dump your code into a "utils" here, what will you do in a large project?) - arbitrary project structure: why are some things dumped in package main, while others are separated out? - why is the db connection string hardcoded, yet the IP and port for the app to listen on are configurable from a json file? - why does the data access code contain random functions that format dates for templates? If anything, those should really be in "utils". - failing to use gofmt. These are just at a first glance. Seriously, man, wtf! I wanted to check what topics could be useful from the book, but I guess this one is a stinker. It's just a shame that beginners will work through stuff like this and think this is the way it should be done. - I just posted a dump of my notes on linear transformations; any feedback is appreciated! - . - Please help me before I get mad. First day with Linux Mint. Objective: make a 3 TB HDD read-and-write; right now I can use it only to read. Finally installed Linux after some bumps (bad ISO). I have 2 drives: the SSD with Linux and a 3 TB HDD. Right now the 3T has 4 partitions: one for Windows, 3 for personal use with lots of personal stuff I can't lose. I've been looking at videos and tutorials, and the most I got was to have one partition mounted as a folder: =f0a65631-ccec-4aec-bbf5-393f83e230db / ext4 errors=remount-ro 0 1 /swapfile none swap sw 0 0 UUID=F8F07052F07018D8 /mnt/3T_Rodrigo ntfs-3g rw,auto,users,uid=1000,gid=100,dmask=027,fmask=137,utf8 0 0 What am I missing? PS.: Next: make the fingerprint reader work in Linux - Today I got so fucking depressed and discouraged, because whenever I tell people "I am a software engineer" or "programmer", especially fking girls, they just fuckin leave. They dump you. Imagine seeing someone, with both ur eyes, go from high interest and then watch their interest drop D E E P D O W N L I K E F U C K I N G T H I S, all the fckin way back to 0, if not even below that.... How the fuck am I supposed to fking feel? What am I supposed to think? This is such fucking bullshit. I am fking wordless. Hhhhhhhhh. Each time this shit happens I question whether I should regret wanting to be a software developer or not - How do I choose the right distro? I have a new DevOps job. For that I may switch to a Linux distro of my choice. I am struggling to make the choice between Manjaro and the latest Fedora WS. I primarily use XFCE and bspwm, and my job involves a lot of automation tools, docker, kubernetes, python3, tcpdump and some shell scripting. - - - - I love Django, the philosophy behind it and how smart it is. I hate the Python ecosystem. virtualenv and pip are really dump, e.g. - Reviewing a PHP constant dump I did a while back. Thinking about playing a small prank on the intern by replacing some of the constants used in a project with equivalent, but obscure, ones defined by PHP. For example: if ($test == CAL_EASTER_ROMAN) ... $meta = get_post_meta($post_id, $meta_key, FTP_ASCII); ... - So nice, a good, well-structured DTAP environment, with Acceptance and Test containing a recent dump from Production so that bugs can be reproduced properly........ *wakes up* - Do you believe in QAs who only test the application as a user, i.e. just black-box testing of clicking here and there? The QAs in my company don't have a clue how the shit works, and most of them don't even understand a line of code. I feel that it's really important to test the application from the web API level as well, to test all the complex business logic which may not be feasible from the UI. - Java is perfect if you are a narcissistic egomaniac with OCD who has to declare a data type for every variable. Back to Python - - My experience with Visual Studio Code hasn't been nice and I'm honestly kind of not liking it at this point in time :-( Maybe it's just me who's being a dump-kop - ! :) - Spent the last couple hours of the day trying to solve an issue in my code, and fuck, I feel so dump after fixing it! A tiny, tiny issue can fuck your code and fuck your brain! - - Scrum standups and standdowns at my university be like: "oh, let's discuss everything that comes to our mind and talk about it for an hour" - Professors, 2016 (Only ranted this because I had to take a huge dump and had to hold it for an hour ;_;) - Is this some fucking standard, to provide a dump of your shitty XML API instead of just giving an XSD file? - - TFW they decide to dump the enterprise application you have built from the bottom up into the wrong hands... - The moment when your users insist on adding a number for custom "ordering" in their e-commerce solution. Next thing you know, your PM is appreciating the idea and thinks you're dump for not doing it. - I'm sitting waiting for the interview and I remembered to brain dump whatever here (secret: I don't know if I want this job!) - - . :) - Starting a project for work and realized it would be a good idea to use a framework, as by my initial anticipations I see this growing fairly complex. I choose to go with Angular 1.x because I'm like "hey, I know that already", but there's one teensie problem: it hit me that I haven't looked at Angular in so long that I have no fucking clue how to start up an Angular project from scratch properly.
Oh well, time to dump an old project in the public folder and figure this shit out one error at a time - Is anybody aware of some technique to install Visual Studio without having it dump like 20 GB of useless garbage in unforeseeable locations on your hard drive? - Okay guys, after sleeping on it I decided that I didn't need to dump my entire Java/MySQL stack and instead just slow the hell down on my development pace. I'm going from Udemy to a book to help me be a better dev, and it is a night-and-day difference, as the book breaks every bit apart and explains it in a lot more depth than having a video walk me through it. What I wouldn't do without Amazon's Kindle service, I tell ya... :) The only major thing I'm changing in this project is committing to one JavaScript tool, React, as I need a simple tool to ease myself into learning JavaScript. Wish me luck. :P Today I'm starting the project over, but this time breaking it down and going at a better pace. Thanks for all the advice, guys. :) ...I'm going to need a lot of Jack Daniels for this project, aren't I? - My team is pretty small right now. It's myself and two other guys: one lead, who's been here for five years; a senior whom we brought on 2 weeks ago; and me, a regular app dev. The lead put his two weeks in last week and has been trying to brain dump as much as he can onto us. I've been building a prioritization list to compensate for when he leaves, based on what he was saying was the most important. This list has gotten pretty massive after reviewing most of the processes in place. I was hired mainly to quell new requests coming in, not to maintain our systems, so that's what I did. I didn't examine our prod code base too closely. I wish I had. It's in a sorry state. I'm pretty sure I have about 2 years of tech debt for a crew of two guys constantly working on it. I've been trying to prioritize based on what gets the most bug fixes and change requests. These apps will see the biggest changes and will undergo the most maintenance. Since I'm just a regular app dev, it feels weird trying to come up with this, prioritize it, and put a plan together. It feels like someone else should have. If it needs done, then I guess it needs done. I need to be able to collaborate with my coworker and plan for what projects are coming next. If anyone has any suggestions for tackling tech debt, please make them. Or if there's any help for managing priorities in a different manner that may prove helpful, I'm open. Honestly, I don't want to tackle this completely blind; it feels like a lot. - Fatal problem in weekly rant 4: Segmentation fault. No further messages available. Core was not dumped for reasons unknown - . - Update: having a dump on a train is the equivalent of sitting on a bucking bronco that vibrates enough for you to feel like you're 1" above the bowl - - - Does anyone here know some efficient way to get a stupid Broadcom wifi card to work on Linux? It's a BCM43142. I recently transferred to Manjaro on the suggestion of fellow ranters, but little did I know (or I wanted to forget from earlier experiences) that Broadcom is a bag of balls that no one wants, and that it doesn't work correctly on any distro. I'm feeling like the protagonist of that meme, "C'mon, do something...". I really don't want to give up on Linux once again because of a dump wifi controller. - !dev When a process works better than expected, but you were hoping that it only works as expected... USPS (mail service) is known for being crappy. I couldn't submit a temp address change via the web because I couldn't type my apartment unit # into their web form, but a mail hold request, where you manually just enter any address, worked. So I was at my parents' for a month and just got back yesterday. I put in a mail hold before I left my apt, but it expired on like Wednesday. So when I got back Saturday, I expected a huge mail dump, but I couldn't find any mail... However, last week I went to the local office and put in a temp change of address, because there was a chance I'd go just to get the mail but not stay, for other reasons... Got a confirmation letter that it would be effective like Saturday. I'm thinking it won't cover the mail held during the mail hold. Well, apparently it did... So now all my mail is at my parents' but I'm back in my apt... - So Google WiFi works great for the first week; now it's taken a dump and I'm barely getting any internet speed at all (if at all). I know it's not my ISP, considering that my desktop is getting 10 down - Any ideas as to what I can do with a 24-port network switch that I salvaged out of an electronics dump? It's only 100 Mbit, though - - !! - Today's website fail: Fatal error: Call to a member function using_permalinks() on a non-object in /hermes/walnaweb11a/b2165/moo.hamradiosciencecom/hamradioscience/wp-includes/comment-template.php on line 771 - . - git commit -m "The test core dumps, I go home" && git push (OpenSSL is like running a marathon: it's just some months away and you've already forgotten how much fucking pain it was. Nah, can't have been that bad. Shit, it is.) - How the HELL does someone develop a 'NEW' (essentially table layouts from the '90s) way of building layouts with CSS and deliver this massive dump? Why can't I make a div expand to fill the remaining space in this layout?... Seriously... I need to wrap 10 divs inside each other to make a design behave correctly, really, like in the 90s? And the new kids on the block think this 'flexbox' is any good? Amazing sheeple... amazing. ADD MORE WRAPPERS! align-self should JUST WORK in the example above... but hey... it does not. I just want to be able to add/remove the sidebar and content, keeping the footer below and headers above. It's amazing the amount of shims required to do anything in frontend development. - Started programming with ASP Classic. Will never forget the first time I ever managed to dump a POST request on screen. Oh, the childhood of it... - Hm.. favourite function.. Just before my apprenticeship, as I used PHP more often, var_dump() was probably my favourite, because it saved hours of my life :P - On Friday I talked about a program that I copied and pasted into my own, and when I was testing it, it didn't work like the original. If I kept it as the original, it wouldn't even move. If I tried to change it (taking the queries out of the loop; I know, I know, I didn't make the original), it would give me a dump saying I was out of space. So, my solution: delete some records as I go. Can't wait to see the problems I'm about to have with this 🙃 - - I can't tell if I'm being clever or a dick here. When I can't be arsed writing a DB schema, I switch Hibernate to create mode, let it build the database, then just dump the schema as SQL... - Obviously the worst documentation is no documentation at all, when having to interface with something proprietary (source code is a kind of documentation). When you have to dump exported symbols and guess what each one could do and how to call it. Luckily I'm too old (and hopefully wise) for it now; sticking to open source. -
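One of the rants above exports a 4 GB SD card image to two 9 GB text files just to be able to search a hex dump. A minimal sketch of a faster approach in Python, searching the raw image directly with mmap (the file path and byte pattern are hypothetical placeholders, not from the original post):

```python
import mmap

def find_pattern(image_path, pattern):
    """Return the byte offsets of every occurrence of `pattern` in a raw image.

    mmap lets the OS page the file in on demand, so a multi-gigabyte image
    can be scanned without loading it into memory or converting it to a
    text hex dump first.
    """
    offsets = []
    with open(image_path, "rb") as f:
        with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
            pos = mm.find(pattern)
            while pos != -1:
                offsets.append(pos)
                pos = mm.find(pattern, pos + 1)
    return offsets
```

Searching the binary directly also keeps the results meaningful: the positions returned are real byte offsets into the filesystem image, which a text conversion obscures.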
https://devrant.com/search?term=dump
CC-MAIN-2020-40
refinedweb
7,137
81.63
Hello All, I'm working on my first Python plot in my programming class this week. My assignment is to write a program to plot data contained in csv format. Here is the data:

"Bowl number", "Date", "Bowl location", "Winner", "Number wings eaten"
"I", "1/29/1993", "Wyndham Franklin Plaza Hotel", "Carmen 'The Beast From the East' Cordero", 100
"II","1/28/1994", "The Main Event", "Kevin 'Heavy Keavy' O"Donnell", 127
"III", "1/27/1995", "Club Egypt", "Kevin 'Heavy Keavy' O"Donnell""", 133
"IV","1/26/1996","Electric Factory","Glen 'Fluffmaster' Garrison", 155
"V", "1/24/1997", "Electric Factory", "Eric 'Gentleman E' Behl", 120
"VI","1/23/1998","Spectrum","Mark 'Big Rig' Vogeding", 164
"VII", "1/29/1999", "Spectrum", "Bill 'El Wingador' Simmons", 113
"VIII", "1/28/2000", "First Union Center","'Tollman Joe' Paul", 90
"IX","1/26/2001","First Union Center", "Bill 'El Wingador' Simmons", 137
"X", "2/1/2002", "First Union Center", "Bill 'El Wingador' Simmons", 135
"XI","1/2/2003","First Union Center", "Bill 'El Wingador' Simmons",154
"XII", "1/30/2004", "Wachovia Center", "Sonya 'The Black Widow' Thomas", 167
"XIII", "2/4/2005", "Wachovia Center", "Bill 'El Wingador' Simmons", 162
"XIV", "2/3/2006", "Wachovia Center", "Joey Chestnut", 173
"XV","2/2/2007","Wachovia Center", "Joey Chestnut",182
"XVI", "2/1/2008", "Wachovia Center", "Joey Chestnut",241
"XVII", "1/30/2009", "Wachovia Center", "Jonathan 'Super' Squibb",203
"XVIII", "2/5/2010", "Wachovia Center", "Cassandra 'Wings' Gillig", 238
"XIX", "2/4/2011", "Wells Fargo Center", "Jonathan 'Super' Squibb", 255

My x axis has to look like this: "I ('93)", "II ('94)", etc. My y axis is just the number of wings. I have to keep my program laid out like it is for the assignment to be correct, but I'm getting an error "ValueError: x and y must have same first dimension". There are a few other kinks in the program that I can't figure out.
This is my program so far:

import matplotlib.pyplot as plot
import numpy as np

def make_line_plot(wing_bowl, wings_eaten):
    fig = plot.figure(figsize=(20, 5))
    x_label_pos = range(len(wing_bowl))
    plot.plot(x_label_pos, wings_eaten, color='blue', marker='o')
    plot.title('Wing Bowl Contest Results')
    plot.xlabel("Wing Bowl")
    plot.ylabel('Number of wings eaten')
    plot.grid('True')
    plot.xticks(x_label_pos, wing_bowl)
    plot.yticks(range(0, 300, 50))
    plot.autoscale(enable=True, axis='x', tight=True)
    fig.autofmt_xdate()
    plot.savefig('wing-bowl-results.pdf')

def main():
    input_file = open('wing-bowl-data.csv')
    input_file.readline()
    lines = input_file.readlines()
    wingbowl = []
    wings = []
    for line in lines:
        split_line = line.strip().split(',')
        wingbowl += split_line[0]
        wings += [float(split_line[4])]
    make_line_plot(wingbowl, wings)

main()

Any help would be greatly appreciated! Thanks, Joey
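The "x and y must have same first dimension" error is most likely caused by `wingbowl += split_line[0]`: using `+=` on a list with a string on the right-hand side extends the list with the string's individual characters, so `wingbowl` ends up several times longer than `wings`. A minimal demonstration of the difference (using a simplified sample row based on the data above):

```python
# A simplified CSV row, split the same way the program splits it.
row = '"XIX","2/4/2011","Wells Fargo Center","Jonathan Squibb",255'.split(',')

bowls_chars = []
bowls_chars += row[0]      # += iterates the string: one list entry per character
bowls_labels = []
bowls_labels += [row[0]]   # wrapping in a list appends the whole field

print(len(bowls_chars))    # 5  (the five characters of '"XIX"')
print(len(bowls_labels))   # 1
```

So changing the line to `wingbowl += [split_line[0]]` (or `wingbowl.append(split_line[0])`) should make both lists the same length. Building the "I ('93)" style labels would then be a matter of combining `split_line[0]` with the year parsed out of `split_line[1]`.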
https://www.daniweb.com/software-development/python/threads/389978/plotting-help
CC-MAIN-2015-35
I want to allow a merge of a pull request only under a certain condition. I have written a ScriptRunner script for that:

import com.atlassian.sal.api.component.ComponentLocator
import com.atlassian.bitbucket.user.UserService

def userService = ComponentLocator.getComponent(UserService)

if (mergeRequest.pullRequest.pathsMatch("glob:**/spec/**")) {
    mergeRequest.pullRequest.reviewers.findAll { it.approved }.any {
        userService.isUserInGroup(it.user, "electronics-lead")
    }
} else {
    return true;
}

1) This script should apply a special condition only when files in a folder named "spec" were touched, modified or deleted. Note the else case with the `return true`. When the folder was touched, the additional condition kicks in; otherwise other scripts will run. Is my approach above correct?

2) Where do I add this script? I see the following options: a) "Script Merge Check" under "Require a number of approvers"; b) "Script Event Handlers" under "Auto merge of pull request"; here we already have a script.

You can do this using the following script in a custom merge check. Go to Admin -> Script Merge Checks -> Custom merge check and add the following script:

import com.atlassian.bitbucket.user.UserService
import com.atlassian.sal.api.component.ComponentLocator

def userService = ComponentLocator.getComponent(UserService)

if (mergeRequest.pullRequest.pathsMatch("glob:**/spec/**")) {
    def group = "electronics-lead"
    def approved = mergeRequest.pullRequest.reviewers.any {
        it.approved && userService.isUserInGroup(it.user, group)
    }
    if (!approved) {
        mergeRequest.veto("Pull request not approved",
            "Changes to spec folder must be approved by reviewer from $group group")
    }
}

Hope this helps. You can't do this from the repository settings menu. You have to set it up from the system admin section of ScriptRunner due to security issues. You can read about them here.
Do you have appropriate access to do this? We have something in the pipeline that will make them available to project/repo admins in the future. When I click on "Custom merge check" nothing pops up, no editing field. There seems to be a bug... I don't know. I get the following error: "NetworkError: 500 Internal Server Error -" What version of Stash/Bitbucket are you using? 4.14.4 What version of ScriptRunner are you using? 4.3.14 4.3.14 is not compatible with Bitbucket Server 4.14.4. You should upgrade to ScriptRunner 5.0.6. You can check version compatibility here: Let us know how you get on after that. How can I perform the installation? There is no automatic upgrade button in Bitbucket, so I have to download the *.jar file. How do I install this JAR file? There are instructions here for manually upgrading this: You can click download on the version here to get the JAR file. Okay, I'll upgrade now. What a struggle... Anyway, I added my script under the admin panel and "Script Merge Checks" and specified the repository. However, I don't understand the "Custom Merge Check" button under the project settings of the repository. What can I do with this? Can you take a screenshot of where you have added the script, please? That's all you should need to do. I was just saying previously that the custom merge check item is not available from the repository settings script merge checks menu. It's only available to global admins. The thing is, only the administrator can create these kinds of scripts. I thought I could create the script and project admins could enable it on the repositories they like. In the Script Merge Checks of the repository I can only choose between two scripts, which I find a little bit strange. This is for security reasons, as I linked you to here. You can write a script (though not an inline script) that project admins can enable, but it takes a bit of work. You can take a look at the scripts already available by unzipping the ScriptRunner JAR file; you can find one under: com/onresolve/scriptrunner/canned/bitbucket/mergechecks/RequiredApproversMergeCheck.groovy The @PerRepoCheck annotation enables this. The file then needs to be placed under one of your script roots, explained here. We're working on exactly what you want in SRBITB-233, so keep an eye on that. Could you please clearly state what your requirement is? You have mentioned that when files in the folder "spec" are modified you want to return true. Does this mean you want to allow or prevent the merge? You've not mentioned what effect your requirement for approval by one reviewer from the "electronics-lead" group has on preventing/allowing the merge. I'll be able to give a more specific answer after that, along with some code to help you with this. When a file in the folder named "spec" was modified, the merge should only be allowed when a reviewer from the group "electronics-lead" has approved the pull request; otherwise no merge should be possible. Does this clarify?
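The rule being negotiated in this thread can also be modelled independently of the Bitbucket/ScriptRunner API, which makes it easy to test the decision logic on its own. Below is a hedged Python sketch of that rule only; the function name and the (group, approved) reviewer tuples are hypothetical stand-ins, not ScriptRunner's objects, and `fnmatch`'s `*` is looser than the `glob:**` pattern used above (it matches across `/`):

```python
from fnmatch import fnmatch

def merge_allowed(changed_paths, reviewers, required_group="electronics-lead"):
    """Model of the rule: if any changed path lies under a 'spec' folder,
    require an approval from a reviewer in `required_group`;
    otherwise allow the merge unconditionally.

    `reviewers` is a list of (group, approved) pairs.
    """
    touches_spec = any(fnmatch(p, "*/spec/*") for p in changed_paths)
    if not touches_spec:
        return True
    return any(approved and group == required_group
               for group, approved in reviewers)
```

Writing the condition this way makes the two questions in the thread explicit: the merge is vetoed (not allowed) when the spec folder is touched without a qualifying approval, and allowed in every other case.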
https://community.atlassian.com/t5/Bitbucket-questions/merge-pullrequest-under-certain-condition/qaq-p/594485
CC-MAIN-2018-51
IDM resumer is a simple, yet reliable tool that allows you to easily manipulate file transfers in Internet Download Manager. The program allows you to save the current download progress and resume the transfer at a later time or from a different computer. It is a suitable tool when you cannot wait for a download to finish. Reliable download assistant IDM resumer is a lightweight tool that you can use with Internet Download Manager, in order to safely interrupt file transfers off the Internet. The program can save the current progress, by identifying the segments of a file that are already stored on your computer. It can move them all in a single folder, that you can easily transfer to another station. IDM resumer does not require installation, so you may carry it around on a USB drive or similar memory device, along with the downloaded file segments. You may copy them onto a different system and resume the transfer in Internet Download Manager. Easily manage large file downloads IDM resumer is a useful solution for cases when you need to download large files, but you do not have the time to wait for the transfer to finish. In case you do not wish to lose the current progress, you may simply pause the process in Internet Download Manager, open IDM resumer and save the current download. The interrupted download is saved in a designated directory in the IDM resumer folder. Open the program again when you are ready to continue the process, identify the desired entry and click the Resume button. Quickly complete file transfers Once you clicked the Resume button in IDM resumer, the data is copied back into Internet Download Manager and you may easily start off the transfer from the point where you paused it. The program allows you to save the file download progress, in order to reduce the time required for the migration, the next time. IDM Resumer 2.9.1.3 Crack + Download X64 IDM resumer is a reliable and easy to use file manager for Internet Download Manager. 
The tool allows you to resume and save current downloads, in case you want to pause them for some reason. The software requires no installation, so you may use it anywhere. Download IDM resumer and take the chance to use it for free.

Q: jQuery ajax() textarea issue. I am doing an ajax request using $.ajax() for a textarea. But the textarea seems not to accept it. If I input something in the textarea and press the button, it works fine. How can I make it work? My textarea: my button: Add My AJAX request:

$(document).ready(function() {
    $('#addCity').click(function() {
        var addCity = $('#addCity').html();
        var state = $('#state').val();
        var city = $('#city').val();
        var location = $('#locationData').val();
        $.ajax({
            type: "POST",
            data: {location: location, state: state, city: city},
            url: "addCity.php",
            success: function(msg) {
                alert(msg);
            }
        })
    })
});

It must be something to do with the textarea input. I'm sure it is something easy, but I can't figure out what. A: You are assigning a value to the location property

IDM Resumer 2.9.1.3 Crack + Torrent (Activation Code) Free For PC

KEYMACRO enables you to control your favorite Windows shortcuts and commands with a mouse click. In other words, you can set a command to run whenever you click on a certain point. This way you can save time, and run certain tasks with just a few mouse clicks. Accessible Commands KEYMACRO offers a huge variety of commands, including record and playback shortcuts. Additionally, it supports many popular Windows shortcuts. This includes the ability to record a web address and paste the text into the address bar of your browser. Free Keyboard Shortcut Creator KEYMACRO Free Keyboard Shortcut Creator offers the power of KEYMACRO, but without the need to purchase it. It will automatically detect your current settings and create a keyboard shortcut for you. Just paste in the desired text and click the Generate button. The resulting shortcut is saved in your default clipboard.
IDM Resumer 2.9.1.3 With Serial Key
This is a free download manager for Windows that allows you to resume file transfers and to store and manage data during the download process.
What's New in the IDM Resumer?
The downloading part is safe and accurate. It supports batch mode.
This is a useful tool when you need to download large files. If you want to interrupt a download and later resume it from the point where you paused it, IDM Resumer can help.
System Requirements:
Windows XP Home
Windows XP Professional
Windows Vista Home Premium
Windows Vista Business
Windows Vista Home Basic
Windows Vista Home Premium 64-bit
Windows 7 Home Premium
Windows 7 Professional
Windows 7 Ultimate
Windows 8 Pro 64-bit
Windows 8 Home 64-bit
Windows 8 Pro Tablet (Hardware-enabled)
Windows 8 Home Tablet (Hardware-enabled)
Windows 8 Tablet (Non-Hardware-enabled)
Windows 8 Pro Tablet (Non
https://epkrd.com/idm-resumer-2-9-1-3-crack-free-download-2022/
Code should execute sequentially if run in a Jupyter notebook
- See the set up page to install Jupyter, Python and all necessary libraries
- Please direct feedback to contact@quantecon.org or the discourse forum

Co-authors: Chase Coleman

In addition to what's in Anaconda, this lecture will need the following libraries:

!pip install --upgrade quantecon
!pip install interpolation

Overview¶
This lecture describes a statistical decision problem encountered by Milton Friedman and W. Allen Wallis during World War II, when they were analysts at the U.S. Government's Statistical Research Group at Columbia University. This problem led Abraham Wald [Wal47] to formulate sequential analysis, an approach to statistical decision problems intimately related to dynamic programming. In this lecture, we apply dynamic programming algorithms to Friedman and Wallis and Wald's problem.

Key ideas in play will be:
- Bayes' law
- Dynamic programming
- Type I and type II statistical errors
- Abraham Wald's sequential probability ratio test

We'll begin with some imports

import numpy as np
import matplotlib.pyplot as plt
from numba import njit, prange, vectorize
from interpolation import interp
from math import gamma

Origin of the Problem¶
On pages 137-139 of his 1998 book Two Lucky People with Rose Friedman [FF98], Milton Friedman described a problem presented to him and Allen Wallis during World War II, when they worked at the US Government's Statistical Research Group at Columbia University. Let's listen to Milton Friedman tell us what happened: a seasoned ordnance officer could often see, part way through a sequence of tests, that an experiment need not be completed, either because the new method is obviously inferior or because it is obviously superior beyond what was hoped for $ \ldots $.

Friedman and Wallis struggled with the problem but, after realizing that they were not able to solve it, described the problem to Abraham Wald. That started Wald on the path that led him to Sequential Analysis [Wal47]. We'll formulate the problem using dynamic programming.

A Dynamic Programming Approach¶
The following presentation of the problem closely follows Dimitri Bertsekas's treatment in Dynamic Programming and Stochastic Control [Ber75].
A decision-maker observes IID draws of a random variable $ z $. He (or she) wants to know which of two probability distributions $ f_0 $ or $ f_1 $ governs $ z $. After a number of draws, also to be determined, he makes a decision as to which of the distributions is generating the draws he observes.

He starts with the prior

$$ \pi_{-1} = \mathbb P \{ f = f_0 \mid \textrm{ no observations} \} \in (0, 1) $$

After observing $ k+1 $ observations $ z_k, z_{k-1}, \ldots, z_0 $, he updates this value to

$$ \pi_k = \mathbb P \{ f = f_0 \mid z_k, z_{k-1}, \ldots, z_0 \} $$

which is calculated recursively by applying Bayes' law:

$$ \pi_{k+1} = \frac{ \pi_k f_0(z_{k+1})}{ \pi_k f_0(z_{k+1}) + (1-\pi_k) f_1 (z_{k+1}) }, \quad k = -1, 0, 1, \ldots $$

After observing $ z_k, z_{k-1}, \ldots, z_0 $, the decision-maker believes that $ z_{k+1} $ has probability distribution

$$ f_{{\pi}_k} (v) = \pi_k f_0(v) + (1-\pi_k) f_1 (v) $$

This is a mixture of the distributions $ f_0 $ and $ f_1 $, with the weight on $ f_0 $ being the posterior probability that $ f = f_0 $ [1].

To help illustrate this kind of distribution, let's inspect some mixtures of beta distributions. The density of a beta probability distribution with parameters $ a $ and $ b $ is

$$ f(z; a, b) = \frac{\Gamma(a+b) z^{a-1} (1-z)^{b-1}}{\Gamma(a) \Gamma(b)} \quad \text{where} \quad \Gamma(t) := \int_{0}^{\infty} x^{t-1} e^{-x} dx $$

The next figure shows two beta distributions in the top panel.
The bottom panel presents mixtures of these distributions, with various mixing probabilities $ \pi_k $

def beta_function_factory(a, b):

    @vectorize
    def p(x):
        r = gamma(a + b) / (gamma(a) * gamma(b))
        return r * x**(a-1) * (1 - x)**(b-1)

    @njit
    def p_rvs():
        return np.random.beta(a, b)

    return p, p_rvs


f0, _ = beta_function_factory(1, 1)
f1, _ = beta_function_factory(9, 9)

grid = np.linspace(0, 1, 50)

fig, axes = plt.subplots(2, figsize=(10, 8))

axes[0].set_title("Original Distributions")
axes[0].plot(grid, f0(grid), lw=2, label="$f_0$")
axes[0].plot(grid, f1(grid), lw=2, label="$f_1$")

axes[1].set_title("Mixtures")
for π in 0.25, 0.5, 0.75:
    y = π * f0(grid) + (1 - π) * f1(grid)
    axes[1].plot(grid, y, lw=2, label=f"$\pi_k$ = {π}")

for ax in axes:
    ax.legend()
    ax.set(xlabel="$z$ values", ylabel="probability of $z_k$")

plt.tight_layout()
plt.show()

Losses and Costs¶
After observing $ z_k, z_{k-1}, \ldots, z_0 $, the decision-maker chooses among three distinct actions:
- He decides that $ f = f_0 $ and draws no more $ z $'s
- He decides that $ f = f_1 $ and draws no more $ z $'s
- He postpones deciding now and instead chooses to draw a $ z_{k+1} $

Associated with these three actions, the decision-maker can suffer three kinds of losses:
- A loss $ L_0 $ if he decides $ f = f_0 $ when actually $ f=f_1 $
- A loss $ L_1 $ if he decides $ f = f_1 $ when actually $ f=f_0 $
- A cost $ c $ if he postpones deciding and chooses instead to draw another $ z $

Digression on Type I and Type II Errors¶
If we regard $ f=f_0 $ as a null hypothesis and $ f=f_1 $ as an alternative hypothesis, then $ L_1 $ and $ L_0 $ are losses associated with two types of statistical errors
- a type I error is an incorrect rejection of a true null hypothesis (a "false positive")
- a type II error is a failure to reject a false null hypothesis (a "false negative")

So when we treat $ f=f_0 $ as the null hypothesis
- We can think of $ L_1 $ as the loss associated with a type I error.
- We can think of $ L_0 $ as the loss associated with a type II error.

Intuition¶
Let's try to guess what an optimal decision rule might look like before we go further. Suppose at some given point in time that $ \pi $ is close to 1. Then our prior beliefs and the evidence so far point strongly to $ f = f_0 $. If, on the other hand, $ \pi $ is close to 0, then $ f = f_1 $ is strongly favored. Finally, if $ \pi $ is in the middle of the interval $ [0, 1] $, then we have little information in either direction.

This reasoning suggests a decision rule such as the one shown in the figure. As we'll see, this is indeed the correct form of the decision rule. The key problem is to determine the threshold values $ \alpha, \beta $, which will depend on the parameters listed above. You might like to pause at this point and try to predict the impact of a parameter such as $ c $ or $ L_0 $ on $ \alpha $ or $ \beta $.

A Bellman Equation¶
Let $ J(\pi) $ be the total loss for a decision-maker with current belief $ \pi $ who chooses optimally. With some thought, you will agree that $ J $ should satisfy the Bellman equation

$$ J(\pi) = \min \left\{ (1-\pi) L_0, \; \pi L_1, \; c + \mathbb E [ J (\pi') ] \right\} \tag{1} $$

where $ \pi' $ is the random variable defined by

$$ \pi' = \kappa(z', \pi) = \frac{ \pi f_0(z')}{ \pi f_0(z') + (1-\pi) f_1 (z') } $$

when $ \pi $ is fixed and $ z' $ is drawn from the current best guess, which is the distribution $ f $ defined by

$$ f_{\pi}(v) = \pi f_0(v) + (1-\pi) f_1 (v) $$

In the Bellman equation, minimization is over three actions:
- Accept the hypothesis that $ f = f_0 $
- Accept the hypothesis that $ f = f_1 $
- Postpone deciding and draw again

We can represent the Bellman equation as

$$ J(\pi) = \min \left\{ (1-\pi) L_0, \; \pi L_1, \; h(\pi) \right\} \tag{2} $$

where $ \pi \in [0,1] $ and
- $ (1-\pi) L_0 $ is the expected loss associated with accepting $ f_0 $ (i.e., the cost of making a type II error).
- $ \pi L_1 $ is the expected loss associated with accepting $ f_1 $ (i.e., the cost of making a type I error).
- $ h(\pi) := c + \mathbb E [J(\pi')] $ is the continuation value; i.e., the expected cost associated with drawing one more $ z $.

The optimal decision rule is characterized by a pair of numbers $ (\alpha, \beta) \in (0,1) \times (0,1) $ that satisfy

$$ (1- \pi) L_0 < \min \{ \pi L_1, c + \mathbb E [J(\pi')] \} \textrm { if } \pi \geq \alpha $$

and

$$ \pi L_1 < \min \{ (1-\pi) L_0, c + \mathbb E [J(\pi')] \} \textrm { if } \pi \leq \beta $$

The optimal decision rule is then

$$ \begin{aligned} \textrm { accept } f=f_0 \textrm{ if } \pi \geq \alpha \\ \textrm { accept } f=f_1 \textrm{ if } \pi \leq \beta \\ \textrm { draw another } z \textrm{ if } \beta \leq \pi \leq \alpha \end{aligned} $$

Our aim is to compute the value function $ J $, and from it the associated cutoffs $ \alpha $ and $ \beta $. To make our computations simpler, using (2), we can write the continuation value $ h(\pi) $ as

$$ \begin{aligned} h(\pi) &= c + \mathbb E [J(\pi')] \\ &= c + \mathbb E_{\pi'} \min \{ (1 - \pi') L_0, \pi' L_1, h(\pi') \} \\ &= c + \int \min \{ (1 - \kappa(z', \pi) ) L_0, \kappa(z', \pi) L_1, h(\kappa(z', \pi) ) \} f_\pi (z') dz' \end{aligned} \tag{3} $$

The equality

$$ h(\pi) = c + \int \min \{ (1 - \kappa(z', \pi) ) L_0, \kappa(z', \pi) L_1, h(\kappa(z', \pi) ) \} f_\pi (z') dz' \tag{4} $$

can be understood as a functional equation in which $ h $ is the unknown. Using the functional equation (4) for the continuation value, we can back out optimal choices using the RHS of (2). This functional equation can be solved by taking an initial guess and iterating to find the fixed point.
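Before iterating on $ h $, it is worth seeing the Bayes map $ \kappa(z', \pi) $ that appears inside the integral in action. Here is a minimal standalone sketch; the helper names and the particular beta parameters are illustrative choices, not the lecture's calibration:

```python
from math import gamma

def beta_pdf(z, a, b):
    """Beta(a, b) density evaluated at z."""
    return gamma(a + b) / (gamma(a) * gamma(b)) * z**(a - 1) * (1 - z)**(b - 1)

def update(π, z, f0, f1):
    """One step of Bayes' law: new probability that f = f0 after seeing z."""
    π_f0 = π * f0(z)
    return π_f0 / (π_f0 + (1 - π) * f1(z))

f0 = lambda z: beta_pdf(z, 1, 1)   # uniform on [0, 1]
f1 = lambda z: beta_pdf(z, 9, 9)   # concentrated near 0.5

π = 0.5                   # flat prior π_{-1}
for z in 0.9, 0.8, 0.95:  # draws far from 0.5, hence more likely under f0
    π = update(π, z, f0, f1)
print(π)                  # posterior probability on f0 moves close to 1
```

Draws that are likely under $ f_0 $ push the posterior toward 1; this is exactly the force that eventually pushes $ \pi $ past one of the cutoffs.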
In other words, we iterate with an operator $ Q $, where

$$ Q h(\pi) = c + \int \min \{ (1 - \kappa(z', \pi) ) L_0, \kappa(z', \pi) L_1, h(\kappa(z', \pi) ) \} f_\pi (z') dz' $$

class WaldFriedman:

    def __init__(self,
                 c=1.25,        # Cost of another draw
                 a0=1,
                 b0=1,
                 a1=3,
                 b1=1.2,
                 L0=25,         # Cost of selecting f0 when f1 is true
                 L1=25,         # Cost of selecting f1 when f0 is true
                 π_grid_size=200,
                 mc_size=1000):

        self.c, self.π_grid_size = c, π_grid_size
        self.L0, self.L1 = L0, L1
        self.π_grid = np.linspace(0, 1, π_grid_size)
        self.mc_size = mc_size

        # Set up distributions
        self.f0, self.f0_rvs = beta_function_factory(a0, b0)
        self.f1, self.f1_rvs = beta_function_factory(a1, b1)
        self.z0 = np.random.beta(a0, b0, mc_size)
        self.z1 = np.random.beta(a1, b1, mc_size)

As in the optimal growth lecture, to approximate a continuous value function
- We iterate at a finite grid of possible values of $ \pi $.
- When we evaluate $ \mathbb E[J(\pi')] $ between grid points, we use linear interpolation.

The function operator_factory returns the operator Q

def operator_factory(wf, parallel_flag=True):

    """
    Returns a jitted version of the Q operator.
    * wf is an instance of the WaldFriedman class
    """

    c, π_grid = wf.c, wf.π_grid
    L0, L1 = wf.L0, wf.L1
    f0, f1 = wf.f0, wf.f1
    z0, z1 = wf.z0, wf.z1
    mc_size = wf.mc_size

    @njit
    def κ(z, π):
        """
        Updates π using Bayes' rule and the current observation z
        (the recursion for π_{k+1} given above).
        """
        π_f0 = π * f0(z)
        return π_f0 / (π_f0 + (1 - π) * f1(z))

    @njit(parallel=parallel_flag)
    def Q(h):

        h_new = np.empty_like(π_grid)
        h_func = lambda p: interp(π_grid, h, p)

        for i in prange(len(π_grid)):
            π = π_grid[i]

            # Find the expected value of J by integrating over z
            integral_f0, integral_f1 = 0, 0
            for m in range(mc_size):
                π_0 = κ(z0[m], π)  # Draw z from f0 and update π
                integral_f0 += min((1 - π_0) * L0, π_0 * L1, h_func(π_0))

                π_1 = κ(z1[m], π)  # Draw z from f1 and update π
                integral_f1 += min((1 - π_1) * L0, π_1 * L1, h_func(π_1))

            integral = (π * integral_f0 + (1 - π) * integral_f1) / mc_size

            h_new[i] = c + integral

        return h_new

    return Q

To solve the model, we will iterate using Q to find the fixed point

def solve_model(wf,
                use_parallel=True,
                tol=1e-4,
                max_iter=1000,
                verbose=True,
                print_skip=25):

    """
    Compute the continuation value function

    * wf is an instance of WaldFriedman
    """

    Q = operator_factory(wf, parallel_flag=use_parallel)

    # Set up loop
    h = np.zeros(len(wf.π_grid))
    i = 0
    error = tol + 1

    while i < max_iter and error > tol:
        h_new = Q(h)
        error = np.max(np.abs(h - h_new))
        i += 1
        if verbose and i % print_skip == 0:
            print(f"Error at iteration {i} is {error}.")
        h = h_new

    if i == max_iter:
        print("Failed to converge!")

    if verbose and i < max_iter:
        print(f"\nConverged in {i} iterations.")

    return h_new

wf = WaldFriedman()

fig, ax = plt.subplots(figsize=(10, 6))
ax.plot(wf.f0(wf.π_grid), label="$f_0$")
ax.plot(wf.f1(wf.π_grid), label="$f_1$")
ax.set(ylabel="probability of $z_k$", xlabel="$k$", title="Distributions")
ax.legend()
plt.show()

h_star = solve_model(wf)    # Solve the model

Error at iteration 25 is 8.802680267905316e-05.

Converged in 25 iterations.
We will also set up a function to compute the cutoffs $ \alpha $ and $ \beta $ and plot these on our value function plot

def find_cutoff_rule(wf, h):

    """
    This function takes a continuation value function and returns the
    corresponding cutoffs where you transition between continuing and
    choosing a specific model
    """

    π_grid = wf.π_grid
    L0, L1 = wf.L0, wf.L1

    # Evaluate cost at all points on grid for choosing a model
    payoff_f0 = (1 - π_grid) * L0
    payoff_f1 = π_grid * L1

    # The cutoff points can be found by differencing these costs with
    # the Bellman equation (J is always less than or equal to p_c_i)
    β = π_grid[np.searchsorted(payoff_f1 - np.minimum(h, payoff_f0), 1e-10) - 1]
    α = π_grid[np.searchsorted(np.minimum(h, payoff_f1) - payoff_f0, 1e-10) - 1]

    return (β, α)

β, α = find_cutoff_rule(wf, h_star)
cost_L0 = (1 - wf.π_grid) * wf.L0
cost_L1 = wf.π_grid * wf.L1

fig, ax = plt.subplots(figsize=(10, 6))

ax.plot(wf.π_grid, h_star, label='continuation value')
ax.plot(wf.π_grid, cost_L1, label='choose f1')
ax.plot(wf.π_grid, cost_L0, label='choose f0')
ax.plot(wf.π_grid,
        np.amin(np.column_stack([h_star, cost_L0, cost_L1]), axis=1),
        lw=15, alpha=0.1, color='b', label='minimum cost')

ax.annotate(r"$\beta$", xy=(β + 0.01, 0.5), fontsize=14)
ax.annotate(r"$\alpha$", xy=(α + 0.01, 0.5), fontsize=14)

plt.vlines(β, 0, β * wf.L0, linestyle="--")
plt.vlines(α, 0, (1 - α) * wf.L1, linestyle="--")

ax.set(xlim=(0, 1), ylim=(0, 0.5 * max(wf.L0, wf.L1)), ylabel="cost",
       xlabel="$\pi$", title="Value function")

plt.legend(borderpad=1.1)
plt.show()

The value function equals $ \pi L_1 $ for $ \pi \leq \beta $, and $ (1-\pi) L_0 $ for $ \pi \geq \alpha $. The slopes of the two linear pieces of the value function are determined by $ L_1 $ and $ - L_0 $. The value function is smooth in the interior region, where the posterior probability assigned to $ f_0 $ is in the indecisive region $ \pi \in (\beta, \alpha) $.
The decision-maker continues to sample until the probability that he attaches to model $ f_0 $ falls below $ \beta $ or rises above $ \alpha $.

Simulations¶
The next figure shows the outcomes of 500 simulations of the decision process. On the left is a histogram of the stopping times, which equal the number of draws of $ z_k $ required to make a decision. The average number of draws is around 6.6. On the right is the fraction of correct decisions at the stopping time. In this case, the decision-maker is correct 80% of the time

def simulate(wf, true_dist, h_star, π_0=0.5):

    """
    This function takes an initial condition and simulates until it
    stops (when a decision is made)
    """

    f0, f1 = wf.f0, wf.f1
    f0_rvs, f1_rvs = wf.f0_rvs, wf.f1_rvs
    π_grid = wf.π_grid

    if true_dist == "f0":
        f, f_rvs = wf.f0, wf.f0_rvs
    elif true_dist == "f1":
        f, f_rvs = wf.f1, wf.f1_rvs

    # Find cutoffs
    β, α = find_cutoff_rule(wf, h_star)

    # Initialize a couple of useful variables
    decision_made = False
    π = π_0
    t = 0

    while decision_made is False:
        # Maybe should specify which distribution is correct one so that
        # the draws come from the "right" distribution
        z = f_rvs()
        t = t + 1
        # Update π with Bayes' rule (the mapping κ from above)
        π = π * f0(z) / (π * f0(z) + (1 - π) * f1(z))
        if π < β:
            decision_made = True
            decision = 1
        elif π > α:
            decision_made = True
            decision = 0

    if true_dist == "f0":
        correct = (decision == 0)
    elif true_dist == "f1":
        correct = (decision == 1)

    return correct, π, t

def stopping_dist(wf, h_star, ndraws=250, true_dist="f0"):

    """
    Simulates repeatedly to get distributions of time needed to make a
    decision and how often they are correct
    """

    tdist = np.empty(ndraws, int)
    cdist = np.empty(ndraws, bool)

    for i in range(ndraws):
        correct, π, t = simulate(wf, true_dist, h_star)
        tdist[i] = t
        cdist[i] = correct

    return cdist, tdist

def simulation_plot(wf):
    h_star = solve_model(wf)
    ndraws = 500
    cdist, tdist = stopping_dist(wf, h_star, ndraws)

    fig, ax = plt.subplots(1, 2, figsize=(16, 5))

    ax[0].hist(tdist, bins=np.max(tdist))
    ax[0].set_title(f"Stopping times over {ndraws} replications")
    ax[0].set(xlabel="time", ylabel="number of stops")
    ax[0].annotate(f"mean = {np.mean(tdist)}",
                   xy=(max(tdist) / 2,
                       max(np.histogram(tdist, bins=max(tdist))[0]) / 2))

    ax[1].hist(cdist.astype(int), bins=2)
    ax[1].set_title(f"Correct decisions over {ndraws} replications")
    ax[1].annotate(f"% correct = {np.mean(cdist)}",
                   xy=(0.05, ndraws / 2))

    plt.show()

simulation_plot(wf)

Error at iteration 25 is 8.802680267905316e-05.

Converged in 25 iterations.

wf = WaldFriedman(c=2.5)
simulation_plot(wf)

Converged in 14 iterations.

The increased cost per draw has induced the decision-maker to take fewer draws before deciding. Because he decides with less information, the fraction of the time he is correct drops. This leads to a higher expected loss when he puts equal weight on both models.

A Notebook Implementation¶
To facilitate comparative statics, we provide a Jupyter notebook that generates the same plots, but with sliders. With these sliders, you can adjust parameters and immediately observe
- effects on the smoothness of the value function in the indecisive middle range as we increase the number of grid points in the piecewise linear approximation.
- effects of different settings for the cost parameters $ L_0, L_1, c $, the parameters of the two beta distributions $ f_0 $ and $ f_1 $, and the number of points and linear functions $ m $ to use in the piecewise continuous approximation to the value function.
- various simulations from $ f_0 $ and associated distributions of waiting times to making a decision.
- associated histograms of correct and incorrect decisions.

Comparison with Neyman-Pearson Formulation¶
For several reasons, it is useful to compare Wald's sequential formulation [Wal47] with the classical Neyman-Pearson approach. Two distinctive features of the sequential setup are:
- The cutoffs $ \beta $ and $ \alpha $ characterize decision rules under which the sample size $ n $ is determined as a random variable.
- Laws of large numbers make no appearance in the sequential construction.
In chapter 1 of Sequential Analysis [Wal47], Wald uses the following simple structure to illustrate the main ideas:
- A decision-maker wants to decide which of two distributions $ f_0 $, $ f_1 $ governs an IID random variable $ z $.
- The null hypothesis $ H_0 $ is the statement that $ f_0 $ governs the data.
- The alternative hypothesis $ H_1 $ is the statement that $ f_1 $ governs the data.
- The problem is to devise and analyze a test of hypothesis $ H_0 $ against the alternative hypothesis $ H_1 $ on the basis of a sample of a fixed number $ n $ of independent observations $ z_1, z_2, \ldots, z_n $ of the random variable $ z $.

To quote Abraham Wald,

A test procedure leading to the acceptance or rejection of the [null] hypothesis in question is simply a rule specifying, for each possible sample of size $ n $, whether the [null] hypothesis should be accepted or rejected on the basis of the sample. This may also be expressed as follows: A test procedure is simply a subdivision of the totality of all possible samples of size $ n $ into two mutually exclusive parts, say part 1 and part 2, together with the application of the rule that the [null] hypothesis be accepted if the observed sample is contained in part 2. Part 1 is also called the critical region. Since part 2 is the totality of all samples of size $ n $ which are not included in part 1, part 2 is uniquely determined by part 1. Thus, choosing a test procedure is equivalent to determining a critical region.

Let's listen to Wald longer:

As a basis for choosing among critical regions the following considerations have been advanced by Neyman and Pearson: In accepting or rejecting $ H_0 $ we may commit errors of two kinds. We commit an error of the first kind if we reject $ H_0 $ when it is true; we commit an error of the second kind if we accept $ H_0 $ when $ H_1 $ is true.
After a particular critical region $ W $ has been chosen, the probability of committing an error of the first kind, as well as the probability of committing an error of the second kind, is uniquely determined. The probability of committing an error of the first kind is equal to the probability, determined by the assumption that $ H_0 $ is true, that the observed sample will be included in the critical region $ W $. The probability of committing an error of the second kind is equal to the probability, determined on the assumption that $ H_1 $ is true, that the observed sample will fall outside the critical region $ W $. For any given critical region $ W $ we shall denote the probability of an error of the first kind by $ \alpha $ and the probability of an error of the second kind by $ \beta $.

Let's listen carefully to how Wald applies the law of large numbers to interpret $ \alpha $ and $ \beta $:

The probabilities $ \alpha $ and $ \beta $ have the following important practical interpretation: Suppose that we draw a large number of samples of size $ n $. Let $ M $ be the number of such samples drawn. Suppose that for each of these $ M $ samples we reject $ H_0 $ if the sample is included in $ W $ and accept $ H_0 $ if the sample lies outside $ W $. In this way we make $ M $ statements of rejection or acceptance. Some of these statements will in general be wrong. If $ H_0 $ is true and if $ M $ is large, the probability is nearly $ 1 $ (i.e., it is practically certain) that the proportion of wrong statements (i.e., the number of wrong statements divided by $ M $) will be approximately $ \alpha $. If $ H_1 $ is true, the probability is nearly $ 1 $ that the proportion of wrong statements will be approximately $ \beta $. Thus, we can say that in the long run [here Wald applies the law of large numbers by driving $ M \rightarrow \infty $ (our comment, not Wald's)] the proportion of wrong statements will be $ \alpha $ if $ H_0 $ is true and $ \beta $ if $ H_1 $ is true.
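Wald's frequency interpretation of $ \alpha $ is easy to check by simulation. The toy sketch below uses $ n = 1 $; the uniform null distribution and the interval-shaped critical region are illustrative choices made so that the size is exactly 0.10:

```python
import random

random.seed(42)

# Critical region W for a single observation (n = 1): reject H0 when z
# lands where an alternative density concentrated near 0.5 dominates.
# With f0 uniform on [0, 1], W = (0.45, 0.55) has size α = P{z ∈ W | H0} = 0.10.
def reject(z):
    return 0.45 < z < 0.55

M = 100_000                          # number of repeated samples
wrong = sum(reject(random.random())  # z ~ f0, i.e. H0 is true
            for _ in range(M))
print(wrong / M)                     # ≈ 0.10, as the law of large numbers predicts
```

Driving $ M \rightarrow \infty $ pushes the observed proportion of wrong statements to $ \alpha $ exactly, which is the content of the passage above.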
The quantity $ \alpha $ is called the size of the critical region, and the quantity $ 1-\beta $ is called the power of the critical region. Wald notes that one critical region $ W $ is more desirable than another if it has smaller values of $ \alpha $ and $ \beta $. Although either $ \alpha $ or $ \beta $ can be made arbitrarily small by a proper choice of the critical region $ W $, it is impossible to make both $ \alpha $ and $ \beta $ arbitrarily small for a fixed value of $ n $, i.e., a fixed sample size.

Wald summarizes Neyman and Pearson's setup as follows:

Neyman and Pearson show that a region consisting of all samples $ (z_1, z_2, \ldots, z_n) $ which satisfy the inequality

$$ \frac{ f_1(z_1) \cdots f_1(z_n)}{f_0(z_1) \cdots f_0(z_n)} \geq k $$

is a most powerful critical region for testing the hypothesis $ H_0 $ against the alternative hypothesis $ H_1 $. The term $ k $ on the right side is a constant chosen so that the region will have the required size $ \alpha $.

Wald goes on to discuss Neyman and Pearson's concept of a uniformly most powerful test.

Here is how Wald introduces the notion of a sequential test:

A rule is given for making one of the following three decisions at any stage of the experiment: (1) accept the hypothesis $ H_0 $, (2) reject the hypothesis $ H_0 $, (3) continue the experiment by making an additional observation. On the basis of the first observation, one of these three decisions is made. If the first or second decision is made, the process is terminated. If the third decision is made, a second trial is performed. Again, on the basis of the first two observations, one of the three decisions is made; if the third decision is made, a third trial is performed, and so on.
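In contrast to the fixed-$n$ most powerful region above, Wald's sequential probability ratio test compares a running likelihood ratio against two thresholds as each observation arrives. A minimal sketch follows; the thresholds A and B and the beta densities are illustrative choices, not the lecture's calibration:

```python
from math import gamma

def beta_pdf(z, a, b):
    """Beta(a, b) density at z (same formula as in the lecture)."""
    return gamma(a + b) / (gamma(a) * gamma(b)) * z**(a - 1) * (1 - z)**(b - 1)

f0 = lambda z: beta_pdf(z, 1, 1)   # H0: uniform density
f1 = lambda z: beta_pdf(z, 9, 9)   # H1: density concentrated near 0.5

def sprt(draws, A=20.0, B=0.05):
    """Sequential probability ratio test of H0: f=f0 against H1: f=f1.

    Accumulates the likelihood ratio Λ_n = Π f1(z_i)/f0(z_i) and stops
    as soon as Λ_n >= A (accept H1) or Λ_n <= B (accept H0). A and B
    are illustrative; Wald relates them approximately to the error
    probabilities via A ≈ (1-β)/α and B ≈ β/(1-α).
    """
    Λ = 1.0
    for n, z in enumerate(draws, start=1):
        Λ *= f1(z) / f0(z)
        if Λ >= A:
            return "accept H1", n
        if Λ <= B:
            return "accept H0", n
    return "undecided", len(draws)

print(sprt([0.5] * 10))    # draws that look like f1 → stops after 3 draws
print(sprt([0.05] * 10))   # an extreme tail draw → accepts H0 immediately
```

The sample size at which the test stops is exactly Wald's random $ n $: it is determined by the data rather than fixed in advance.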
https://lectures.quantecon.org/py/wald_friedman.html
However, these message-passing APIs have some significant drawbacks. Most algorithms require considerable design work and modification to optimize them for parallel computing. The differing processor speeds of the various nodes in a cluster can also make it difficult to re-code an algorithm: if the slowest node causes the others to wait for it to complete its portion of an algorithm, then the cluster is not being optimally utilized.

Fortunately, there's another way to allocate tasks in a cluster of disparate processors that does not require you to rethink your algorithms. It's called MOSIX, the Multicomputer Operating System for UnIX. It was developed by Professor Amnon Barak and collaborators at the Hebrew University of Jerusalem in Israel.

MOSIX: What's It Good For?
MOSIX is a set of kernel patches and user programs (for Linux and other UNIX variants) that perform automatic and transparent process migration for clustered and even non-clustered computers. By moving CPU-intensive processes to faster or less-busy cluster nodes or workstations, MOSIX provides load balancing across distributed systems in a fashion similar to that used in multi-processor computers or symmetric multi-processing (SMP) systems. Adaptive management algorithms monitor and respond to uneven distribution of CPU resources. After a new process is created on a node, MOSIX attempts to make the best use of available cycles by reassigning or migrating the process, as necessary, to the best available node. To maximize overall performance, MOSIX continually monitors and reassigns processes as the distribution of processing load on the nodes changes.

MOSIX evenly distributes the load generated by many serial processes, or by forked or threaded processes that don't use shared memory. It's scalable and can support large numbers of cluster nodes and/or workstations. It works on top of TCP and UDP using existing network infrastructure and has minimal impact on other parts of the kernel.
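MOSIX's real placement algorithms are adaptive and run online in the kernel, but the basic idea described above — weigh each node's current load against its processor speed — can be caricatured in a few lines. This is a toy sketch only; the hostnames and the load-per-unit-of-speed ranking are illustrative stand-ins, not MOSIX's actual cost function:

```python
def best_node(nodes):
    """Pick the node with the most spare capacity.

    nodes maps hostname -> (relative CPU speed, current load).
    Ranking by load per unit of speed is a hypothetical stand-in
    for MOSIX's adaptive heuristics.
    """
    return min(nodes, key=lambda name: nodes[name][1] / nodes[name][0])

cluster = {
    "mosix1": (4.50, 3.0),   # fast Pentium II 450, but already busy
    "mosix2": (1.66, 0.5),   # Pentium 166, nearly idle
    "mosix3": (1.33, 0.0),   # Pentium 133, idle
}
print(best_node(cluster))    # an idle slow node can beat a loaded fast one
```

The point of the toy is the trade-off, not the formula: a migration target is chosen by comparing spare capacity, so a slow but idle node can win over a fast but saturated one.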
Moreover, a user-level package is now available that distributes workload among nodes without the overhead of the process migration mechanism. MOSIX also supports a file system called MFS (the MOSIX File System) that allows every node access to the file systems on every other node. (The effect is like using NFS to mount every file system of every node on every other node.) While MOSIX is of little use in an environment where explicit message passing programs are run on dedicated nodes, it’s very effective in a cluster that runs many independent serial simulations or application programs. A computing task can be initiated on any node in a MOSIX cluster and the workload will be distributed among all available nodes. This minimizes run time and is particularly useful in a cluster containing a variety of processors. MOSIX considers both the speed of the processor and the current load on the machine when deciding how to migrate processes. Process migration can be invaluable in a university department or research laboratory that’s accumulated a number of computers of varying ages and speeds. If the department owns one or two beefy servers and many smaller desktops and older servers, MOSIX migrates jobs to run on the best available computer given the other tasks being performed on the cluster. Moreover, MFS provides an easy way to access files and data spread across many machines. MOSIX can also be used on scalable Web servers that require a lot of processing power to service requests. By forming clusters and distributing the processing among all available servers, the client’s request is answered in the shortest possible time (although there is some time, arguably negligible, lost with process migration). Although MOSIX is very good for handling CPU-bound problems, it’s less efficient for I/O-intensive problems. File and network I/O occurs on the node where the task was initiated, so a migrated process must contact its “home” node to perform these operations. 
MOSIX's Direct File-System Access (DFSA) attempts to solve this problem by allowing remote processes to perform most of their I/O and file system operations directly on the node where the process is currently running, instead of having to "phone home." At present, DFSA works only with MFS file systems.

Getting Started with MOSIX

Installing MOSIX involves patching and building a new kernel, compiling and installing user-level programs, and making minor modifications to a number of system files so that certain daemons aren't migrated away. Luckily, the MOSIX distribution contains pre-built Perl scripts that perform these tasks automatically. A list of minimal requirements for MOSIX is provided in the MOSIX README file, along with instructions for manual installation should the Perl scripts fail to do the job.

To demonstrate the features of MOSIX, we're using a cluster of five older computers with different CPU speeds, all running Red Hat 7.2, as shown in Table One.

Table One: MOSIX testbed cluster

  Machine  Hostname  Processor
  1        mosix1    Pentium II 450MHz
  2        mosix2    Pentium 166
  3        mosix3    Pentium 133
  4        mosix4    Pentium 120
  5        mosix5    Pentium 90

We downloaded MOSIX version 1.5.7 and the source code for kernel version 2.4.17. The kernel sources were patched, and a compatible kernel was built on the first machine and then copied onto the other nodes. We used the Perl scripts provided with MOSIX to build and install the user-level programs and to alter system files appropriately.

We chose to use the GRUB boot loader (instead of LILO) on all five systems in the cluster, and we found that the Perl scripts didn't always make the correct modifications to the grub.conf file. The grub.conf files were updated manually. Finally, we used grub-install to update the master boot record on each node. (For more on GRUB, see the April 2002 Guru Guidance column, available online.)

The file /etc/mosix.map was created (and must reside on every node) to map nodes into the MOSIX cluster.
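Each line of /etc/mosix.map maps a range of nodes using three fields: the first MOSIX node number in the range, the IP address or hostname of that node, and the number of nodes with contiguous addresses in the range. A sketch with hypothetical hostnames and addresses:

```
# MOSIX-#  IP/hostname    number-of-nodes
1          hosta          1
2          hostb          1
3          192.168.10.20  4    # nodes 3-6 at .20 through .23
```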
Since our five machines have contiguous IP addresses, only a single entry is needed in the file, as shown in Listing One. If the nodes in your cluster have widely scattered IP addresses, you'll need a line for each node, with the number "1" as the third argument (e.g., 1 hosta 1, 2 hostb 1, etc.).

Listing One: /etc/mosix.map

  # MOSIX CONFIGURATION
  # ===================
  # Each line should contain 3 fields,
  # mapping IP addresses to MOSIX node-numbers:
  # 1) first MOSIX node-number in range.
  # 2) IP address or host name of the above node
  # 3) number of nodes in this range.
  #
  # MOSIX-#  IP/hostname  number-of-nodes
  # ====================================
  1  mosix1  5

With the new kernels in place, the user programs installed, and the system files modified, all systems were rebooted. After rebooting, all computers appeared as nodes in the MOSIX cluster. Node 4, a Pentium 120, suffered a catastrophic disk failure wholly unrelated to the MOSIX installation and was subsequently unavailable for the rest of our tests.

Testing MOSIX's Process Migration

To test automatic process migration, we developed a short C program that wastes CPU time, called time-waster (see Listing Two). This code executes a nested loop and calculates a useless value. After every tenth pass through the outer loop, timing information is printed.

Listing Two: time-waster.c

  #include <stdio.h>
  #include <time.h>
  #include <sys/types.h>

  int main(int argc, char **argv)
  {
      int i, j, elapse = 0, prev_elapse;
      double val;
      time_t ts;

      ts = time((time_t *)NULL);
      for (i = 0; i < 101; i++) {
          for (j = 0; j < 9999999; j++)
              val = (double)(j+1) / (double)(i+1);
          if (!(i%10)) {
              prev_elapse = elapse;
              elapse = (int)(time((time_t *)NULL) - ts);
              printf("i=%d, val=%lg, %d s elapsed, %d s since last print\n",
                     i, val, elapse, elapse-prev_elapse);
          }
      }
      return 0;
  }

Figure One shows the results of compiling and running this code on mosix1, the fastest box in our cluster.
The program takes approximately 81 seconds to run on mosix1, with about 8 seconds spent in each of the ten passes through the outer loop. Figure Two shows the results of executing time-waster on mosix2, our Pentium 166, without process migration. The mosrun program allows node-allocation preferences to be established for executing a command. In this case, the -h flag (for "home") is used to force time-waster to run on the home node (i.e., not migrated). It takes time-waster approximately 365 seconds to complete.

Figure One: Output from time-waster on mosix1

  [forrest@mosix1 forrest]$ cc -o time-waster time-waster.c
  [forrest@mosix1 forrest]$ ./time-waster
  i=0, val=1e+07, 1 s elapsed, 1 s since last print
  i=10, val=909091, 8 s elapsed, 7 s since last print
  i=20, val=476190, 16 s elapsed, 8 s since last print
  i=30, val=322581, 24 s elapsed, 8 s since last print
  i=40, val=243902, 32 s elapsed, 8 s since last print
  i=50, val=196078, 40 s elapsed, 8 s since last print
  i=60, val=163934, 48 s elapsed, 8 s since last print
  i=70, val=140845, 56 s elapsed, 8 s since last print
  i=80, val=123457, 64 s elapsed, 8 s since last print
  i=90, val=109890, 73 s elapsed, 9 s since last print
  i=100, val=99009.9, 81 s elapsed, 8 s since last print

Figure Two: Output from time-waster on mosix2

  [forrest@mosix2 forrest]$ mosrun -h ./time-waster
  i=0, val=1e+07, 4 s elapsed, 4 s since last print
  i=10, val=909091, 40 s elapsed, 36 s since last print
  i=20, val=476190, 76 s elapsed, 36 s since last print
  i=30, val=322581, 112 s elapsed, 36 s since last print
  i=40, val=243902, 148 s elapsed, 36 s since last print
  i=50, val=196078, 184 s elapsed, 36 s since last print
  i=60, val=163934, 220 s elapsed, 36 s since last print
  i=70, val=140845, 257 s elapsed, 37 s since last print
  i=80, val=123457, 293 s elapsed, 36 s since last print
  i=90, val=109890, 329 s elapsed, 36 s since last print
  i=100, val=99009.9, 365 s elapsed, 36 s since last print

When executed on mosix2 without using the mosrun command, the
time-waster process is quickly migrated over to mosix1, as can be seen in Figure Three. Because the delta from i=0 to i=10 is the same as it was when we ran on mosix1 (seven seconds), we can conclude that the process was migrated before i even became 1.

Figure Three: Output from time-waster with process migration

  [forrest@mosix2 forrest]$ ./time-waster
  i=0, val=1e+07, 3 s elapsed, 3 s since last print
  i=10, val=909091, 10 s elapsed, 7 s since last print
  i=20, val=476190, 18 s elapsed, 8 s since last print
  i=30, val=322581, 27 s elapsed, 9 s since last print
  i=40, val=243902, 34 s elapsed, 7 s since last print
  i=50, val=196078, 43 s elapsed, 9 s since last print
  i=60, val=163934, 51 s elapsed, 8 s since last print
  i=70, val=140845, 59 s elapsed, 8 s since last print
  i=80, val=123457, 67 s elapsed, 8 s since last print
  i=90, val=109890, 75 s elapsed, 8 s since last print
  i=100, val=99009.9, 84 s elapsed, 9 s since last print

The mosrun program can also be used to allocate processes to certain sets of nodes or to provide tuning hints about the code to be run. Another useful utility, migrate, can be used to request migration of a particular process, sending it back home or letting MOSIX decide where it should run based on current loads. Additional utilities, including mosctl, setpe, and tune, control process migration and calibrate kernel parameters for processor and network configurations.

Monitoring Tools

To monitor the state of a MOSIX cluster, use the mon program. As shown in Figure Four, mon provides a graphical view of system load. Note that node 4 (which failed earlier) is unavailable and not shown in the graph. Mon can also display graphs of processor speeds, total system memory, memory utilization, and processor availability.

Another monitoring tool, available separately for MOSIX, is Mosixview. It provides a graphical front end to mosctl and offers a better view of node utilization (see Figure Five).
It simultaneously shows cluster node availability, processor speed, system load, memory utilization, total memory, and number of CPUs. In addition, there are tools for collecting run-time statistics of resource utilization that can be plotted and analyzed. The displays in Figure Four and Figure Five were obtained at approximately the same time, after time-waster had been started on nodes 2, 3, and 5. As you can see in both figures, the processes were migrated from nodes 3 and 5 (the two slowest nodes) onto nodes 1 and 2 (the two fastest nodes).

MFS: The MOSIX File System

The MOSIX File System lets you easily access files on all cluster nodes. A look at /mfs on mosix1 (Figure Six) shows directories for nodes 1, 2, 3, and 5. Since node 4 is unavailable, no directory is displayed for it. In addition to the directories that take you to the file systems on each node, there are also special directory entries that point to the appropriate node number, which programs can use at runtime. (Note that the "here" entry in /mfs points to node 1, where we are logged on.)

Figure Six: A view into mosix1's MFS

  [forrest@mosix1 forrest]$ cd /mfs
  [forrest@mosix1 mfs]$ ls -l
  ls: 4: Object is remote
  total 16
  drwxr-xr-x  20 root root 4096 Mar  8 22:13 1
  drwxr-xr-x  20 root root 4096 Mar  8  1996 2
  drwxr-xr-x  21 root root 4096 Mar  8 23:25 3
  drwxr-xr-x  20 root root 4096 Mar  8 22:59 5
  lr-xr-xr-x   1 root root    1 Dec 31  1969 here -> 1
  lr-xr-xr-x   1 root root    1 Dec 31  1969 home -> 1
  lr-xr-xr-x   1 root root    1 Dec 31  1969 lastexec -> 1
  lr-xr-xr-x   1 root root    1 Dec 31  1969 magic -> 1
  lr-xr-xr-x   1 root root    1 Dec 31  1969 selected -> 1
  [forrest@mosix1 mfs]$ ls 3
  bin   dev  home    lib         mfs   mnt  proc  sbin  usr  work
  boot  etc  initrd  lost+found  misc  opt  root  tmp   var

The Future of MOSIX

MOSIX is still being expanded by Professor Barak and his colleagues. Separately, the MOSIX code was recently forked to create a GPL-licensed clustering platform called openMosix. Check out both solutions to see which better meets your needs.
http://www.linux-mag.com/id/1081
fixdiv - Man Page

Fixed point division. Allegro game programming library.

Synopsis

  #include <allegro.h>

  fixed fixdiv(fixed x, fixed y);

Description

A fixed point value can be divided by an integer with the normal `/' operator. To divide two fixed point values, though, you must use this function. If a division by zero occurs, `errno' will be set and the maximum possible value will be returned, but `errno' is not cleared if the operation is successful. This means that if you are going to test for division by zero, you should set `errno = 0' before calling fixdiv(). Example:

  fixed result;

  /* This will put 0.06060 into `result'. */
  result = fixdiv(itofix(2), itofix(33));

  /* This will put 0 into `result'. */
  result = fixdiv(0, itofix(-30));

  /* Sets `errno' and puts -32768 into `result'. */
  result = fixdiv(itofix(-100), itofix(0));
  ASSERT(!errno); /* This will fail. */

Return Value

Returns the result of dividing `x' by `y'. If `y' is zero, returns the maximum possible fixed point value and sets `errno' to ERANGE.

See Also

fixadd(3), fixsub(3), fixmul(3), exfixed(3)

Referenced By

exfixed(3), fixadd(3), fixmul(3), fixsub(3).
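The semantics described above can be sketched in plain Python that mimics a 16.16 fixed-point representation. This is an illustration only, not Allegro's implementation; in particular, the Python version signals division by zero through its clamped return value alone, where the C function also sets `errno'.

```python
FIX_SHIFT = 16
FIX_ONE = 1 << FIX_SHIFT   # 1.0 in 16.16 fixed point
FIX_MAX = 0x7FFFFFFF       # largest positive 32-bit fixed value

def itofix(i):
    """Convert an integer to 16.16 fixed point."""
    return i * FIX_ONE

def fixtof(x):
    """Convert 16.16 fixed point back to a float."""
    return x / FIX_ONE

def fixdiv(x, y):
    """Divide two fixed point values, clamping on division by zero.

    Note: Python's // floors while C truncates toward zero; the two
    agree for the non-negative quotients exercised below.
    """
    if y == 0:
        # Mimic the man page: return the extreme value, negative
        # if the dividend was negative (-32768.0 as fixed point).
        return -FIX_MAX - 1 if x < 0 else FIX_MAX
    return (x * FIX_ONE) // y

print(round(fixtof(fixdiv(itofix(2), itofix(33))), 4))  # -> 0.0606
print(fixtof(fixdiv(0, itofix(-30))))                   # -> 0.0
print(fixtof(fixdiv(itofix(-100), itofix(0))))          # -> -32768.0
```

The three prints reproduce the three cases from the man page example.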
https://www.mankier.com/3/fixdiv
The pygame.freetype module is a replacement for pygame.font. It has all of the functionality of the original, plus many new features, yet it has no dependency on the SDL_ttf library; it is implemented directly on the FreeType 2 library. The pygame.freetype module is not itself backward compatible with pygame.font. Instead, use the pygame.ftfont module as a drop-in replacement for pygame.font.

All font file formats supported by FreeType can be rendered by pygame.freetype, namely TTF, Type1, CFF, OpenType, SFNT, PCF, FNT, BDF, PFR and Type42 fonts. All glyphs having UTF-32 code points are accessible (see ucs4).

Most work on fonts is done using Font instances. The module itself only has routines for initialization and creation of Font objects. You can load fonts from the system using the SysFont() function.

Extra support of bitmap fonts is available. Available bitmap sizes can be listed (see Font.get_sizes()). For bitmap-only fonts, Font can set the size for you (see the Font class size argument). For now, undefined character codes are replaced with the .notdef (not defined) character. How undefined codes are handled may become configurable in a future release.

Pygame comes with a builtin default font. This can always be accessed by passing None as the font name to the Font constructor.

Extra rendering features available to pygame.freetype.Font are direct-to-surface rendering (see Font.render_to()), character kerning (see Font.kerning), vertical layout (see Font.vertical), rotation of rendered text (see rotation), and the strong style (see Font.strong).
Some properties are configurable, such as the strong style strength (see Font.strength) and underline positioning (see underline_adjustment). Text can be positioned by the upper left corner of the text box or by the text baseline (see Font.origin). Finally, a font's vertical and horizontal size can be adjusted separately (see Font.size). The pygame.examples.freetype example (pygame.examples.freetype_misc.main()) shows these features in use.

The Pygame package does not import freetype automatically when loaded; the module must be imported explicitly to be used:

  import pygame
  import pygame.freetype

The freetype module is new in Pygame 1.9.2.

pygame.freetype.get_error()
Return a description of the last error which occurred in the FreeType 2 library, or None if no errors have occurred.

pygame.freetype.get_version()
Returns the version of the FreeType library in use by this module. Note that the freetype module depends on the FreeType 2 library; it will not compile with the original FreeType 1.0. Hence, the first element of the version tuple will always be "2".

pygame.freetype.init()
This function initializes the underlying FreeType library and must be called before trying to use any of the functionality of the freetype module. However, pygame.init() will automatically call this function if the freetype module is already imported. It is safe to call this function more than once. Optionally, you may specify a default cache_size for the glyph cache: the maximum number of glyphs that will be cached at any given time by the module. Exceedingly small values will be automatically tuned for performance. Also, a default pixel resolution, in dots per inch, can be given to adjust font scaling.

pygame.freetype.quit()
This function closes the freetype module. After calling it, you should not invoke any class, method or function related to the freetype module, as they are likely to fail or give unpredictable results. It is safe to call this function even if the module hasn't been initialized yet.
pygame.freetype.was_init()
Returns whether the FreeType library is initialized. See pygame.freetype.init().

pygame.freetype.get_default_resolution()
Returns the default pixel size, in dots per inch, for the module. The default is 72 dpi.

pygame.freetype.set_default_resolution()
Set the default pixel size, in dots per inch, for the module. If the optional argument is omitted or zero, the resolution is reset to 72 dpi.

pygame.freetype.SysFont()
Return a new Font object that is loaded from the system fonts. The font will match the requested bold and italic flags. If a suitable system font is not found, the default Pygame font is returned instead. The font name can be a comma-separated list of font names to search for.

pygame.freetype.get_default_font()
Return the filename of the default Pygame font. This is not the full path to the file. The file is usually in the same directory as the font module, but can also be bundled in a separate archive.

pygame.freetype.Font
Argument file can be either a string representing the font's filename, a file-like object containing the font, or None; if None, the default Pygame font is used.

Optionally, a size argument may be specified to set the default size in points, which determines the size of the rendered characters. The size can also be passed explicitly to each method call. Because of the way the caching system works, specifying a default size on the constructor doesn't imply a performance gain over manually passing the size on each function call. If the font is a bitmap font and no size is given, the default size is set to the first available size for the font.

An optional resolution argument sets the pixel resolution used in scaling; if it is omitted or zero, the module default, set by freetype.init(), is used. The Font object's resolution can only be changed by re-initializing the Font instance.

The optional ucs4 argument, an integer, sets the default text translation mode: 0 (False) to recognize UTF-16 surrogate pairs, any other value (True) to treat Unicode text as UCS-4, with no surrogate pairs. See Font.ucs4.

Font.name
Read only. Returns the real (long) name of the font, as recorded in the font file.

Font.path
Read only. Returns the path of the loaded font file.

Font.size
Get or set the default size for text metrics and rendering.
It can be a single point size, given as a Python int or float, or a font ppem (width, height) tuple. Size values are non-negative. A zero size or width represents an undefined size; in this case the size must be given as a method argument, or an exception is raised. A zero width with a non-zero height is a ValueError.

For a scalable font, a single number value is equivalent to a tuple with width equal to height. A font can be stretched vertically with height set greater than width, or horizontally with width set greater than height. For embedded bitmaps, as listed by get_sizes(), use the nominal width and height to select an available size.

Font size differs for a non-scalable (bitmap) font. During a method call it must match one of the available sizes returned by the get_sizes() method; if not, an exception is raised. If the size is a single number, it is first matched against the point size value. If there is no match, the available size with the same nominal width and height is chosen.

Font.get_rect()
Gets the final dimensions and origin, in pixels, of text using the optional size in points, style, and rotation. For other relevant render properties, and for any optional argument not given, the default values set for the Font instance are used.

Returns a Rect instance containing the width and height of the text's bounding box and the position of the text's origin. The origin is useful in aligning separately rendered pieces of text: it gives the baseline position and bearing at the start of the text. See the render_to() method for an example.

If text is a char (byte) string, its encoding is assumed to be LATIN1. Optionally, text can be None, which returns the bounding rectangle for the text passed to a previous get_rect(), render(), render_to(), render_raw(), or render_raw_to() call. See render_to() for more details.

Font.get_metrics()
Returns the glyph metrics for each character in text. The glyph metrics are returned as a list of tuples; each tuple gives the metrics of a single character glyph.
The glyph metrics are:

  (min_x, max_x, min_y, max_y, horizontal_advance_x, horizontal_advance_y)

The bounding box min_x, max_x, min_y, and max_y values are returned as grid-fitted pixel coordinates of type int. The advance values are floats. The calculations are done using the font's default size in points; optionally, you may specify another point size with the size argument. The metrics are adjusted for the current rotation, strong, and oblique settings. If text is a char (byte) string, then its encoding is assumed to be LATIN1.

Font.height
Read only. Gets the height of the font. This is the average value of all glyphs in the font.

Font.ascender
Read only. Return the number of units from the font's baseline to the top of the bounding box.

Font.descender
Read only. Return the height in font units for the font descent. The descent is the number of units from the font's baseline to the bottom of the bounding box.

Return the number of units from the font's baseline to the top of the bounding box. It is not adjusted for strong or rotation.

Return the number of pixels from the font's baseline to the top of the bounding box. It is not adjusted for strong or rotation.

Returns the height of the font. This is the average value of all glyphs in the font. It is not adjusted for strong or rotation.

Return the glyph bounding box height of the font in pixels. This is the average value of all glyphs in the font. It is not adjusted for strong or rotation.

Font.get_sizes()
Returns a list of tuple records, one for each point size supported. Each tuple contains the point size, the height in pixels, the width in pixels, the horizontal ppem (nominal width) in fractional pixels, and the vertical ppem (nominal height) in fractional pixels.

Font.render()
Returns a new Surface, with the text rendered to it in the color given by fgcolor. If no foreground color is given, the default foreground color fgcolor is used. If bgcolor is given, the surface will be filled with this color. When no background color is given, the surface background is transparent, zero alpha.
Normally the returned surface has a 32-bit pixel format. However, if bgcolor is None and anti-aliasing is disabled, a monochrome 8-bit colorkey surface, with the colorkey set to the background color, is returned.

The return value is a tuple: the new surface and the bounding rectangle giving the size and origin of the rendered text. If an empty string is passed for text, the returned Rect has zero width and the height of the font.

Optional fgcolor, style, rotation, and size arguments override the default values set for the Font instance. If text is a char (byte) string, then its encoding is assumed to be LATIN1. Optionally, text can be None, which will render the text passed to a previous get_rect(), render(), render_to(), render_raw(), or render_raw_to() call. See render_to() for details.

Font.render_to()
Renders the string text to the pygame.Surface surf, at position dest, an (x, y) surface coordinate pair. If either x or y is not an integer, it is converted to one if possible. Any sequence where the first two items are x and y positional elements is accepted, including a Rect instance. As with render(), optional fgcolor, style, rotation, and size arguments are available.

If a background color bgcolor is given, the text bounding box is first filled with that color; the text is blitted next. Both the background fill and the text rendering involve full alpha blits. That is, the alpha values of the foreground, background, and destination target surface all affect the blit.

The return value is a rectangle giving the size and position of the rendered text within the surface. If an empty string is passed for text, the returned Rect has zero width and the height of the font. The rect will test False.

Optionally, text can be set to None, which will re-render the text passed to a previous render_to(), get_rect(), render(), render_raw(), or render_raw_to() call. Primarily, this feature is an aid to using render_to() in combination with get_rect().
An example:

  def word_wrap(surf, text, font, color=(0, 0, 0)):
      font.origin = True
      words = text.split(' ')
      width, height = surf.get_size()
      line_spacing = font.get_sized_height() + 2
      x, y = 0, line_spacing
      space = font.get_rect(' ')
      for word in words:
          bounds = font.get_rect(word)
          if x + bounds.width + bounds.x >= width:
              x, y = 0, y + line_spacing
          if x + bounds.width + bounds.x >= width:
              raise ValueError("word too wide for the surface")
          if y + bounds.height - bounds.y >= height:
              raise ValueError("text too long for the surface")
          # text=None re-renders the word just measured by get_rect()
          font.render_to(surf, (x, y), None, color)
          x += bounds.width + space.width
      return x, y

When render_to() is called with the same font properties ― size, style, strength, wide, antialiased, vertical, rotation, kerning, and use_bitmap_strikes ― as get_rect(), render_to() will use the layout calculated by get_rect(). Otherwise, render_to() will recalculate the layout if called with a text string, or if one of the above properties has changed after the get_rect() call. If text is a char (byte) string, then its encoding is assumed to be LATIN1.

Font.render_raw()
Like render(), but with the pixels returned as a byte string of 8-bit gray-scale values. The foreground color is 255, the background 0, useful as an alpha mask for a foreground pattern.

Font.render_raw_to()
Render to an array object exposing an array struct interface. The array must be two-dimensional with integer items. The default dest value, None, is equivalent to position (0, 0). See render_to(). As with the other render methods, text can be None to render a text string passed previously to another method.

Font.style
Gets or sets the default style of the Font. This default style will be used for all text rendering and size calculations unless overridden specifically in the render() or get_size() calls. The style value may be a bit-wise OR of one or more of the following constants:

  STYLE_NORMAL
  STYLE_UNDERLINE
  STYLE_OBLIQUE
  STYLE_STRONG
  STYLE_WIDE
  STYLE_DEFAULT

These constants may be found on the FreeType constants module.
Optionally, the default style can be modified or obtained by accessing the individual style attributes (underline, oblique, strong). The STYLE_OBLIQUE and STYLE_STRONG styles are for scalable fonts only. An attempt to set either for a bitmap font raises an AttributeError. An attempt to set either for an inactive font, as returned by Font.__new__(), raises a RuntimeError. Assigning STYLE_DEFAULT to the style property leaves the property unchanged, as this property defines the default; the style property will never return STYLE_DEFAULT.

Font.underline
Gets or sets whether the font will be underlined when drawing text. This default style value will be used for all text rendering and size calculations unless overridden specifically in the render() or get_size() calls, via the style parameter.

Font.strong
Gets or sets whether the font will be bold when drawing text. This default style value will be used for all text rendering and size calculations unless overridden specifically in the render() or get_size() calls, via the style parameter.

Font.oblique
Gets or sets whether the font will be rendered as oblique. This default style value will be used for all text rendering and size calculations unless overridden specifically in the render() or get_size() calls, via the style parameter. The oblique style is only supported for scalable (outline) fonts. An attempt to set this style on a bitmap font will raise an AttributeError. If the font object is inactive, as returned by Font.__new__(), setting this property raises a RuntimeError.

Font.wide
Gets or sets whether the font will be stretched horizontally when drawing text. It produces a result similar to pygame.font.Font's bold style. This style is not available for rotated text.

Font.strength
The amount by which a font glyph's size is enlarged for the strong or wide transformations, as a fraction of the untransformed size. For the wide style only the horizontal dimension is increased. For strong text both the horizontal and vertical dimensions are enlarged.
A wide style of strength 0.08333 (1/12) is equivalent to the pygame.font.Font bold style. The default is 0.02778 (1/36). The strength style is only supported for scalable (outline) fonts. An attempt to set this property on a bitmap font will raise an AttributeError. If the font object is inactive, as returned by Font.__new__(), assignment to this property raises a RuntimeError.

Font.underline_adjustment
Gets or sets a factor which, when positive, is multiplied with the font's underline offset to adjust the underline position. A negative value turns an underline into a strike-through or overline; it is then multiplied with the ascender. Accepted values range between -2.0 and 2.0 inclusive. A value of 0.5 closely matches Tango underlining; a value of 1.0 mimics pygame.font.Font underlining.

Font.fixed_width
Read only. Returns True if the font contains fixed-width characters (for example Courier, Bitstream Vera Sans Mono, Andale Mono).

Font.fixed_sizes
Read only. Returns the number of point sizes for which the font contains bitmap character images. If zero, then the font is not a bitmap font. A scalable font may contain pre-rendered point sizes as strikes.

Font.scalable
Read only. Returns True if the font contains outline glyphs. If so, the point size is not limited to available bitmap sizes.

Font.use_bitmap_strikes
Some scalable fonts include embedded bitmaps for particular point sizes. This property controls whether or not those bitmap strikes are used. Set it False to disable the loading of any bitmap strike. Set it True, the default, to permit bitmap strikes for a non-rotated render with no style other than wide or underline. This property is ignored for bitmap fonts. See also fixed_sizes and get_sizes().

Font.antialiased
Gets or sets the font's anti-aliasing mode. This defaults to True on all fonts, which are rendered with full 8-bit blending. Set to False to do monochrome rendering. This should provide a small speed gain and reduce cache memory size.

Font.kerning
Gets or sets the font's kerning mode.
This defaults to False on all fonts, which will be rendered without kerning. Set to True to add kerning between character pairs, if supported by the font, when positioning glyphs.

Font.vertical
Gets or sets whether the characters are laid out vertically rather than horizontally. May be useful when rendering Kanji or some other vertical script. Set to True to switch to a vertical text layout; the default is False, placing text horizontally. Note that the Font class does not automatically determine script orientation; vertical layout must be selected explicitly. Also note that several font formats (especially bitmap-based ones) don't contain the necessary metrics to draw glyphs vertically, so drawing in those cases will give unspecified results.

Font.rotation
Gets or sets the baseline angle of the rendered text. The angle is represented as integer degrees. The default angle is 0, with horizontal text rendered along the X-axis and vertical text along the Y-axis. A positive value rotates these axes counterclockwise that many degrees; a negative angle corresponds to a clockwise rotation. The rotation value is normalized to a value within the range 0 to 359 inclusive (e.g. 390 -> 390 - 360 -> 30; -45 -> 360 + -45 -> 315; 720 -> 720 - (2 * 360) -> 0). Only scalable (outline) fonts can be rotated. An attempt to change the rotation of a bitmap font raises an AttributeError. An attempt to change the rotation of an inactive font instance, as returned by Font.__new__(), raises a RuntimeError.

Font.fgcolor
Gets or sets the default glyph rendering color. It is initially opaque black, (0, 0, 0, 255). Applies to render() and render_to().

Font.origin
If set True, render_to() and render_raw_to() will take the dest position to be that of the text origin, as opposed to the top-left corner of the bounding box. See get_rect() for details.

Font.pad
If set True, then the text boundary rectangle will be inflated to match that of font.Font. Otherwise, the boundary rectangle is just large enough for the text.

Font.ucs4
Gets or sets the decoding of Unicode text.
By default, the freetype module performs UTF-16 surrogate pair decoding on Unicode text. This allows 32-bit escape sequences ('\Uxxxxxxxx') between 0x10000 and 0x10FFFF to represent their corresponding UTF-32 code points on Python interpreters built with a UCS-2 unicode type (on Windows, for instance). It also means character values within the UTF-16 surrogate area (0xD800 to 0xDFFF) are considered part of a surrogate pair. A malformed surrogate pair will raise a UnicodeEncodeError.

Setting ucs4 True turns surrogate pair decoding off, allowing access to the full UCS-4 character range on a Python interpreter built with four-byte unicode character support.

Font.resolution
Read only. Gets the pixel size used in scaling font glyphs for this Font instance.
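The surrogate-pair decoding described for ucs4 can be demonstrated with plain Python, with no pygame needed: a code point above U+FFFF splits into a high and low surrogate, and the pair recombines into the original UTF-32 value.

```python
def to_surrogates(cp):
    """Split a code point above U+FFFF into a UTF-16 surrogate pair."""
    assert 0x10000 <= cp <= 0x10FFFF
    cp -= 0x10000
    return 0xD800 + (cp >> 10), 0xDC00 + (cp & 0x3FF)

def from_surrogates(high, low):
    """Recombine a surrogate pair into a UTF-32 code point."""
    assert 0xD800 <= high <= 0xDBFF and 0xDC00 <= low <= 0xDFFF
    return 0x10000 + ((high - 0xD800) << 10) + (low - 0xDC00)

# U+1D400 (MATHEMATICAL BOLD CAPITAL A) <-> the pair D835 DC00
high, low = to_surrogates(0x1D400)
print(hex(high), hex(low))              # -> 0xd835 0xdc00
print(hex(from_surrogates(high, low))) # -> 0x1d400
```

With ucs4 False the module interprets such pairs in the input text; with ucs4 True each value is taken as a code point in its own right.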
http://pygame.org/docs/ref/freetype.html
A fresh WebStorm 12 EAP build (144.2925) is now available! You can download it here and install it side by side with your stable version of WebStorm. Or, if you're subscribed to the EAP update channel in the IDE and have already installed a previous EAP build, you should get a notification about a patch update. If you missed the announcement of WebStorm 12 EAP, you can catch up on it in this blog post.

Inline rename for TypeScript
You can now use the Rename refactoring in TypeScript code inline. That means you can just hit Shift-F6, change the name right in the editor, and it will be instantly refactored across the whole project, with no Rename dialog anymore.

Smarter imports
Auto imports in TypeScript became smarter: symbols from one module are now automatically added to a single import statement. Before, typing, for example, @Injectable would generate a separate import statement each time; now new symbols are merged into the existing import from the same module. This behaviour can be configured in Preferences | Editor | General | Auto imports. And with the new Optimize Imports action (Ctrl-Alt-O), WebStorm can merge imports that are already written in a similar way. This currently works only for TypeScript files, but we will extend it to ES6 files as well.

Debugging async code
WebStorm now allows you to debug asynchronous client-side code in Chrome: check the Async checkbox on the debugger pane, and once a breakpoint inside an asynchronous function is hit or you step into that code, you can see the full call stack, including the caller, all the way back to the beginning of the asynchronous actions.

Improvements in Angular 2 support
This WebStorm 12 EAP build brings smarter code insight for one-way binding in Angular 2 applications. You can now jump from the binding in the component usage to the property in the component definition. Coding assistance for component names in HTML and event attributes is now fixed when working with angular2-beta.

WebStorm running on Java 8
The whole IntelliJ IDEA platform migrates to Java 8.
That means that you can no longer launch WebStorm under a JDK older than Java 8. The change affects all the EAP builds (144.*) and further major releases in spring 2016.

The list of issues addressed in this EAP build is available in the Release Notes. Please report your feedback to our issue tracker. To get notifications of new EAP builds as they become available, subscribe to the EAP channel in Preferences | Appearance & Behavior | System Settings | Updates.

Read more about the features and improvements added in the WebStorm 12 EAP builds:
- WebStorm 12 EAP, 144.3143: unused imports warning, code assistance in tsconfig.json, .babelrc and .eslintrc, remote run and debug for Node.js apps, Vagrant integration, debugging Electron apps, and further improvements in Angular 2 support.

– JetBrains WebStorm Team

"Reformat Code" gives "TSLint: missing whitespace" in TypeScript. "Format Code" in Visual Studio Code produces constructor(name: string) { while WebStorm 11 & 12 produce constructor(name:string) {

You can enable the Spaces – After type reference colon ':' option in the TypeScript code style in Preferences.

The new WS icon is pretty similar to the CMD icon in the taskbar on Windows, which is confusing.

import * as SomeVar from 'xxx/yyy'; WebStorm doesn't recognize the as keyword.

Have you set JavaScript version to ECMAScript 6 in Preferences | Languages and Frameworks | JavaScript?

"The whole IntelliJ IDEA platform migrates to Java 8. That means that now you can launch WebStorm only under a JDK older than Java 8" I didn't get that. Maybe it was intended as "now you can launch WebStorm under JDK 8 *and newer*" or "now you *cannot* launch WebStorm under JDK older than Java 8"

Thank you for noticing, fixed now.

Any plans to provide the coding assistance for component names in HTML for Dart Angular 2 projects?

No plans for that at the moment and even no feature request for that yet. Would be great if you submit it on our issue tracker. Thank you!
Are there any plans to support resolving JSPM imports in this version? Great product!

We don't have any precise plans to support JSPM at the moment, sorry. Please follow the issue to stay tuned.

May one ask why not?

We are planning to release the next major update very soon and we haven't started working on any JSPM support yet. There are issues on our tracker that have a higher priority at the moment.
https://blog.jetbrains.com/webstorm/2016/01/webstorm-12-144-2925/
big used
Hi, I read these questions ya its fine but i want more details about every concept
please provide more interview questions
Please provide the interview questions relevantly and shortly
Please provide some more FAQ's. I think these are not enough. But whatever the information you provided is ok
The questions are very useful for the interview. I got most of the questions from this tutorial. I am sincerely thankful for this.
BigDecimal lo_PartyCall = null; is the code above fine or not
what are volatile variables? explain with examples
I have started a Beginner Java Tutorial website namely which is made for beginners to go about Learning Java Programming. I am finding it tough to single-handle the site to make it an effective one. So hence I am
hi, questions are very knowledgable but should be more on each topic
Very Useful
excellent
send more no of interview questions with full explanation. send on many languages also
what is hashtable?
Gud but not enough
need more interview questions
1. why we will give main class name and file name is same? 2. why java didn't support global variables?
pls provide more faqs
what is method level exception & when it is throwable.
Questions are good but not enough
I like very much.
I will send to Email iD please
very good. provide some more questions. answer for "How you can force the garbage collection?" is wrong, right?? we can force using System.gc(); and Runtime.getRuntime().gc();
the questions are very useful. still provide some more questions and answers.
this is good but not suitable for fresher
Questions are good based on basic concepts.
hi Questions are good but not enough
It is very clear to understand
These questions are good....But I want more conceptual questions
give me more information about this
Please provide interview questions for spring and hibernate also
helpful. thanx a lot
hai, someone asked what is hashtable. the table contains keys, like this: 0 ram, 1 suresh, 2 mahesh
hai raju, this is ramesh. u asked why java does not provide global variables. do u know why java and c++ came into the picture: the main thing is data security. if u use global variables that means they are accessed throughout, but it spoils the encap
which site is good for examples of core java
hai venkat, a thread means a program with a single flow of control. for ex if u take main(), a thread is created and starts executing the statements u have written in it. multithread means execution of multiple threads at the sam
fine but need some more questions on Core Java.
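One reply in the thread above sketches a hash table as numbered keys mapping to names (0 ram, 1 suresh, 2 mahesh). As a language-neutral illustration of that idea (using Python's dict here as a stand-in, not Java's Hashtable class), the concept is:

```python
# A hash table maps keys to values through a hash function;
# Python's dict is one hash table implementation.
table = {0: "ram", 1: "suresh", 2: "mahesh"}

print(table[1])      # look up the value stored under key 1 -> suresh
table[3] = "venkat"  # insert a new key/value pair
print(3 in table)    # membership test by key -> True
```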
In JDBC Connection is an INTERFACE even though we have to create an Object for it. actually we cannot create objects for INTERFACES, so what are the possibilities to create an object for an interface?
could u plz explain more about threads, multithreading, synchronisation? plz explain with a real time example?
Thanx 4 d giving information about java its very important and very useful information......
Good question but not enough for fresher
hai padmaja, u r asking abt virtual function. in the case of overriding, when u call the base class function the derived class function is called and executed. in that if u want to execute the function what u need in that situation we declar
good but need still more info.
hi all, what are virtual functions and methods. pls explain with an example......
i want interview questions on java
Hi! Rama krishna Your queries has been solved. Question: what is the class variables ? Answer: When we create a number of objects of the same class, then each object will share a common copy of variables. That means that there is only one copy
what is the diff b/w static variable and instance variable?
very useful information in interview point of view but need some more information
I got a great collection of Java Base Interview Question and Answer on this site. My Suggestion is please provide more questions. Put them according to the category. It will make this more impressive. All the best Vivek
Hi, i want 2 clear with this Q. if public, private, protected and default are the access controls, what are the access modifiers in java.
Plz send me some important interview questions & answers of c language, core java & vb.net. I'll be really grateful.
hi, it's very beneficial for both language learners and job seekers. I hope this will help those who aspire to specialize in java.
Hi its very useful in the interview point of view. I think some more clear explanation needed for every topic
how to develop a calculator program using java
pl give some questions on thread in java program.
Good Questions for freshers.
its good q ans. my q is why we write static in main. public static void main(string agrs[])
can any one send me game code for PUYO PUYO
Its a very useful site. I would like it to contain more valuable sets of questions and answers
i read all q, but i think it is not enough. Could anybody give me detail questions related core java.
it is good site to know about Java Concept but not very good. So please arrange too much Question, Thanks
thanks for giving nice information
what is the class variables ? what is the purpose of getclass ? what is the difference between the instanceof and getclass, these two r same or not ?
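The class-variable answer given in the thread above (one shared copy across all objects, versus a separate instance variable per object) can be sketched in code. This is an illustrative Python example rather than Java, since the shared-versus-per-object distinction is the same idea in both languages:

```python
class Counter:
    total = 0               # class variable: one copy shared by every instance

    def __init__(self):
        self.count = 0      # instance variable: a separate copy per object
        Counter.total += 1  # update the single shared copy

a = Counter()
b = Counter()
print(Counter.total)     # 2: both constructions updated the shared copy
print(a.count, b.count)  # 0 0: each object keeps its own count
```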
i want to know about the java.io.* package, about input/output streams. what is different between print(), println() & write()?
Very good site...but more questions should be added..and in brief
I have a problem here in this question, What are Access Specifiers available in Java? The website claims that there are 4. Mind you, there aren't 4 access modifiers, there are 4 access controls i.e public, private, protected and default, N
Could you explain me the difference between Swing and AWT in detail?
can static method override?
Very nice. I used to doubt what's the point to cram these basic concepts, but now I find it's really important to know this coz it may help you to get confused.
hello pls send me all J2EE (core, servlets, jsp, ejb etc..) Materials/FAQs to my mail id
please give questions related to programming concepts with a little example and provide further questions on their atomic words like static, final, interface, class etc.
please send more interview questions
can you give one example for transient variable
sir pls give the detail of core java & j2ee interview question. thanx sasanka
i read those questions. it is very useful for me. i have doubt following program, can u plz clear for me. my question is: private class A { void kani() System.out.println("kani"); } public class B extends A { void valavan() System.out.println("valavan"); } class C { public static void main(String args[]) { A a=new A(); B b=new B(); a.kani(); b.valavan()
sir i want java interview questions can u pl send me sir to my mail
Hi..... This is nagesh... The difference between final, finally & finalize is final: It is of variable scope, used for declaring the variables, which when declared the variable is constant through the application. finally: It is a block written afte
Very Informative......
A good amount of information about RMI is needed.
If u might have attended interviews, in that some training questions are asked. plz post those kind of questions. It will be good for others. thanks
This site is Gooooooooooooooood
can u give me the complete material for struts jsp servlets and core java, sir? thanking you regards BH.Phani Shekhar
I need more questions and answers on java. On each and every topic.
these matters r very beneficial for any java interview.
can u explain is this program will compile or not, but why? public class A { //body of class } public static void main(String args[]) { A a=new a(); a.methodname(); }
I don't know if its allowed to reply here..... But yes, i just saw one of the doubts n felt like sharing something..... the one posted by "bindu". I think its showing an error because you haven't defined the pause() method in class B. In interface, yo
Hi I read those questions. These are useful to me what im going to face in interview regarding java. and i also need more detailed about those questions.
what is oop's concept? what is Encapsulation, Polymorphism and Inheritance? where we are used in programs? give me examples?
Its really helpful site and cover basic interview questions.
what is thread safe?
Good, to the point descriptions...
Respected sir, I need complete material of core java and j2ee interview questions. Will u pls send it to my email id. thanking u sir
hi! as i am the student of engg. (electronics & comm), so if u sent those questions on core java which are common or often asked in our interviews (software company).
Nijel's comments below are incorrect, you cannot force garbage collection, you can only recommend it. The JVM might execute garbage collection when you run the gc command most of the time but it is not guaranteed!
What is Instance variables?
Briefly Explain abt Instance variable ?
explain final, finally, finalize
sir I need quick review FAQs of core java and j2ee interview questions. Will u please send to my email thank you
Thanks. what is the work that had given in the companies? and everything with detailed.
hi all, i want example programs on virtual functions with clear explanation could u people provide this. tanx in advance
hi, my name is bhanu i would like to learn java from the web. so what can i do? please tell your opinion.
thanks for some great questions i would like to know wht is meant by copy constructors.
tricky questions answered in simple ways which is easy to understand and a great learning time
Very good basic questions are explained in a simple, understandable way. Too good.
hi the questions are of great use especially for the freshers but can be made more useful if more questions are added in a proper format means topic wise and with easy explanation.
sometimes very silly and easy questions are asked in the interview which we think very easy but find it hard to answer. so please put such questions which can be cross-questioned.
good. but need more questions and answers. it would be useful.
its very useful
sir I need complete material of core java and j2ee interview questions. Will u pls send to my email id. thanking u sir sameena
DESCRIPTION OF THE TEST PURPOSE: The goal of this exercise is to develop a JAVA version of Puyo-Puyo, a variation of the Tetris game. We are interested in seeing your code writing skills, style and logic. Don't hesitate to comment on your co
Thanks for such a nice information and i want about full details about corejava. Especially everyone saying is jobs will come on this category only, but what about advanced java and J2EE. and what is the work that had given in the companies? and everythi
try this kanivalavn: class A { void kani() { System.out.println("kani"); } } class B extends A { void valavan() { System.out.println("valavan"); } } class Exam { public static void main(String args[]) { A a=new A(); B b=new
i need to java FAQ with answers as well as possible.
pls send any material and java interview questions on java and data structures (through java)
Very good interactive site.
The Questions provided here are very useful. I myself came across most of them during my earlier interviews. I'd like you people to add more questions to this section on Inheritance, Polymorphism, Encapsulation, Data Abstraction, Legacy Classes, and I
sir I need complete material of core java and j2ee interview questions. Will u pls send to my email id. thanking u sir Ragu
answers to most of the questions have been very clear and good. but it would be nice if you could give us more questions and answers.
what is different b/w HashMap and TreeMap
complete faqs on Exception Handling
what is difference between access specifiers and modifiers
sir I need material of core java and j2ee interview questions. Will u please send to my email thank you
i want questions and answers regarding core and advance java for my interviews
I want to appear for Sun Certification pls provide me with some sample test papers for preparation, which help me to get through the exam
sir I need material of core java and j2ee interview questions. Will u please send to my email thank you
interface A { void show(); void play(); void pause(); } abstract class B implements A { public void show() { System.out.println("show a"); } public void play() { System.out.println("play method"); } } public class Interface { publ
Hi I read question realy it helps lots..
yeah this site is quite useful but you should try to enhance its vision meaning to say is that more questions and practical problems should be added to the content. An index should also be mentioned at the top of it.
Question: What is the difference between the instanceof and getclass, these two are same or not? Answer: instanceof is an operator, not a function, while getClass is a method of java.lang.Object class. Consider a condition where we use if(o.getClas
i want to related question of java
cool site. but the explanations need little more clarity. i need section wise questions to test on my skills on a regular basis. plz do that........... and last but not the least plz put in some more questions.
Dear gulshan (01.2.07 @ 15:51pm | #2420), u asked... why we write static in main. public static void main(string agrs[]) the solution to this question is we can have multiple functions named main in our program of java but the function which
Dear venki (Saturday, 01.27.07 @ 17:02pm | #4488) your program: public class A { //body of class } public static void main(String args[]) { A a=new a(); a.methodname(); } true it will not compile. try this A a=new A(); then it will
Dear Nijel, answer for "How you can force the garbage collection?" is wrong, right?? we can force using System.gc(); and Runtime.getRuntime().gc(); no i think it is System.gc.Collect(); we have to use the Collect method to call it.
why Iterable interface is in Java.lang package
what's the ways through which we make our own class immutable (eg:- String class)?
Good, but need more
what are the maximum numbers of statements created in our jdbc program.
Hi, It was really good visiting this site but i would like you to add more questions of core java and J2ee. Thanking you
hi.. this is kalees i have more doubts in abstract class and interface so pls clearly explain me.....
Pls send me the Corejava materials/FAQs to my mail Id
I need core java materials.......... pls send me soon I want to prepare for interview
Sir, Please send core java and J2EE question to my mail id asap. Thank you in advance.
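The instanceof-versus-getClass question comes up more than once in this thread. The distinction, an inheritance-aware check versus an exact-class check, can be illustrated in Python, used here only as a stand-in for the Java operators:

```python
class Animal:
    pass

class Dog(Animal):
    pass

d = Dog()

# isinstance() respects inheritance, like Java's instanceof operator
print(isinstance(d, Animal))  # True: a Dog is-an Animal

# comparing exact types, like o.getClass() == Animal.class in Java
print(type(d) is Animal)      # False: d's exact class is Dog, not Animal
print(type(d) is Dog)         # True
```

So the two checks agree on the exact class but differ for superclasses, which is exactly why they are "not the same".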
sir i need complete material of core java, advanced java n j2ee interview questions n answers. can u plz send me all those things then i ll remain oblige 2 u. thanking u sir smruti
interview question in java
please send me complete material of core java, jsp servlets, advanced java n j2ee interview questions n answers as soon as possible. THANK YOU
Hello! I have many probs in java........
I want complete core & j2ee interview questions also i want to appear for sun certification can u please provide me material, URL which help me. Thanku, Latha.
this site is excellent and easy to understand
please send a simple example for interfaces in java. give one sample program with explanation. Q: why we using the throws IOException in some programs only. where this IOException is required
Qtns r satisfiable. i hope that they will be updated regular includings.
could anyone tell can we edit the values in Enumeration?
I know that instance & object are different thing. But i can't explain them. So plz can u make me understand with example. can u send me answer on my e-mail id thanx, Mrugesh
Please Send more details of Core java and Advance Java with Questions and Answers for which I will be pleased. these Questions are Awesome. I have read all and it's nicely presented. ThankYou.
All the questions are very useful for me. Thank you.
Please send me core java and j2ee full questions to my mail id asap.. thank you in advance
core java interview question
ya it is very helpful for java developer realy fine
I want internal depth of interface can any one plz?
Interview questions: Please send me core java and j2ee full questions to my mail id asap..
this is chandra: this interview questions useful for job searching persons
this is chandra: Will you Please send core java and j2ee interview questions to my mail id. Pls Send the complete Interview questions on Java.
Hi, Will you Please send core java and j2ee interview questions to my mail id. Thank You
plz send me...: plz send me core java and swing interview questions to my mail id as soon as possible thirumurthymca@yahoo.co.in Thank You
great; I love java, I live java.
java: dear friends, can u plz send me the examples of implementing interfaces, abstract classes. and suggest me a nice book on j2ee. (other than complete ref)
interview question: i'm interested job in java and my inside the tallent for java programming
haii friends: right now i am learning java. feel good in this platform n see this site is fine n suggest u to take a path of this site when u have doubts in java.
Please send these question in my mail. my mail id <avanish.chester@gmail.com>
JAVA J2EE: CAN YOU ANY ONE SEND ME, A INSERT, UPDATE, DELETE PROGRAM IN JAVA USING MSSQL SERVER
Plz send me material for complete java. like starting from core java, servlets, jdbc, jsp, struts, hibernate, j2ee concept asap. Thank u in advance.
java: plz send me java core and advanced (struts, servlets) tutorial addresses (URLs)
j2ee interview question: i need some interview questions of core java and advance j2ee. plz send question ASAP. thanks alok gupta
Need Questions: Hi! Can anyone send me some of the important interview questions in core java... If possible send me also j2ee questions too....... REPLY ME. "There is nothing new under the sun" "Except in the computer industry"
required topic: hi sir, i'm chandrasekhar working as software developer. here u'r material is so good but we require some more information means faq's on STRINGS and STREAMS.
good: This is a very good article hope u mail this article to me
Questions in core java: Hi Friends, This is sekhar. Doing M.C.A. Send me more questions & Answers in core java to my email. My id is ahaandhra@rediffmail.com. Pls help me
Hi, Any one who knows java, I need some help... Please email me and i can ask you my question.. I will appreciate you.. Thanks a lot....
Java J2EE Interview Question & Answers: Hi, Can anyone send me the Frequently asked Java & J2EE Interview Question & answers links. Regards, Vijay
Good Question: I am very thankful to you people, for the really good questions........ Encapsulation concepts are good and understandable
java in lynux: re/sir myself is dinesh kumar, I want to know how can I run java programming in lynux operating system. thanking you
software: iam not understanding how should i get interest in projects, plz guide me
About Garbage Collection: Garbage collection is automatic as well as it can be forced. Suppose in your application you want to collect garbage before the collector does it periodically, which you may require, then the procedure is this. Runtime runtime = Runtime.getRuntime
abstraction: some one tell me about abstraction of object in java, in abstraction if two objects denote same object it means (they have same instance variable and method also same or not). i am confused, will u help me thanks rahul
Can any one know Is there a source for CORE JAVA objective type questions?
Question are good but limited: Questions are very good and answers are also very good easy to understand. But number of questions are not sufficient. Should number of questions.
hi friends: This is prabalya doing my b-tech 3rd yr recently i have joined for java i need some more good interview questions & answers related to core java please mail to my id
Thanks: Thanks for providing valuable questions. Vishal Rastogi
dear sir: i am chanchal arora. i want to become a Java programmer. i am learning java. Please send the interview questions on Java and any material related to Java to my mail id.
Core Java: Excellent Interview Questions.
B.tech learning Java: Dear sir, i am praveen kumar pogu. i want to become a Java programmer. i am learning java. Please send the interview questions on Java and any material related to Java to my mail id. Thank you sir,
core java: could u explain wrapper classes in detail? at which situations can we use?
hello sir: i m harsha i have just started java classes in niit can u plzzzz send information related to core java and advance java so that it becomes easy for me to understand the concepts of java and some examples of easy coding for frames and others
Garbage collection: For forced garbage collection we can also do: System.gc();
Interview Question: Pla send me a lots of interview question bcz i m working as a trainee in soft com. but i want to boost my knowledge for MNC's, i wait ur site response
java: Respected sir, I want to become a java programmer. Iam requesting u to give ur valuable guidance and java interview questions in every area and send related material to my mail ID. please sir. thanking you
Needs contents and Coding Basics: Dear Sir, I have joined a company Databorough India Private Ltd. recently as a Java developer in Lucknow. I am the trainee over here for 6 months. Please provide me the Java Contents and Study material and guide me that should i join any co
Good: I want to become a good programmer in java. so please guide me to become a perfect programmer in java by posing some questions on JAVA.
frnd Hi friends i hv joined my java/j2ee course but i need ur advise that which book is more preferable requesting java Interview Questions Dear Sir, my name is venkat, im looking for a Software job in java/j2ee platform.Plz,send me related material regarding hte Interview Questions & answers on core java ,Servlets,JSP,JDBC. questions java questions importent interview qustions bye Query can some one please give more information on threads & Garbage Collection. Please give guaidance in java Hai.. Sir Iam Praveen kumar ur java questions are very interseting and very usefull for me because i want become a very good programmer in java, and also ur questions are very usefull in interview also so please give me guaidance in java to get a job java Questions sir i want thread question. ok thanks Core Java Related Information Respected sir, I am hema intrested in java programming.I request u give me ur valuable guaidance and java interview question and send related material to my mail ID. thanking you Hello Hello sir i am studying in comp.engineer.i want to become a java prog. so pls sir send me all the details abt java on my mail regularly. java training hello i have gone thru ur questions , i have lit bit idea abt all of them i want to go training in java/j2ee,does it takes more time to start from beginning (base) give a reply to my mail id thanks Java Important Interview Questions Dear sir , Pl. send me important question in my ID related to all topics of java so that I can become a Java Professional. Thanking You. Hello Respected sir, I want to become a javaprogrammer.Iam requesting u to give ur valuable guidance and java interview question in every area and send related material to my mail ID.please sir. Thanking you please send me more questions it is a very good excercise for fresher who are preparing for sun exam . Thanks Dear Sir/madam, Thanks for providing sach a usefill things. questions please sand good questions about java. 
It is very importent Suggetion for every one Respected Sir, Pl Send me importent Interview Quetions related to Java programming Language. If any importent Notes also available near you then it will also send me sir. Thank you core java Hello Sir/Mam i want to become a java programmer,and material provided by you is very important for interview pupose.please send some more question on my id. your regards, Vivek Harinkhere Regarding integerview Sir Pls send ur interview question to ezhilvannan_88@yahoo.co.in or kams2676@rediffmail.com thank u and waiting for ur mail advance java i am srikanth i want a advance java &sql good quesation & answare more please sir JAVA beginer Respected sir, I want to become a javaprogrammer.Iam requesting u to give ur valuable guidance and java interview question in every area and send related material to my mail ID.please sir. thanking you good for the sutdents this very good site for the java bigner. hi it is very useful for me. thanks for u r site Regarding the question hello sir/madam, The question you have provided are really helpful and have lots of information Regards Ashish Thakur Guide Me Hi I am working in Java/J2EE technology since last 2.5 years but can't get the environment in that one should help others to teach.So I have got all the knowledge myself.But right now I want to switch for big company but whenever I give interview in java interview questioms hello sir/mam I am happy with the things you have provided but i want some more so please send it to me thanking you sangram shinde Regarding doubts HI, I know the core java lightly,i need good book for core java please give me the details. regarding interview questions helping me a lot. so,please suggest me the good book for corejava Nice Questions These all are the filtered questions on core java ,it is very helpfull.But what i personally thinks if you add more questions in this then it will more benefitial for the readers........ 
Thank you very much request Respected sir, my name is suraj trying to make a java programmer.I request u give me ur valuable guaidance and java interview question and send related material to my mail ID. thanking you JAVA Thanks to u people that u r really helpful to get knowledge about the interview questions related to java keep giving updated knowlegde about the same thanks Interview questions hi i am krishna looking for software job i want know about the core java interview questions and answers sir please can u send regarding interview Q & A Thanking u sending the questations Please send me the java interview questions. Tibco Can you suggest me to do Tibco courses in Chennai and some authour book for Tibco software and send all the questions and answers for Tibco for my mail id. Tibco please send me some useful information about Tibco software java question sir i want to become a s/w developer plz send me all java question and their ans thanking u hello sir i m doing MCA...i want to be a good java programmer...plz send me the sufficient details on my id hi sir, Hi sir, i harikrishna resident of warangal . please send me java material to my mail such that i can develop my knowledge in java Thanking you, i want job hai sir, i am doing MCA...i want to be a good java programmer...plz send me the sufficient details on my id. java Questions Dear sir Iam learning java pls send me core java information and interview questions Req. hole java interview question with answer please send me on my mail"devmits2002@gmail.com Java Questions Dear Sir, Please send me the java interview questions. what is core java how to write core java prog java interview quoestions just send me the queostions mainly on core java Abt all java questions asked in interview Sir,i have done the mca as well as adv. java course and searching the job.I want all the java questions that are commonly asked in interview,so pls send me the same. 
Thanks, pravin Asking for Java & J2EE related Interview Questions Dear sir, I am working as a Trainee in a Software company.I am visiting your website regularly and also i have been impressed by that site.So i want Java and J2EE Tutorials to improve my knowledge in the trinee periods.i hope that it will help wh Detail information regarding inerview question hi, Its very glad to know more about java interview question if it can be more understandable & to know how to implement each & every topic i.e how this different class & finction can used in programming(for ex. polimorphism, inheritance, encaps request dear sir, i am sailaja please send recent and frequently asked interview questions(corejava) to my nail id. thanking you sir. java iam learning java pls send me core java information and interview questions interview question I am persuing MCA.i am learning Java pls send me core java related information and latest interview questions hi cannot make a static reference to the non-static method getClass() from the type Object this kind of error is occure in ur given example so please correct it by creting the object of class Test Hi Hello sir. please send me leates interview qwestions to my mail-id. Regards, Vikas gud could have been better, if few topics were briefly described. Java/j2ee I Need some real time problem in java with ide. Awesome Awesome Tutorial corejavainterview question pls sent java interview question program please send me java program to display stars in an equilateral triangle form. HI Plaese put some more questions i need java and j2ee interview questions i need java and j2ee interview questions to my mail.my mail is enetered already.so please send it java Differnce between swing and struts core java interview question Dear sir i am shailendra jain.i have completed in B.E in 2007 in industrial & production branch.so i am preparing in interviewbecause i am seaching the job. Mr. PLEASE SEND ME DAILY LATEST QUESTION ON JAVA/J2EE/JSP,DHTML. 
WARM REGARDS DHARMENDRA JAIN Interiew Questions please send me latest interview questions to my mail-id. questions Hello sir, PLease send some important questions on JDBC.As I am preparing INterview debugging questions i want debbuging questions for java interview and practise.please give me some interview questions... seminar dear sir please give me some important points which i must be discuss in my seminar within core java. interview question(java/j2ee) Please send me the java/j2ee interview questions interview objective questions Hello sir, I am working in java/j2ee technology i am visiting your site daily please send me latest interview question and answer in my mail id. Request for latest interview QA for java/j2ee Hello sir, I am working in java/j2ee technology i am visiting your site daily please send me latest interview question and answer in my mail id sir. with Thanks and regards, S.Saravanan hello hello sir, I am a student of IT. I visited your site. I really like questions but please send me some new questions on my ID dear sir hi, i want latest interview qwestions. pls send to my mail-id interview questions sir please send me technical interview ques to my mail id Hello Sir, i want latest java/j2ee interview qwestions. pls send to my mail-id Example for finalize() Sir.., I am regulary visit ur website..it is very usefull for me. now i am working in java/j2ee. i want the clear example about finalize(). Thanks & regards Manigandan G need java,j2ee intw questions hi, i reularly uses ur siite i need latest and impotant intw qustions for java/j2ee plz it's urgent plz send to mymail. A polite request by a student goodevening sir. i am manish. rigtnow i am my MCA final year.i want to work in java technology. plz send latest interview question of core java on my mail id. i will be always grateful for u. thanks. 
java Hello sir, I am working in java/j2ee technology i am visiting your site daily please send me latest interview question and answer in my mail id sir. with Thanks and regards, your's Madhu hi Hello sir, I am working in java/j2me technology i am visiting your site daily please send me latest interview question and answer in my mail id sir. with Thanks and regards, manish bansal request for interview questions Hello Sir, I am a fresher,have completed my graduation in I.T with First Class.Now I am preparing myself for interviews.Si please send interview questions on java on my ID.Thank You S/w Hello sir, I am working in java/j2ee technology i am visiting your site daily please send me latest interview question and answer in my mail id. Thank you S/w Hello sir, I am working in java/j2ee technology i am visiting your site daily please send me latest interview question and answer in my mail id as well as i want to basic difference between final,finalize and finally with some small co good excellent stuff. I recomand every one should read this. MSc freser plese send me letest core java , J2EE , JSP , SERVLET , struts ,and hibernate Questions plese required very urgently from shyam sakalley Dear friend Please send some important questions on java ,computional thinking and dataStructures. now I'm preparing for the ITJob java jinterview question hello sir, plzzz send important question on jsp,servlet & core.i am preparing for interview. send me java materials Hello, I read many of the questions I liked those very much. I am new student of java plz guide me. Need More Java and J2EE questions The questions are really helpful. Please send me more questions. Arindam interview questions hi... can u send the faqs with answers...... i need it urgent .... 
thanks rex Interview questions Dear sir, I am satisfied by reading these questions and ans .Please send me the latest interview questions in daily basis plz send interview faqs hi sir, i have read ur questions.its fine i want to java faqs on corejava and j2ee.plz send to me hin can u send briefly about design pattern which r using in real time environment core java question Hi This is deepak plz send me basic core java interview question. IIQ'S I wantjava faqs on corejava and j2ee.plz send to my Email jagadeshnaidu@yahoo.co.in java,j2ee,struts,Hibernate please send me technical interview ques (java,j2ee,struts,Hibernate) to my mail id java interview qustion please send me leates java interview questions to my mail-id. sort guation in core java pls send all core java related question Java/J2ee Questions Dear Sir / Mam, Pls send daily mails about java /J2ee important interview questions to me. I am searching job in this department. I am fresher. Pls. help me java tuition hi, i am a new reader of java,so please guide me to aquire some knowledge on java........................... Interview Questions Can u plz send me interview questions of core java java FAQs please send a perfect link for java FAQs with ans. helpme Hi im doing course on java but im not perfect at writting programs pls send me some examples of every topic hello can i have url of coreprograms completly Great Good set of interview q's. Really thought provoking q's. having more such q's would be appreciated. excellent this is greate combination of all those meterials special for new learner whose is as a new in java concept. interview questions hai sir plz send important programs about servlets Problem in accessing questions your collestion is good, but u need to add more questions based on advance java & creat a link to find it quickly. J2EE help Sir/Madam, I am a learner of J2EE basically on JSP & Struts framework.So I need your guide thoroughly. Please help me by providing materials & good example. 
java interview q&a send me some more questions and answer to my reference JAVA Respected SIR/Madam, I am Fresher in JAVA.Please send me about CORE JAVA and ADVANCED JAVA questions with answers. Thanking you Your's core java questions in this site good questions are given on core java,thanks for the entire team which assembled these questions,please send me some questions on methods fresher helo,iam fresher i wnat the quetions with answers on basics java concepts that is core java core java hi, i am srinivas i need code based objective type questions with answers new reader Respected sir/madam, i want to learn about coreJava. i m a new reader of Java.Can u send me some basics of Java...really this website useful for us........ Thank u sir/mada hello, pls send me to java Faq's and servlet ,jsp,struts faq's and books Thanking you, threads what is use of the threads ,i need some exmaples in real time applications core jave hello sir/mam, i want u to send me the solved question papers for codes of programming in java... java interview questions Quary Sir, I am fresh for JAVA, Kindly intimate me the easy way to learn the JAVA and core JAVA Recruiter Can i undersatand the difference between Java and Core Java? also what are the tools involved in Core Java? searching a job sir i am final year mca student doing project in six sem and then i learn java,j2EE in niit bangalore any job oppening please inform me ,i read the your question&anser it is very help thank you sir java interview questions please send me java and j2ee and jsp questions and answers java interview questions answers please send me java interview questions answers java questions i want multiple choice questions for scjp entrance test and also answers core java very good in reading these concepts but i need some FAQ's in interviews.... pls Dear sir, pls send me some strutus question Thanks & regards vijay how to create jsp in servlet. how to create jsp in servlet. 
I know to accomplish this we need to extends our servlet with JspBasePage class but now sure completly. java interview question -programming based Dear sir I have seen your postings and questions ,they are really very benificial for freshers.also please send some of the question regarding the programming concept. Regards vivek pandey Hello sir/mam, i am learning java,can u plz post me inerview questions regarding core java, adv java and struts. Java question Dear sir, I want some good question that will help me in oracle interview. about java what the actual difference between system.out.print() and system.out.println?And give one example? Interview Question Dear Sir, Please send interview based questions and small programs for my reference. java sir i am learner in java & j2ee so pls send me d questions and guide me java What we will call button in java? request hi, i am learning java,can u plz post me inerview questions regarding core java,struts.thanks for the site corejava questions interview questions Sir/Madam, please send me the FAQ in java interviews nice information Hi, really the information provided by this website is valuable and helpful to understand the concepts Good books for Sun Certified Web Component Hello sir/mam, I need one good book for sun Certified web component certification, what are all websites can help for this certification exam please reply me. Thankyou swarnalatha needed questions and answers hi dis is madhuri. can u send me d core java, advanced java, j2ee and jsp questions with answers. objective type questins please send me the objective questions of java/j2ee is urgent neede somthing hi,can u send me some importent programs and FAQ's in core java and advanced. abstraction hi, please send me an example of abstaction (rather than color eg)with code?i faced this question .. 
thanks suria hi send me some interview questions on core java core java send me some interview questions on core java Plz send scrpit programms I Know some basics in script,but i don't know how to write the programm. so lz send script programms. Oracle Objective Questions and Answers Send me objective question and answer related tocorejava please post some notes in core java and programes i like java i want to study java so plse send me some notes about core java and some programes to it java quuestions i am doin g core jav , will u plz forward me the questions asked in exam (objective questions only) corejava & j2ee i neet java & j2ee materials Java interview can u plzz send me the tricky questions ,that can be asked in java interview? Thanking in anticipation.... information PLZ send me the All quation java questions i want to know the complete details of java with examples could any one suggest and how java is used in real applications java and .net please send me good multiple choice questions of core,advanced java and c#,asp.net and vb.net can u send me d core java, advance java ,servlets can u send me d core java, advanced java, j2ee,servlet,jdbc and jsp questions with answers topicwise urgently. core java Hi, This is very useful for candidates for preparing for their interviews. Regards. java help me how to install java & how to verify it.. i want the tutorials of java` hello,i want some good tutorials of core java,advanced java,j2ee,jsp,servlets and also some tutorials based on dotnet.with an easy to understand process and the best tutorial plzzzzzzz. i hope u wil send me the tutorials of all these categories and pls send sir, i am new learner of advanced java. I have a little bit knowledge about oops concepts. could you send the importance of oops concepts. I have a small doubt for publishing web pages on the INTERNET we have HTML, what is the need of JAVA. 
exact answer
Question: Read the following program:
public class test {
    public static void main(String[] args) {
        int x = 3;
        int y = 1;
        if (x = y)
            System.out.println("Not equal");
        else
            System.out.println("Equal");
    }
}
Answer: it does not compile. The condition x = y is an assignment whose result is an int, and a Java if condition must be a boolean.

SEND MORE QUESTIONS ON JAVA
PLEASE SEND ME SOME MORE QUESTIONS. YOUR QUESTIONS ARE VERY EFFECTIVE & INTERESTING. YOURS FAITHFULLY...

how to use netbeans
Please send me the details of "how to use netbeans" in a java project (a project written in java, with an oracle database).

core java material
i want core java material

struts notes
hi, send me notes about struts with some examples. I don't know anything about struts.

objective questions for core java
i want to be strong in java

Tricky question
Hi, this material is nice but quite simple. I request you to send me some tricky questions in core java. Thanks...

good material
Though the material is good, please upgrade it, because Java 7 is coming now.

Please upgrade
Hi, the material is very good, but I request you to please upgrade it; for example, JDBC is still 2.0, but now 4 is coming.

java interview questions.......
hi friends, I'm quite impressed with the above questions, but I still need some more questions based on them. Can anyone tell me what kind of questions will be asked at Infotech Enterprises, Hyderabad? Because my final round is yet to happen.

debugging for java
debugging and interview questions

question related to JDBC
What is the maximum length of string that is permitted in one query?

java interview questions
hai friends, could one of you please send java interview questions to help my career?

java
send me the java debugging questions

Thank you
Thank you for the good interview preparation questions.
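The program quoted in the "exact answer" comment above will not compile at all, which is the trick behind the question. A corrected sketch is below (the class name `AssignVsCompare` and the helper method are illustrative, not part of the original question):

```java
public class AssignVsCompare {
    // The original "if (x = y)" fails to compile because x = y is an
    // assignment whose result is an int, and a Java if condition must be
    // boolean (unlike C/C++, where any nonzero int is truthy).
    static String compare(int x, int y) {
        return (x == y) ? "Equal" : "Not equal";
    }

    public static void main(String[] args) {
        System.out.println(compare(3, 1)); // prints "Not equal"
    }
}
```

The same trap *would* compile if x and y were booleans, since `x = y` then has type boolean, which is why interviewers like this question.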
But it has very poor English and needs a lot more questions.

Transient Variable Example
class Employee implements Serializable {
    String name;
    int age;
    transient long salary;
}
When serializing an instance of this class, the employee's name and age will be written to the output stream, but the salary will not be stored.

OVERLOADING AND OVERRIDING
CAN YOU SEND ME TWO EXAMPLES OF OVERRIDING AND OVERLOADING?

concept of thread
thread

java interview questions
excellent stuff to understand more clearly

Missing return type
A function is not written like this: public myFunction () { — the return type is missing. Chandan

java interview questions
thanks, this is just the perfect site to clear all doubts, and a 100% chance to get selected. thanks.

checked exception
Can we create a user-defined checked exception?

I Need Core Java Materials.......... Pls Send Me
Hi sir, I need core java materials.......... please send me interview questions and answers on core java. regards, jo$

transient variable
Tell me the types of transient variable.

hi goodmorning.
Hi sir, I need more material for java. These notes are excellent, but I still need more information about java packages.

core java material
I want core java material with simple examples.

Java
I want to be strong in java and I want to learn more about jdk1.5.

Java
Hi, I want to learn more about java 1.5, so please send me the material regarding java 1.5.

java Question
Please explain the internal working of System.out.print(); what is the need of System.out.print?

transient
Transient variables cannot be serialized. The fields marked transient in a serializable object will not be transmitted in the byte stream. An example would be a file handle or a database connection. Such objects are only meaningful locally, so they should be marked transient and excluded from serialization.

objective question
Please send me oracle objective questions.....

oracle database
Please give me objective type questions.

query
hi, this is a nice approach; the questions are good

core java material
nice, can you please send core java material

Nice
Hi, this is Subir. The questions are good, but I need more FAQ questions asked by interviewers.
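The transient Employee example above can be made runnable by actually round-tripping an object through serialization; a minimal sketch follows (the wrapper class `TransientDemo`, the `roundTrip` helper, and the sample data are illustrative additions). After deserialization the transient salary field comes back as its default value, 0.

```java
import java.io.*;

public class TransientDemo {
    static class Employee implements Serializable {
        String name;
        int age;
        transient long salary; // excluded from the serialized byte stream

        Employee(String name, int age, long salary) {
            this.name = name;
            this.age = age;
            this.salary = salary;
        }
    }

    // Serialize to an in-memory buffer, then deserialize a copy.
    static Employee roundTrip(Employee e) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
                out.writeObject(e);
            }
            try (ObjectInputStream in = new ObjectInputStream(
                    new ByteArrayInputStream(bytes.toByteArray()))) {
                return (Employee) in.readObject();
            }
        } catch (IOException | ClassNotFoundException ex) {
            throw new RuntimeException(ex);
        }
    }

    public static void main(String[] args) {
        Employee copy = roundTrip(new Employee("Asha", 30, 90_000L));
        // name and age survive; the transient salary is reset to 0
        System.out.println(copy.name + " " + copy.age + " " + copy.salary);
    }
}
```

This matches the later comment about file handles and database connections: anything only meaningful in the local JVM should be marked transient and reconstructed after deserialization.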
Thanks hashmap i want to know more about hash map.Related qns and examples are required. Java the explanations of the java related questions are superb.... Very Nice Hi!This is very useful to me,while preparing a viva and interview purpose.If u send these to mail id it is more better. java core questions it was very site to provide java que. and very helpful in very easy lang. to describe. Anyone can easily understand this language. require some core java programs along with questi can u help me in getting some complex core java programs with their questions.It would be greatful if i get some mini java projects question Respected Sir, I m very happy for question given by u. Sir i want some output type question with answer. plz send to me. Question is wrong Question: How you can force the garbage collection? Answer: Garbage collection automatic process and can't be forced. not true, //force garbage collection... System.gc(); Core Java Simply super java q? why we use refrece to superclass objest ? why we use refreance variable of superclass object ? Core Java Interview Questions appreciable!!!!!! corejava these are nice answers Java Certificate Sir, i need some material relevant for java certificate comment what is default access specifier .it is default = package or not JSP Plz forward jsp complete beginer material objectives question and answer on the oracle no Good question - answers this are good questions and also answered well core java I went to core java perfectly encapsulation encapsulatoin java interview questions hai this is kumar. please some im[ortant interview question of java core j2ee,applet, thead,servlet, webtechnology, jdbc etc. project please give topic on project. and how can design graphics in java. interview question respected Sir....... please send me as possible as core java interview question ......... 
Transient Keyword in Java Can u give an example for transient variable in java Applet and awt The iterview is very good but i request to u that give us some example of applet and awt. java java objective questions A nice set of questions tutorial was good.Reasoning excellent. Like to see more like these. oops compleate disscription of oops java material hi sir i want full java material core and advance and j2ee (with quastions and answers ) corejava interview questions it is ok java i want to know about 1) how many packs use in java 2)what is program sequance java iterview good qusetion and answer and goods thinks interest for all question good for all question. Very good website thid web site is ver helpful for interview preparation. feedback Hello All of you, This is very good interview questions. Can I get this type of questions with answers on my email id. Plz reply me. Regards & Thanks. core java material Hello there, Can you please send me core java material and interview questions. Core Java Interview Questions think the question Need help Hello, Could you please send me some thread related interview questions need question paper sir always i need this type of question paper. java these questions gives an idea to face interviews. commemt nice to read java concept Debugging Questions, Interviewquestions java I want debbuging questions for java interview and practise.please give me some interview questions... Java GOOD , U SEND MANY QUESTIONS RELATED TO JAVA FOR FREE OF COST . THANK U! about clear Debugg in java side about clear Debugg in java side and iwant to know types of debugger in java code INTERVIEW QUESTIONS VERY NICE AND BEAUTIFUL QUESTIONS java inteview questions ABOUT CORE JAVA core java material i need help to understand core java concepts file so good information for students thank you hi i read all questions.i got a horrible knowledge.thamks. 
core java I am very glad to have such a nice introduction in java hopefully i need the program with example which make me easier to understand thank you! java debugging questions and answers i want debugging questions and answers in java comm Hi sir, I Need Core Java Materials.......... Pls Send Me, Interviewquestions & answers on Corejava requesting for information on java hi sir,i wanna be more specialized in threads,streams,networking and "html & javascript" concepts please send the best information and interview questions regarding this Thankq so much for providing free information..... why java doesn'tsupport PROCES BASED MULTI TASKING please make it clear with example why a class can extend only one class ? but it will implement n no.of interfaces why is it so?......... and one more is see this example class A extends class B{ ------------------ --------------------- } we know that every class is a sub class of Object clas Good Question This website is good for java freshers. myopenion excellent.updata with any new questions thank u sir Project /Risk Manager The Questions and answers are very educative and refreshing hai please send me important question about java about core java please give me interview questions on java Java Hi Sir.... I want java materials & java questions.Please send me. Good questions! Suggesition These question are good for interview however there should be option to read question on topic based also.Thank You Regarding Oops. Thanks For giving information related to Oops.It's nice to project Oop features between various Oop languages. That can reflect Java Technology extensively my openion your information on questions with answers has been very useful me because .i am preparing to face the it interview so i kindly requet you to continue your best works like what you have given. and i am very happy to send my comment thank you. java interview questions hi this is s nice place to solve ur queries.... 
hello sir,this is priyanka questions are almost good but u should provide more questions from basics of java java interview question hello ,i need java interview questions depends on the syllabus of j2ee and j2se....if u have plz send me immediatelyyyy. Need to remove the compiling error System.out.println(Test.getClass().equals(test2.getClass())); //false remove the above line and add the below line to compile it. Because getClass() method is not a static. System.out.println(test1.getClass().equals(test2.getClass())); //false technical question hi, I want examples for threads,applets. and i want explanation about JAR files. i want technical question in java & c collection frame work explain i details with programmes java this questions are really nice. Commend and greetings hi ....i hope u r fine and i m really happy that u have added such useful questions...thanks abt question please kindly be focus on some output type question like u just give the out put type prog. abt question please kindly be focus on some output type question like just give the out put type prog. java Sir, I want java Interview questions & explanation about Anynomious class core java are they enough questions to prepare for an interview regarding core java? If there are more questions realted to core java then please send me. interface Vs abstract class explain about interface and abstract class notes i want core java notes and interview questions for java please send me CORE JAVA Java interview Questions corejava i want 2 know........... how 2 learn core java for a better understandable.........or in a easy manner...... why sunmicro choose the tea&cup simbel of java why sun micro choose the tea&cup simbel of java and explanation? Java Interview Questions I want corejava,servlet,jsp interview questions & also struts notes with interview questions. 
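The compile-error fix described above ("cannot make a static reference to the non-static method getClass()") can be shown in a few lines. This is a sketch with illustrative names (`GetClassDemo`, `sameRuntimeClass`); note that in the page's original example the two objects presumably had different classes, which is why that comment expects false, whereas two instances of the same class compare equal here.

```java
public class GetClassDemo {
    // getClass() is an instance method inherited from Object, so it must be
    // called on an object reference, never on the type name itself.
    static boolean sameRuntimeClass(Object a, Object b) {
        return a.getClass().equals(b.getClass());
    }

    public static void main(String[] args) {
        String s1 = "hello";
        String s2 = "world";

        // String.getClass() would not compile; use an instance, or the
        // class literal String.class when no instance exists.
        System.out.println(sameRuntimeClass(s1, s2));      // true
        System.out.println(s1.getClass() == String.class); // true
    }
}
```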
good questions
I learn more with this site; I like this site.

java
please send the java, c and c++ debugging questions to my mail

overriding
I want the exact use of overriding with a real-time example, explained briefly.

computers
sir, I want java, os, c, c++, oracle, uml and web technologies material

Example error instanceof and getclass
I am sending the corrected code:
interface one { }
class Two implements one { }
class Three implements one { }
public class Testinstance {
    public static void main(String args[]) {
        Testinstance t1 = new Testinstance();
        one test1 = new Two();

java important questions
thanks for the java important questions; send me more important java interview questions and answers

very helpful
I used this site while preparing for an interview and I found the examples and explanations very helpful. Thanks!

java
it is a fantastic resource for any java learner

Wrong in the if statement block
The code below uses the wrong operator to compare two integer values. It uses the assignment operator = instead of == or the equals keyword. Below is the link where the error came from.

interview questions
Hi Sir, this is sudhakar. I am a fresher and I have applied for jobs. Please send important topics and interview questions. Thank You

i want source code in core java
i want all programs in core java; please send them to my e-mail (pradeepumate@gmail.com OR milestone_pradeep@rediffmail.com). please send me the related notes on my account. pradeep umate

java
java basics

Differences between jdk1.5 and jdk1.6
sir, I want to know the technical differences between jdk1.5 and jdk1.6 for my interview

hello
nice core java questions on this site

Objective Questions answers
send me objective questions and answers

JAVA
sir, i want only the core concepts and important questions

java interview question
java interview question

I want more details about interface & abstract class. It is an excellent website for java learners...

Core and Advanced Java
Please send me core as well as advanced Java materials.
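The "corrected code" in the instanceof/getClass comment above is cut off mid-line. A complete, compilable sketch along the same lines is below; the type names are capitalized to Java conventions (`One`/`Two`/`Three` for the commenter's `one`/`Two`/`Three`), and the extra `test2` variable is an illustrative assumption about where the truncated code was heading.

```java
interface One { }
class Two implements One { }
class Three implements One { }

public class TestInstance {
    public static void main(String[] args) {
        One test1 = new Two();
        One test2 = new Three();

        // instanceof asks "is this object assignable to that type?",
        // so both objects pass the interface check...
        System.out.println(test1 instanceof One); // true
        System.out.println(test2 instanceof One); // true

        // ...while getClass() returns the exact runtime class, which differs.
        System.out.println(test1.getClass().equals(test2.getClass())); // false
    }
}
```

This is the usual interview contrast: `instanceof` is true for the whole type hierarchy, while `getClass()` comparisons are exact.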
Please, core java tutorial,Advanced java This site is very nice to get good information,mainly way they presented is very nice and super. Regards Venkatesh.V 9700143277 java i need core java material thanks to google it is excellent book for learning java course at home and is usable for any student. about oops concepts is this same things between inheritance and interface. request send me some interview questions on java and .net Can u pls send interview materials about core java Hi, can u send interview materials on Core java, Java collections, C++ (on the whole). The above page seems quite useful... this is help to my job ............ java Very nice Question And Answere. java i want java access specifier detail with example and how to decide the path between two package and project in eclipse?plz send me detils about core java i want material of java and dbms Need a corejava meterial sir i want a corejava interview Questions please send me the URL for that...... corejava interview qustions method overloading method overriding with inhritence method overriding with java interface core java may i know the new version of java? java question java programiming java better question for solving query.........thank u nice This one is really a nice material for quick overview of java basics java i want chapter wise inteview questions on core java with examples Awesome This is awesome ! So helpful ! Thanks for sharing this ! keep up the good work buddy... java interview questions what are all the interview questions of java to my mail.please send the results immediately Regards The information cited is of much concern to me and helped me a lot as it is brief and correct.. I would request u to mail me more interview question other Oops languages ... Regards Is it possible to create button on click event in applet run by applet viewer?? Comment This tutorial is very helpful for our interview preparation.Thank u. 
java core sir i want to know that what the criteria of SCJP is?is it banefecial for us and important for the job? interview questions sir, plz send me interview questions of C, C++, oracle, & core java. Java This website is very useful for me..very thanks core java interview questions please send on my id core java These questions are really helpful to those who r looking for core java.. message to you this is very very help to the java people technology sir,i want core java notes and interview questions for java please send me... i want technical question in java & c... i want java,os,c,c++,oracle,uml.webtechnologies materials... why source file is alwys saved as .java extention? plzz help..............interviewer asked me in interview. versions i wana know about the history n versions of java student of mca final year i want more que. and ans.of java interview question? what is the deffrence between function and method ? and also give sutable example plz Core Java Please send me Core Java material with this Email ID. regarding java questions I offer my gratitude to the author/collecting this type of material.It is very useful and informative.it is a quick reference to the students. java material pls send java materials Regarding material for core java subject Dear sir, I want core java interview questions along with answers and in interviews they are asking basics of softwares like tomcat server versions, tools like eclipse,and also asking giving programs and make us to find errors so that i need program fine presentation please forword all the content of the cre java interview question........... 
corejava i want core java notes and interview questions for java please send me Java Data types i read this but according my opinion you can get more defination with examples on this link: core java i want detail about applet and uses of applet more information java interview question nice site,,,,useful for new commers java questions sir please give me information about volatile variable?
http://www.roseindia.net/tutorialhelp/allcomments/71
StarkNet Alpha 0.7.0

Spoiler — massive version alert!

TL;DR

- StarkNet Alpha 0.7.0 released to Goerli; packed with improvements
- Contracts can now be upgraded using the Proxy Upgrade Pattern
- Contracts can now emit Events
- Support for the long-awaited Block Number and Block Timestamp system calls

Intro

We are happy to release Alpha 0.7.0, a version packed with new features and improvements. One of the best stimulants to StarkNet over the last few months has been the increased involvement of the community in shaping StarkNet’s future. This version addresses some of the community’s burning needs.

Changes to Naming Convention

The observant reader might have noticed that the previous StarkNet Alpha release was named Alpha 4, whereas we are now releasing Alpha 0.7.0. We decided to omit the dedicated Alpha version number and rely instead only on the associated cairo-lang version.

New Features

Contract Upgradeability

OpenZeppelin’s Proxy Upgrade Pattern is now fully supported for contract upgrades in StarkNet. The Proxy pattern is the common method to enable contract upgrades over Ethereum. Alpha 0.7.0 enables this pattern over StarkNet. We made a short tutorial to demonstrate a basic implementation of the pattern, and OpenZeppelin is already hard at work implementing a standard contract for the proxy pattern; see the prototype.

Block Number and Block Timestamp

Alpha 0.7.0 adds two new system calls that many devs have been asking for. These calls allow a contract to access the block number and the block timestamp. The block number returns the number of the currently executed block. The block timestamp returns the timestamp given by the Sequencer at the creation of the block. You can see an example of how to use these features in the tutorial.

Events

Surprise! A feature that was planned for a future version has sneaked its way into this earlier one.
StarkNet contracts now support defining and emitting events, allowing them to expose execution information for off-chain applications to consume. Ethereum developers will find the semantics and syntax very similar to Solidity. You can read the documentation, or follow the tutorial, that explains this feature.

Removed %builtins Directive

The %builtins directive is no longer needed in StarkNet contracts. This change followed a community discussion about the contract extensibility pattern on StarkNet Shamans. It significantly simplifies the usability of this extensibility pattern. For example, the following contract will be changed from:

%lang starknet

# This is the "%builtins" directive.
# It is not needed anymore.
%builtins range_check

@view
func add(x : felt, y : felt) -> (res : felt):
    return (res=x + y)
end

To this:

%lang starknet

@view
func add(x : felt, y : felt) -> (res : felt):
    return (res=x + y)
end

You can check out the ERC-20 standard contracts, which use the new pattern.

External Functions Support Arrays of Structs

Alpha 0.7.0 supports passing and returning arrays of structs in external functions. This additional functionality allows Account Contracts to better support multicalls. Multicall is a powerful feature of Account Abstraction that allows an account to make multiple calls in a single transaction. An obvious use-case is that of creating a single transaction that calls allowance and then transferFrom. We look forward to seeing what the community does with it.

Improvements to StarkNet CLI

Support for Pending Blocks

Pending Blocks were introduced in the last minor version (v0.6.2) and offered faster confirmations on transactions. This version includes support for querying those blocks via the StarkNet CLI. To use it, in every CLI command that takes block_number as an argument (contract_call/get_block/get_code/get_storage_at), we can query StarkNet with respect to the pending block by specifying block_number=pending.
Support for Account Contracts

StarkNet uses account abstraction, i.e., all accounts are implemented as smart contracts. The first implementations of account contracts were done by Argent and OZ, but we expect many more to come. In StarkNet, all transactions must go through an account contract, and the CLI now allows interaction with StarkNet Alpha directly via account contracts. See the tutorial on how to set it up. Similar functionality was also added to StarkNet.py and to Nile in the last month.

L1<>L2 Messaging in the Testing Framework

Alpha 0.7.0 introduces the Postman. The Postman enables developers to use the testing framework to test more complicated flows. At a high level, it mocks the StarkNet Sequencer’s responsibility of passing messages from L1 to L2 and from L2 to L1. It makes sure messages that are sent via the Solidity messaging contract will appear at the destination StarkNet contract, and messages sent from a StarkNet contract will appear in the Solidity messaging contract.

And More Features

Alpha 0.7.0 provides many more features and changes, like the addition of an efficient square root function to the math common library. A full list appears in the changelog.

Next Up?

Initial Fee Mechanism support will be released in a matter of weeks, as a sub-version of StarkNet.

More Information?

starknet.io: for all StarkNet information, tutorials and updates.
StarkNet Discord: join to get answers to your questions, get dev support and become a part of the community.
StarkNet Shamans: join to follow (and participate!) in StarkNet research discussions.
https://medium.com/starkware/starknet-alpha-0-7-0-26e04db03509
We had an old car tachometer laying around the labs (beats me why...) that seemed perfect for some type of project. But what? Since it is pretty common to see a car-looking tachometer as a CPU usage meter on the desktop of a PC, we decided to create a hardware version of this for the Raspberry Pi. Basically what we are going to do is read the CPU usage on the Raspberry Pi, then convert that CPU usage value into a corresponding frequency. That frequency will be output via GPIO on the Raspberry Pi pin 11 to drive the tachometer.

----

First step is to characterize the tachometer and find out what type of frequencies made the needle move. The Tektronix 3252C made easy work of that.

----

The spreadsheet below shows what frequencies move the tachometer needle to what position. We also used this spreadsheet to calculate a multiplier to adjust the CPU usage reading on the RaspPI to a 0 RPM to 8000 RPM reading on the car tachometer. It's a pretty simple calculation and you can see how it is used in the Python source code below.

-----

The tachometer needs 12VDC to power it, but the input signal to move the tachometer needle needs to be 5VDC. A 7805 voltage regulator solves that problem. Also, to buffer the RasPI GPIO a 7404 was used to drive the RasPI signal into the tach. The connection looks like this:

----

Using some Python code the end result is this: In the video you can see how moving the mouse around on the Raspberry Pi increases the CPU usage, making the tachometer reading increase. When the Chromium web browser is launched the CPU usage goes to 100% and the tachometer needle 'redlines'.

-----

The Python code is straightforward. Drop us a line if you decide to duplicate the build!

#
# Program makes a simple car tachometer read CPU usage percentage.
#
# WhiskeyTangoHotel.Com - December 2014
#
# To run this program you must first do a one time install
# of the following from the terminal command line.
#
# sudo apt-get install python-pip python-dev
# sudo pip install psutil
# sudo apt-get install python-psutil
# see: for PWM info

import time
import psutil
import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BOARD)
GPIO.setup(11, GPIO.OUT)  # Pin 11 drives the tachometer

# p = GPIO.PWM(channel, frequencyHz)
p = GPIO.PWM(11, 200)  # move the tach needle to about 1/2 scale
# p.start(dc) # where dc is the duty cycle (0.0 <= dc <= 100.0)
p.start(50)

print ' '
print 'SELF TEST: Tach to about 1/2 scale for 5 seconds...'
time.sleep(5)
print ' '

i = 0  # just a counter
adjust_cpu_to_hz = 3.8
# adjust_cpu_to_hz is calculated on the tach characterization spreadsheet.
# Tektronix 3252C was used to feed input into the tach to create the
# characterization spreadsheet. Discover how many Hz needed to move needle.

while(True):  # loop forever
    i = i + 1
    # read the RasPI CPU Usage. Store the value in var cpu_usage.
    cpu_usage = psutil.cpu_percent()
    # use adjust_cpu_to_hz to scale the cpu_usage result to Hz within
    # the tach's range
    p = GPIO.PWM(11, cpu_usage * adjust_cpu_to_hz)
    p.start(50)
    print 'Run #:', i, ' ', cpu_usage, '%', ' ', cpu_usage * adjust_cpu_to_hz, 'Hz out to tach'
    time.sleep(3)  # allow x secs to give the CPU a breather
    p.stop()  # stop PWM (also end of while loop)

input('Press return to stop:')
p.stop()  # stop PWM
GPIO.cleanup()  # Take out the trash

----
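The CPU-to-frequency scaling in the listing above boils down to a single multiplication. Isolated as a standalone sketch (the 3.8 multiplier comes from the tach characterization spreadsheet; treating the fit as linear is this sketch's assumption, and any other tachometer would need its own constant):

```python
def cpu_to_hz(cpu_percent, adjust_cpu_to_hz=3.8):
    """Map a CPU usage percentage (0-100) to a tach drive frequency in Hz."""
    return cpu_percent * adjust_cpu_to_hz

print(cpu_to_hz(50))   # mid-scale load -> roughly 190 Hz
print(cpu_to_hz(100))  # full load -> roughly 380 Hz (full-scale deflection)
```

Recharacterizing a different tachometer only means replacing that one constant.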
http://www.whiskeytangohotel.com/2014/12/raspberry-pi-cpu-usage-tachometer.html
Scala Iterator indexOf() method with example

The indexOf() method belongs to the concrete value members of the class AbstractIterator. It is helpful in searching for values and then indicating their positions in the stated iterator.

- Method Definition: def indexOf(elem: B): Int
  Where, elem is the element to be searched.
- Return Type: It returns the index of the first occurrence of the element elem in the stated Scala iterator.

Example:

Output: 3

Here, the value 9 in the indexOf method is present in the third position of the iterator, so it returns three.

Example:

Output: -1

Here, the value stated in the indexOf method is not present in the iterator, so it returns -1.

Note: If the value given in the indexOf method is not present in the stated iterator, this method will return -1.
https://www.geeksforgeeks.org/scala-iterator-indexof-method-with-example/?ref=lbp
SYNOPSIS

lxc-unshare {-s namespaces} [-u user] [-H hostname] [-i ifname] [-d] [-M] {command}

DESCRIPTION

lxc-unshare can be used to run a task in a cloned set of namespaces. This command is mainly provided for testing purposes. Despite its name, it always uses clone rather than unshare to create the new task with fresh namespaces. Apart from testing kernel regressions this should make no difference.

OPTIONS

- -s namespaces
  Specify the namespaces to use for the new task as a pipe-separated list (e.g. "NETWORK|IPC|MOUNT".)
- -u user
  Specify a userid which the new task should become.
- -H hostname
  Set the hostname in the new container. Only allowed if the UTSNAME namespace is set.
- -i interfacename
  Move the named interface into the container. Only allowed if the NETWORK namespace is set. You may specify this argument multiple times to move multiple interfaces into the container.
- -d
  Daemonize (do not wait for the container to exit before exiting)
- -M
  Mount default filesystems (/proc /dev/shm and /dev/mqueue) in the container. Only allowed if MOUNT namespace is set.

EXAMPLES

To spawn a new shell with its own UTS (hostname) namespace,

  lxc-unshare -s UTSNAME /bin/bash

If the hostname is changed in that shell, the change will not be reflected on the host.

To spawn a shell in a new network, pid, and mount namespace,

  lxc-unshare -s "NETWORK|PID|MOUNT" /bin/bash

The resulting shell will have pid 1 and will see no network interfaces. After re-mounting /proc in that shell,

  mount -t proc proc /proc

ps output will show there are no other processes in the namespace.

To spawn a shell in a new network, pid, mount, and hostname namespace,

  lxc-unshare -s "NETWORK|PID|MOUNT|UTSNAME" -M -H slave -i veth1 /bin/bash

The resulting shell will have pid 1 and will see two network interfaces (lo and veth1). The hostname will be "slave" and /proc will have been remounted. ps output will show there are no other processes in the namespace.

AUTHOR

Daniel Lezcano <[email protected]>
http://manpages.org/lxc-unshare
On Thu, Feb 10, 2011 at 06:58:25PM +0100, Janne Grunau wrote:
> On Thu, Feb 10, 2011 at 12:39:39PM +0000, Måns Rullgård wrote:

> @@ -1423,13 +1423,15 @@ static int dvbsub_decode(AVCodecContext *avctx,
> >
> > #endif
> >
> > -    if (buf_size <= 2 || *buf != 0x0f)
> > +    if (buf_size <= 6 || *buf != 0x0f) {
> > +        av_dlog(avctx, "incomplete or broken packet");

I think it would be more consistent to use av_log in both cases; most codecs print a message after such "fatal" errors.

> > -    while (p < p_end && *p == 0x0f) {
> > +    while (p_end - p >= 6 && *p == 0x0f) {

Just to be pedantic: while this is nicer, thanks to required padding your original version could not overflow, so wasn't actually wrong. But it's ok either way.
http://ffmpeg.org/pipermail/ffmpeg-devel/2011-February/103665.html
How To Use the Python Square Root Function

There are many situations in which you will want to find the square root of a number in a Python application. Fortunately, Python includes some very powerful functionality for calculating square roots. In this article, I will teach you how to use the Python square root function. I will also show you how to calculate square roots without this function, and how to calculate the square root of every element in an outside data structure.

Table of Contents

You can skip to any particular section of this Python tutorial using the links below:

- What is a Square Root?
- The Python Square Root Function
- How To Calculate Square Root Without the Python Square Root Function
- Final Thoughts

What is a Square Root?

It’s hard to understand the square root without understanding squares first. In mathematics, the square of a number is the value that is generated when you multiply a number by itself. Here are a few examples:

- The square of 2 is 4, since 2 times 2 is 4
- The square of 4 is 16, since 4 times 4 is 16
- The square of 8 is 64, since 8 times 8 is 64

In Python, these values are easy to calculate using the ** operator, which is used to calculate exponents. The same examples that I used above are calculated using Python in the following code block.

2**2 #Returns 4
4**2 #Returns 16
8**2 #Returns 64

The square root function is almost like a backwards version of the square. It is the number that when multiplied against itself yields the square value. A few examples (using the same mathematical relationships from before) are below:

- The square root of 4 is 2, since 2 times 2 is 4
- The square root of 16 is 4, since 4 times 4 is 16
- The square root of 64 is 8, since 8 times 8 is 64

These square roots were fairly easy to determine since they were small enough integers to be included in our elementary school times tables. For larger numbers, it can be very difficult to calculate square roots.
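Since squaring undoes a square root, the ** operator shown above doubles as a quick consistency check: squaring a candidate root must return the original number. (Using a 0.5 exponent to take the root is a preview of the technique the tutorial covers in a later section.)

```python
# A square root, squared again, returns the original number.
root = 64 ** 0.5    # raising to the power 0.5 computes a square root
print(root)         # 8.0
print(root * root)  # 64.0
```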
Fortunately, the Python square root function exists to make our life easy here. We’ll learn about the Python square root function in the next section.

Return to the Table of Contents

The Python Square Root Function

Python comes with a built-in math module that contains many useful functions. One of these functions is sqrt, which allows us to calculate square roots. To use this function, you must first import the math module into your application with the following command:

import math

Now that the math module has been imported, we can call the sqrt function from the math module using the dot operator. As an example, here’s how you would use the sqrt function to compute the square root of 4:

math.sqrt(4) #Returns 2.0

The sqrt function works for numbers of any size, which is helpful. Here is an example of sqrt applied to a very large number:

math.sqrt(68564654987654321654984.3215)

Here is the output:

261848534438.62222

Depending on the purpose of your Python program, you may want to import exclusively the sqrt function and not the entire math module. Here is how you modify your module import:

from math import sqrt #instead of 'import math'

Since you did not import the entire math module, you do not need to call the sqrt function from the math module using the dot operator. Instead, you can call the sqrt function directly, like this:

sqrt(100)

Here is the output:

10.0

Return to the Table of Contents

Calculating the Square Root of Every Element in a Python List

Let’s say you had a Python list containing numbers, and you wanted to calculate the square root of every element in the list.

my_list = [1, 4, 9, 16]

Can you use the sqrt function to do this? Let’s try it:

from math import sqrt

my_list = [1, 4, 9, 16]

sqrt(my_list)

Unfortunately, this will return a TypeError that looks like this:

TypeError: must be real number, not list

It is clear that the sqrt function is not designed to work with lists.
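Since sqrt only accepts a single real number, the usual workaround is to apply it element-wise; as an aside to the loop-based approach the tutorial walks through, a list comprehension is the most compact way to express that:

```python
from math import sqrt

my_list = [1, 4, 9, 16]

# Apply sqrt to each element individually instead of to the whole list.
roots = [sqrt(x) for x in my_list]
print(roots)  # [1.0, 2.0, 3.0, 4.0]
```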
It is still possible to calculate the square root of every element in a list. To do this, we will need to use a loop. Here’s what this looks like:

from math import sqrt

my_list = [1, 4, 9, 16]

i = 0

while i < len(my_list):
    my_list[i] = sqrt(my_list[i])
    i += 1

If you were to print out the my_list list now, you would see that its values have each had the sqrt function applied to them.

[1.0, 2.0, 3.0, 4.0]

Return to the Table of Contents

How To Calculate Square Root Without the Python Square Root Function

It is possible to calculate Python square roots without using the sqrt function. This is because the square root function is the same as raising a number to the power of 0.5. We have already seen that the ** operator allows us to calculate exponents in Python. Here is how you could calculate the square root of 100 without using the sqrt function:

100**0.5 #Returns 10.0

To go back to our earlier example of computing the square root of the elements within a Python list, here is how you could refactor this code to avoid using the sqrt function:

my_list = [1, 4, 9, 16]

i = 0

while i < len(my_list):
    my_list[i] = my_list[i]**0.5
    i += 1

my_list

Return to the Table of Contents

Final Thoughts

In this tutorial, you learned how to use sqrt, the Python square root function. I also explained the basics of square roots from a mathematical perspective and showed you how to calculate square roots in Python without the math module. I hope you enjoyed this tutorial. If you have any ideas or suggestions for future content, please email me!
https://nickmccullum.com/python-square-root-function/
Steps to reproduce:

var s = "a huge, huge, huge string...";
s = s.substring(0, 5);

Expected results: s takes five bytes of memory, plus some overhead.

Actual results: s takes a huge, huge, huge amount of memory.

Unfortunately, most String functions use substring() or no-ops internally: concatenating with an empty string, slice(), match(), search(), replace() with no match, split(), substr(), substring(), toString(), trim(), valueOf().

My workaround is:

function unleakString(s) {
  return (' ' + s).substr(1);
}

But it's not satisfying, because it breaks an abstraction and forces me to think about memory allocation. Perhaps there should be some special logic for handling substring() when the output string length is less than, say, 0.125 times the input string length? That's not perfect, but in my opinion, the pros of this solution outweigh the cons. This crops up constantly when scraping HTML.

Another approach not needing any kind of ad-hoc copy heuristic is to teach the garbage collector about projection functions. This is a well-known technique described e.g. in. I briefly glanced at the paper. It's not as easy:

- replacing a sliced string by a sequential string with the same content usually uses up more memory
- in return, releasing the backing store of the sliced string frees up memory

It's really hard to tell whether flattening the sliced string is worthwhile or not, especially since we don't have a good way to track what strings refer to the same backing store. We had an attempt to flatten sliced strings when GC happens, two years ago, but it was rejected due to the fact that it could cause an explosion on GC.
While a general solution might involve quite some work, solving easier cases might be relatively straightforward, e.g. when a single sliced string is the only reason for keeping another string alive. We are not the first ones encountering this kind of problem, so there should be a variety of solutions already out there. Let's keep this issue open. Sven, assigning to you since you suggested to keep the issue open :) For an easy way to repro, see and Guys, people are starting to propose hacks as workaround for this bug :( This is affecting the most popular JavaScript library for drawing 3D graphics on the web. Per the above links, could this please be given some more thought? Thanks.
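As an aside for readers more familiar with Python, the same retention pattern (and the same copy-to-unleak fix) can be reproduced with buffer views; this is only an analogy to the sliced-string behavior discussed above, not V8 code:

```python
# A tiny view pins a large backing buffer -- the Python analogue of a
# sliced string keeping its parent string's backing store alive.
big = bytearray(10_000_000)   # ~10 MB backing store
small = memoryview(big)[:5]   # a 5-byte view into it
del big                       # the 10 MB buffer is still pinned by the view

# The moral equivalent of unleakString(): materialize a real 5-byte copy,
# dropping the last reference to the view (and thus to the big buffer).
small = bytes(small)
print(len(small))  # 5 -- and the large buffer can now be reclaimed
```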
https://bugs.chromium.org/p/v8/issues/detail?id=2869
The Widget Construction Kit

Fredrik Lundh

The Widget Construction Kit (WCK) is an extension API that allows you to implement all sorts of custom widgets, in pure Python. Creating a new widget can be as simple as:

from WCK import Widget

class HelloWidget(Widget):
    def ui_handle_repair(self, draw, x0, y0, x1, y1):
        font = self.ui_font("black", "times")
        draw.text((0, 0), "hello, world!", font)

The Tkinter 3000 implementation of the WCK is designed to work with the existing Tkinter layer, as well as the upcoming Tkinter 3000 interface layer. The WCK is based on PythonWare’s uiToolkit’s extension API, and is designed to let you run new widgets under other toolkits as well. (For example, the effnews RSS reader for Windows uses a WCK implementation built on top of Windows’ native API.)

For more information, see this page.
http://www.effbot.org/zone/wck-index.htm
A package to do everything from getting tweets to pre-processing

Project description

Tweetl

By using Tweetl, you can simplify the steps from getting tweets to pre-processing them. If you don't have a Twitter API key, you can get it here.

This package helps you to:

- get tweets with the target name and any keywords.
- pre-process them with the following steps:
  - remove hashtags, URLs, pictographs, mentions, image strings and RT.
  - unify characters (uppercase to lowercase, halfwidth forms to fullwidth forms).
  - replace numbers with zero.
  - remove duplicates (because they might be RT.)

Installation

pip install Tweetl

Usage

Getting Tweets

Create an instance of the 'GetTweet' class.

import Tweetl

# your api keys
consumer_api_key = "xxxxxxxxx"
consumer_api_secret_key = "xxxxxxxxx"
access_token = "xxxxxxxxx"
access_token_secret = "xxxxxxxxx"

# create an instance
tweet_getter = Tweetl.GetTweet(
    consumer_api_key,
    consumer_api_secret_key,
    access_token,
    access_token_secret
)

With target name

You can collect tweets of the target if you use the 'get_tweets_target' method and set the target's name without the '@'. It returns the collected tweets as a DataFrame, and you can specify the number of tweets.

# get 1000 tweets of @Deepblue_ts
df_target = tweet_getter.get_tweets_target("Deepblue_ts", 1000)
df_target.head()

With any keywords

You can also get tweets about any keywords if you use the 'get_tweets_keyword' method and set any one. And you can specify the number of tweets.

# get 1000 tweets about 'deep learning'
df_keyword = tweet_getter.get_tweets_keyword("deep learning", 1000)

Cleansing Tweets

Create an instance of the 'CleansingTweets' class. Using the 'cleansing_df' method, you can pre-process tweets. You can select the columns that you want to cleanse; the default is only the text column.
# create an instance
tweet_cleanser = Tweetl.CleansingTweets()

cols = ["text", "user_description"]
df_clean = tweet_cleanser.cleansing_df(df_keyword, subset_cols=cols)

Author

License

This software is released under the MIT License, see LICENSE.

Project details

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/Tweetl/
/* Operating system specific defines to be used when targeting GCC for some
   generic System V Release 4 system.
   Copyright (C) 1991, 1994, 1995, 1996, 1997, 1998, 1999, 2000, 2001
   Free Software Foundation, Inc.
   Contributed by Ron Guilmette (rfg@monkeys.

   To use this file, make up a line like that in config.gcc:

	tm_file="$tm_file elfos.h svr4.h MACHINE/svr4.h"

   where MACHINE is replaced by the name of the basic hardware that you
   are targeting for.  Then, in the file MACHINE/svr4.h, put any really
   system-specific defines (or overrides of defines) which you find that
   you need.  */

/* Define a symbol indicating that we are using svr4.h.  */
#define USING_SVR4_H

/* Cpp, assembler, linker, library, and startfile spec's.  */

/* This defines which switch letters take arguments.  On svr4, most of
   the normal cases (defined in gcc.c) apply, and we also have -h* and
   -z* options (for the linker).  Note however that there is no such
   thing as a -T option for svr4.  */

#undef SWITCH_TAKES_ARG
#define SWITCH_TAKES_ARG(CHAR) \
  (DEFAULT_SWITCH_TAKES_ARG (CHAR) \
   || (CHAR) == 'h' \
   || (CHAR) == 'x' \
   || (CHAR) == 'z')

/* This defines which multi-letter switches take arguments.  On svr4,
   there are no such switches except those implemented by GCC itself.  */

#define WORD_SWITCH_TAKES_ARG(STR) \
 (DEFAULT_WORD_SWITCH_TAKES_ARG (STR) \
  && strcmp (STR, "Tdata") && strcmp (STR, "Ttext") \
  && strcmp (STR, "Tbss"))

/* Provide an ASM_SPEC appropriate for svr4.  Here we try to support as
   many of the specialized svr4 assembler options as seems reasonable,
   given that there are certain options which we can't (or shouldn't)
   support directly due to the fact that they conflict with other options
   for other svr4 tools (e.g. ld) or with other options for GCC itself.
   For example, we don't support the -o (output file) or -R (remove
   input file) options because GCC already handles these things.  We
   also don't support the -m (run m4) option for the assembler because
   that conflicts with the -m (produce load map) option of the svr4
   linker.  We do however allow passing arbitrary options to the svr4
   assembler via the -Wa, option.

   Note that gcc doesn't allow a space to follow -Y in a -Ym,* or -Yd,*
   option.

   The svr4 assembler wants '-' on the command line if it's expected to
   read its stdin.  */

#undef ASM_SPEC
#define ASM_SPEC \
  "%{v:-V} %{Qy:} %{!Qn:-Qy} %{n} %{T} %{Ym,*} %{Yd,*} %{Wa,*:%*}"

#define AS_NEEDS_DASH_FOR_PIPED_INPUT

/* Under svr4, the normal location of the `ld' and `as' programs is the
   /usr/ccs/bin directory.  */

/* APPLE LOCAL begin mainline 4.3 2006-12-13 CROSS_DIRECTORY_STRUCTURE 4697325 */
#ifndef CROSS_DIRECTORY_STRUCTURE
/* APPLE LOCAL end mainline 4.3 2006-12-13 CROSS_DIRECTORY_STRUCTURE 4697325 */
#undef MD_EXEC_PREFIX
#define MD_EXEC_PREFIX "/usr/ccs/bin/"
#endif

/* Under svr4, the normal location of the various *crt*.o files is the
   /usr/ccs/lib directory.  */

/* APPLE LOCAL begin mainline 4.3 2006-12-13 CROSS_DIRECTORY_STRUCTURE 4697325 */
#ifndef CROSS_DIRECTORY_STRUCTURE
/* APPLE LOCAL end mainline 4.3 2006-12-13 CROSS_DIRECTORY_STRUCTURE 4697325 */
#undef MD_STARTFILE_PREFIX
#define MD_STARTFILE_PREFIX "/usr/ccs/lib/"
#endif

/* Provide a LIB_SPEC appropriate for svr4.  Here we tack on the default
   standard C library (unless we are building a shared library).  */

#undef LIB_SPEC
#define LIB_SPEC "%{!shared:%{!symbolic:-lc}}"

/* Provide an ENDFILE_SPEC appropriate for svr4.  Here we tack on our own
   magical crtend.o file (see crtstuff.c) which provides part of the
   support for getting C++ file-scope static object constructed before
   entering `main', followed by the normal svr3/svr4 "finalizer" file,
   which is either `gcrtn.o' or `crtn.o'.  */

#undef ENDFILE_SPEC
#define ENDFILE_SPEC "crtend.o%s %{pg:gcrtn.o%s}%{!pg:crtn.o%s}"

/* Provide a LINK_SPEC appropriate for svr4.  Here we provide support
   for the special GCC options -static, -shared, and -symbolic which
   allow us to link things in one of these three modes by applying the
   appropriate combinations of options at link-time.

   We also provide support here for as many of the other svr4 linker
   options as seems reasonable, given that some of them conflict with
   options for other svr4 tools (e.g. the assembler).  In particular,
   we do support the -z*, -V, -b, -t, -Qy, -Qn, and -YP* options here,
   and the -e*, -l*, -o*, -r, -s, -u*, and -L* options are directly
   supported by gcc.c itself.  We don't directly support the -m
   (generate load map) option because that conflicts with the -m (run
   m4) option of the svr4 assembler.  We also don't directly support
   the svr4 linker's -I* or -M* options because these conflict with
   existing GCC options.  We do however allow passing arbitrary options
   to the svr4 linker via the -Wl, option, in gcc.c.  We don't support
   the svr4 linker's -a option at all because it is totally useless and
   because it conflicts with GCC's own -a option.

   Note that gcc doesn't allow a space to follow -Y in a -YP,* option.

   When the -G link option is used (-shared and -symbolic) a final link
   is not being done.  */

#undef	LINK_SPEC
/* APPLE LOCAL begin mainline 4.3 2006-12-13 CROSS_DIRECTORY_STRUCTURE 4697325 */
#ifdef CROSS_DIRECTORY_STRUCTURE
/* APPLE LOCAL end mainline 4.3 2006-12-13 CROSS_DIRECTORY_STRUCTURE 4697325 */
#define LINK_SPEC "%{h*} %{v:-V} \
		   %{b} \
		   %{static:-dn -Bstatic} \
		   %{shared:-G -dy -z text} \
		   %{symbolic:-Bsymbolic -G -dy -z text} \
		   %{G:-G} \
		   %{YP,*} \
		   %{Qy:} %{!Qn:-Qy}"
#else
#define LINK_SPEC "%{h*} %{v:-V} \
		   %{b} \
		   %{static:-dn -Bstatic} \
		   %{shared:-G -dy -z text} \
		   %{symbolic:-Bsymbolic -G -dy -z text} \
		   %{G:-G} \
		   %{YP,*} \
		   %{!YP,*:%{p:-Y P,/usr/ccs/lib/libp:/usr/lib/libp:/usr/ccs/lib:/usr/lib} \
		    %{!p:-Y P,/usr/ccs/lib:/usr/lib}} \
		   %{Qy:} %{!Qn:-Qy}"
#endif

/* Gcc automatically adds in one of the files /usr/ccs/lib/values-Xc.o
   or /usr/ccs/lib/values-Xa.o for each final link step (depending
   upon the other gcc options selected, such as -ansi).  These files
   each contain one (initialized) copy of a special variable called
   `_lib_version'.  Each one of these files has `_lib_version'
   initialized to a different (enum) value.

   The SVR4 library routines query the value of `_lib_version' at run to
   decide how they should behave.  Specifically, they decide (based upon
   the value of `_lib_version') if they will act in a strictly ANSI
   conforming manner or not.  */

#undef	STARTFILE_SPEC
#define STARTFILE_SPEC "%{!shared: \
			 %{!symbolic: \
			  %{pg:gcrt1.o%s}%{!pg:%{p:mcrt1.o%s}%{!p:crt1.o%s}}}}\
			%{pg:gcrti.o%s}%{!pg:crti.o%s} \
			%{ansi:values-Xc.o%s} \
			%{!ansi:values-Xa.o%s} \
			crtbegin.o%s"

/* The numbers used to denote specific machine registers in the System V
   Release 4 DWARF debugging information are quite likely to be totally
   different from the numbers used in BSD stabs debugging information
   for the same kind of target machine.
Thus, we undefine the macro DBX_REGISTER_NUMBER here as an extra inducement to get people to provide proper machine-specific definitions of DBX_REGISTER_NUMBER (which is also used to provide DWARF registers numbers in dwarfout.c) in their tm.h files which include this file. */ #undef DBX_REGISTER_NUMBER /* Define the actual types of some ANSI-mandated types. (These definitions should work for most SVR4 systems). */ #undef SIZE_TYPE #define SIZE_TYPE "unsigned int" #undef PTRDIFF_TYPE #define PTRDIFF_TYPE "int" #undef WCHAR_TYPE #define WCHAR_TYPE "long int" #undef WCHAR_TYPE_SIZE #define WCHAR_TYPE_SIZE BITS_PER_WORD #define TARGET_POSIX_IO
http://opensource.apple.com/source/llvmgcc42/llvmgcc42-2118/gcc/config/svr4.h
aio_cancel()

Cancel an asynchronous I/O operation

Synopsis:

#include <aio.h>

int aio_cancel( int fd,
                struct aiocb * aiocbptr );

Arguments:
- fd - The file descriptor for which you want to cancel asynchronous I/O requests.
- aiocbptr - A pointer to an asynchronous I/O control block of type aiocb for the request you want to cancel, or NULL if you want to cancel all requests against the file descriptor.

Library:
libc

Use the -l c option to qcc to link against this library. This library is usually included automatically.

Description:
The aio_cancel() function attempts to cancel the asynchronous I/O request associated with the control block that aiocbptr points to, or all asynchronous I/O requests outstanding against fd if aiocbptr is NULL.

Returns:
- AIO_CANCELED - The requested operation(s) were canceled.
- AIO_NOTCANCELED - At least one of the requested operations couldn't be canceled because it was in progress. A return value of AIO_NOTCANCELED doesn't indicate the state of any other operations referenced in the call to aio_cancel(). To determine their status, use aio_error().
- AIO_ALLDONE - All of the operations have already been completed.
- -1 - An error occurred; errno is set.

Errors:
- EBADF - The fd argument isn't a valid file descriptor.
- EINVAL - The control block that aiocbptr points to isn't valid (i.e. it hasn't yet been used in any call to aio_read() or aio_write()).

Classification:

Caveats:
The first time you call an aio_* function, a thread pool is created, making your process multithreaded if it isn't already. The thread pool isn't destroyed until your process ends.
https://developer.blackberry.com/playbook/native/reference/com.qnx.doc.neutrino.lib_ref/topic/a/aio_cancel.html
This blog is about Integration Gateway in SAP Mobile Platform 3.0 (SMP). In the previous tutorials, we’ve learned how to deal with REST services as data source for an OData service in SMP. These tutorials were based on the QUERY operation. Now, since SMP SP06, the READ operation is supported as well. This blog explains how to do the required steps. It is a follow-up of the tutorial for the QUERY operation, so I’ll skip some of the basic explanations.

Why is this blog necessary?

For the READ operation, you have to know about two specialties:
- How to define the relative URL in Eclipse
- How to write the script to meet the expected structure of the response.

Please find attached the source files used in this tutorial.

Prerequisites

I expect that you’ve gone through my previous tutorial explaining REST data source – QUERY operation – XML payload. Prerequisites are the same:
- Eclipse with SAP Mobile Platform Tools installed
- SMP SP06
- Basic knowledge about OData provisioning using the Integration Gateway component of SMP

Preparation

REST Service

For this tutorial, we’ll be using the following REST service as example service for the data source: Please find some info at The service is free and doesn’t require any registration. The reason for choosing this service is that it supports READ of single entries. The URL for reading a single entry is e.g. where the 1 is the identifier of a customer entry.

Destination

In your SMP server, create a Destination that points to this REST service. Please refer to the screenshot for the settings. After saving the destination, try the Test Connection button.

Select the odatasrv file and from the context menu choose “Set Data Source”. In the wizard, you have to first select the Data Source as REST and then select the “Read” operation for the EntitySet “Customers”. Note that the “Read” option is only available since SMP SP06. Click Next, specify the following relative URL and press “Finish”.
/sqlrest/CUSTOMER/{ID}

How are we supposed to build this relative URL? Here we have to understand: as we already know, any REST service is free to use its own patterns for providing resources. Since no READ operation is explicitly specified for REST services (like for OData), any REST service out there in the internet can implement it in its own preferred way. This means that our SMP server cannot deduce the URI for finding a single entry from the service URL (of the REST service) alone. So it’s us who have to provide the information (about how the REST service does the READ) to the SMP server.

Let’s have a look at our example REST service. How is the READ implemented there?
- There’s the segment “CUSTOMER”, which provides the list of entries
- Afterwards a slash
- Finally a number, which is the value of the property “ID” of the thomas-bayer service

Translated into a generic expression: CUSTOMER/<value-of-ID-field> (value-of-ID-field means the 1 or 2 or 42 that we can enter at the end of the thomas-bayer URL)

And this is what SMP expects from us: a generic expression that provides the key property between curly braces. So we have to provide the full URI of the READ, but with a variable instead of the concrete value. But the “full URI” has to be a “relative URL”, because it is concatenated with the base URL that is defined in the Destination on the server.

Custom Code

After finishing the binding wizard, we’re ready to create the script. Within the Project Explorer View, select the “Read” node and choose Define Custom Code from the context menu. Choose Groovy as language. Now we have to implement the processResponseData() method. What do we have to do?

Background

The REST service that we’re using supports reading of a single entry.
The URL returns the following response. Note that the browser doesn’t display the real full payload (the xml header is hidden), so we have to press “view source” to get the real response payload.

In our custom code script, we’re supposed to provide the data in a specific structure, as we’ve learned in the previous tutorials. In the case of READ, the expected structure looks as follows:

<EntitySet>
  <Entity>
    <Property1>"value of property1"</Property1>
    <Property2>"value of property2"</Property2>
    <Property3>"value of property3"</Property3>
  </Entity>
</EntitySet>

As you can see, the structure is the same as in the QUERY scenario. Note that for reasons of consistency, the structure contains the EntitySet, although the payload of the READ operation doesn’t contain it. In our custom code script, we have to modify the structure of the REST response to match the structure that is expected by the SMP server.

Intermediate step

For those of you who like to do an intermediate step: before we start to generically modify the response of the REST service in order to meet the expected structure, we can provide a hard-coded response (as we did in the first REST tutorial). Such an implementation looks as follows:

def Message processResponseData(message) {
    message.setBody("<Customers>" +
        "<Customer>" +
        "<ID>111</ID>" +
        "<FIRSTNAME>Jack</FIRSTNAME>" +
        "</Customer>" +
        "</Customers>");
    return message;
}

After generate, deploy, configure and run the service, you should see the result in the browser. Check the result.

Modify the structure of the REST response to match the structure that is expected by SMP. Fortunately, in our example the REST service payload is very similar to the expected structure.
In detail, what we have to do is the following:
- Remove the undesired xml header: <?xml version="1.0"?>
- Remove the undesired attributes of the entry tag <CUSTOMER xmlns:xlink="">
- Rename the entry name to match our EntityType name </Customer>
- Surround the entry with opening and closing tags of our EntitySet name <Customers>
- Add the opening Customer tag <Customer>

This is the simple implementation of the method:

def Message processResponseData(message) {
    def body = message.getBody().toString();
    // remove the xml header
    body = body.replaceAll("<\\?xml[^>]*\\?>", "");
    // drop the attributes of the entry tag and rename it to the EntityType name
    body = body.replaceAll("<CUSTOMER[^>]*>", "<Customer>");
    body = body.replace("</CUSTOMER>", "</Customer>");
    // surround the entry with the EntitySet tags
    message.setBody("<Customers>" + body + "</Customers>");
    return message;
}

Result

After doing generate & deploy in our Eclipse project, change to your browser and open the Management Cockpit. Assign the destination that we’ve created in the preparation step. Invoke our OData service. Note that we haven’t implemented the QUERY, so we have to directly invoke a URL for READ of a single entry, e.g. <your_service_name>/Customers('42')

The result is:

Summary

There should be 2 basic learnings from this tutorial:
- How to provide the relative URL when specifying the REST data source
- How to write the custom code to meet the expected structure for the READ operation in Integration Gateway

Those of you who have followed my blog explaining how to use an xml parser for creating the expected structure may ask: Is there a follow-up blog explaining how to do this for the READ operation? Well, I’m not intending to create such a blog, since the procedure is exactly the same.

Links

The prerequisite tutorial that does the same as this blog, but for the QUERY operation:

Hi Carlos, thanks for all your detailed tutorials. I followed your guide but I continue to receive the following error.
Exchange[
  Id               ID-v10393-54688-1426679425095-11-7
  ExchangePattern  InOnly
  Headers          {breadcrumbid=ID-v10393-54688-1426679425095-11-6, CamelRedelivered=false, CamelRedeliveryCounter=0, contenttype=application/xml;charset=utf-8, DestinationName=THOMASSERVERREST, functionname=processRequestData, odatacontext=org.apache.olingo.odata2.core.ODataContextImpl@38c52ed2, odatamethod=GET_ENTRY, relativeuri=/sqlrest/CUSTOMER/42, scriptfile=Customers_REST_Read.groovy, scriptfiletype=groovy, uriinfo=com.sap.gateway.core.ip.odata.OGWUriInfo@1984fa1c}
  BodyType         null
  Body             [Body is null]
]

Confusingly this relates to processRequestData, which is never used within your tutorial. Any idea?
br Martin

Hi Martin, thanks for following my tutorials, I’m glad that they’re used ;-) From the snippet, I cannot see what error is actually logged. It seems that the processResponse is not invoked, because the service roundtrip is interrupted while calling the backend REST service. I’ve checked the full URL and it is valid. So what else could be the case? Is your destination valid? Have you maybe coincidentally deleted the message object in the processRequestData method…? I mean the line return message;
Cheers, Carlos

Hi Carlos, thanks for your response. The destination is valid and I didn’t delete anything in the processRequestData method. This is my Customers_REST_Read.groovy:

import com.sap.gateway.ip.core.customdev.util.Message;

/** Function processRequestData will be called before the request uri is
    handed over to the REST service. User can manipulate the request uri here. */
def Message processRequestData(message) {
    //You can modify the RelativeUri Header for custom requirements
    return message;
}

/** Function processResponseData will be called after the response data is
    received from the REST service. User can manipulate the response data here. */; }

Hi Martin, I’ve tried the tutorial again and it works for me. It even works with your code snippet.
Now I’ve tried the following: in my processRequest method, I’ve done return null; This leads to an error and I get the Exchange entry in the log. In my case, it looks slightly different. E.g. I don’t have the header “DestinationName” and also a header name is different. So maybe the reason is that you have a different/wrong SMP server version?
Kind Regards, Carlos

Hi Carlos, the server was in some kind of error state. I’ve reinstalled another sandbox server and it seems to be working right now. I’ll find a solution for the other server…
br martin

Hi Carlos, like you did in your previous tutorial for ‘Filter’, should we change the Request…?? The REST service understands the below URL: but will the OData request format support the above REST service? Instead of CUSTOMER/1 we are calling /Customers(’42’).
Regards, Vishnu 🙂

Hi Vishnu, I understand your confusion 😕 and you’re right to ask ;-) You’re again right: actually, you have to tell exactly this to the FWK, because it could never know. You tell it already at design time, in Eclipse, when you do the binding. You declare that /sqlrest/CUSTOMER/{ID} means /Customers(’42’).
Cheers, Carlos

Perfecto….. wrking…!!!!!!! 🙂

I like this kind of good news…!… 🙂
https://blogs.sap.com/2015/03/06/integration-gateway-understanding-rest-data-source-7-read-xml/
Creating a module in Sage

I am new to Sage, being used to Python, and I am having some trouble adapting. In particular, I am trying to create a 'module' in the Python sense, i.e. some set of classes that I can pull in with 'import'. When I did this in Python in the past, I would create a directory (say /my_module/ in my home) and include in that directory a file '__init__.sage' with a line

__all__ = ['Submodule1', 'Submodule2']

Then I would have files 'Submodule1' and 'Submodule2'. In the file 'Submodule1' I would define 'Class1', and then from a file 'run.py' in home I would write

import my_module
from my_module.Submodule1 import Class1

and then I would be able to write c = Class1() to create an object.

I find that this is not working in Sage, and the only thing I seem to be able to do is to write in run.py

load("my_module/Submodule1.sage")

for each class I want to load. This has many disadvantages, for instance hiding classes from potential users so that they only exist internally.

Any suggestions on how to create and import modules in Sage? What is the Sage way?
https://ask.sagemath.org/question/46994/creating-a-module-in-sage/
Difference between revisions of "User talk:Pointone"

Revision as of 16:38, 20 July 2012

You reverted my changes to Beginners'_Guide/Extra#Message_bus, saying "rc.d script is preferred method of interacting with daemons)". Please correct me if I'm wrong, but I believe the correct call for the rc.d script is '/etc/rc.d/dbus start', not the current '/etc/rc.d start dbus'. Wake 21:09, 3 February 2012 (EST)

- Since May 2011, Arch Linux includes the /usr/sbin/rc.d script that can be used to interface with so-called "rc.d" scripts in /etc/rc.d. Running /etc/rc.d/dbus start is functionally equivalent to running /usr/sbin/rc.d start dbus. See also: initscripts-update-1, initscripts-update-2, rc.d @ initscripts.git.
- In the name of consistency, all daemon operations on the wiki should use the rc.d executable.

Thank you for pointing that out, I can't believe I missed the Romanian ArchWiki. I just went straight to the English version. Sorry for my late reply, I had some problems with my Internet supplier and couldn't check my e-mails. --Thras0 02:02, 14 November 2011 (EST)

Thanks for your reminder about blank page deletion. warren 13:00, 18 May 2011 (TW, Asia)

- 1 2 reports
- 2 Section-specific request templates
- 3 Padding
- 4 To wiki or not to wiki?
- 5 Category:Laptops
- 6 Please unlock
- 7 Please restore "Maven" page
- 8 Thanks for the pointer
- 9 Thanks for the tips
- 10 Do we accept usernames like this one?
- 11 Templates: love them or hate them?
- 12 "pacman -Syu package"
- 13 Deleted Cmd Template?
- 14 Start Help:Style? (the Style Guide saga)
- 15 i18n category naming
- 16 GNOME 3 talk page?
- 17 Sorry
- 18 PolicyKit article

2 reports

- What do you think about the usefulness of this template? (discussion)
- I just wanted to point out that the Official Arch Linux Install Guide article still uses an obsolete i18n_entry template.
- I've had a number of users contact me regarding the i18n links on the Official Arch Linux Install Guide.
It is maintained in AIF git; I will not edit the wiki page directly. However, I've just finally gotten around to submitting a patch to the arch-releng mailing list.

Please unlock

I noticed you were the last to edit AUR Trusted User Guidelines. Are you able to unlock, or perhaps grant me privileges to edit it? Thanks. - louipc 15:00, 12 March 2010 (EST)

Please restore "Maven" page

Your reason for deleting was no content; (pacman -S maven). That's not entirely true, though. I wanted to have the "jre" package and Sun's java packages, not openjdk, in which case I needed to install maven by hand, which is what the wiki page you deleted explained how to do.

- Page content was:
  Maven is a software project build automation tool.
  == Installing ==
  === With openjdk ===
  pacman -S maven
  === With Sun's jdk ===
  See this guide:
- Users should not be recommended to manually install software; let pacman track it for you.
- The page needs more content. Perhaps a more detailed description of what Maven can do.
- The maven package from [community] works fine for me with the jre package (Sun) from [community] -- are you sure manual installation is necessary?
- Please categorize new pages.
- Feel free to recreate the page with these notes in mind.

pointone, is there some mechanism for reporting abuse? Jheena789 added link spam to a page. Perhaps the account should be deleted? I have already undone the change, but wanted to be proactive. Thanks! -- Ryooichi 06:30, 23 July 2010 (EDT)

Thanks for the pointer

Thanks for making me aware of my biased edits. I've fixed them so they are neutral and simply inform users of different possibilities so that they can make the best decision based on what they like. Trusktr 02:18, 31 August

Also, not sure if I am supposed to reply to your note here or on my talk page..
AskApache 01:39, 2 November 2010 (EDT) Any editing suggestions Hey if you want to you could go through my wiki contributions real quick and help me to fix my bad wiki habits.. Or any other suggestions about what I should do differently would be a great help to me.. just if you notice anything, thanks. AskApache 17:49, 16 November 2010 (EST) Do we accept usernames like this one? Karol 09:18, 8 April 2011 (EDT) Templates: love them or hate them? +1 for Template:Cli hate -- Karol 09:24, 8 April 2011 (EDT) - What do you propose? Removing the template altogether? What about Template:Command? -- Kynikos 13:40, 8 April 2011 (EDT) - Why do we have Template:Cli anyway? To get white letters on a dark background? You can add formatting and blinking cursor inside the code parts using the old way (prefixing the code line with a space): [user@host ~]$ {{Cursor}} - Template:Cli is not rendered using <pre> tags so it won't behave as expected with user-defined rules like pre { font-size:1.3em !important; }. - Template:Command has some sense to it so it can stay. I view it as the commandline equivalent of Template:File. - If we decide that e.g. a template is OK, do we need to revert edits such as this one? It's not a personal wiki, so you can't remove things just because you don't like it. -- Karol 14:23, 8 April 2011 (EDT) - Well, with respect to that specific article, it seems he put that template in first, and he himself then removed it, so in my opinion it's completely forgivable. -- Kynikos 15:55, 8 April 2011 (EDT) - About Cli, I happen to quite like it, and I would have many other positive or negative opinions on other templates, but I think that users could never come to a point of agreement over aesthetic matters: either a coherent style is imposed by the admins, or we will have to get definitively used to style variations among the articles (which is not necessarily a bad thing, to some extent). 
-- Kynikos 15:55, 8 April 2011 (EDT) - I am not personally a fan of Template:Cli but as you say, style is a tricky subject. In cases like this, I defer to the primary/most active maintainer of a page to determine its style. -- pointone 18:21, 8 April 2011 (EDT) - I'm not using the wiki a lot so it doesn't overly bother me and I can always use a firefox boomkarklet to zap the colors but the only benefit I can see of using it over the one-space-at-the-beginning-of-the-line-style is you can indent it easily: - {{Cli|foo}} -- Karol 16:16, 8 April 2011 (EDT) - See? There's always a good side to everything! ^^ In my opinion, the problem is not directly about templates, but about font/box style: for what I have seen, there are 2 main "things" that still need to be styled in a much more coherent way: cli code and file text. Cli code has <pre>, Cli, Command and Codeline (4 different styles) while file text has <pre>, File and I would also add Filename in this list. I think that these 2 groups should be immediately identifiable and distinguishable at first sight: for cli code my proposal would be a similar style to that at stackoverflow.com (both block and inline, this would require 2 templates, but with the same background and font style); for file text I would try a yellowish background for the file name (both inline and block) and very light grey for the content; if filename were to be omitted, only the grey part should be showed (this would be 2/3 templates: inline for filename, block for filename + content and possibly another block for content only). Man, all this would be easier to do than to explain XD Maybe I will reword it better tomorrow -- Kynikos 18:15, 8 April 2011 (EDT) Sorry, I'm doing this very rapidly, anyway these examples should explain my idea better than 10^9 words: This is an example of inline style for code and file text: bla bla bla bla bla codeline args bla bla bla bla /path/to/filename bla bla bla bla bla. 
This is a block element for cli code:

$ code code
dbe56v ne4g5fe4 eg45e
xdrtgd g5edeht gddgdr

This is a block element for file text with file name:

/path/to/filename
1 sf5se de4g5ed4
2 de56e e5gt
3 d45ge d45her5
...

...and this is file text without file name:

1 sf5se de4g5ed4
2 de56e e5gt
3 d45ge d45her5
...

Please, note that these are just examples, take them as bare brainstorming ideas, nothing even close to be definitive, ok? ;) -- Kynikos 06:24, 9 April 2011 (EDT)

Sorry, I forgot to mention that this idea follows my previous post, and would make sense only if a precise style were defined by the admins and had to be respected through all the articles -- Kynikos 06:30, 9 April 2011 (EDT)

- I'm not sure I like the colors but as I know how to disable them, I don't mind (I know those are only examples). Just make sure it doesn't look too much like the Box BLUE template: -- Karol 08:08, 9 April 2011 (EDT)
These were all created at the beginning of 2008. I see the need for only three distinct forms of code formatting: inline code, block elements,and "combo" boxes like templates command, file, and kernel. - Would you be opposed to the consolidation of code formatting templates? -- pointone 12:07, 13 April 2011 (EDT) - Of course you have the last word, but yes, I would oppose unless "combo" boxes are "consolidated" too, otherwise that would look incoherent to me: this solution could lead to a very KISS approach, keeping only 3 templates, but only if, as I've said, combo boxes are involved too. - My approach was completely different, not differentiating formatting depending on *where* (inline/block) or *how* (simple/combo) a piece of code appears in the article, but basing all the (visual) differences mainly on *what* that code is: that's why I was asking for consolidating block code formattings with their respective inline counterparts. This solution would prioritize merging styles before templates. - Anyway, whether you choose officially one solution or the other, it will be a nice improvement to the current situation. -- Kynikos 13:11, 13 April 2011 (EDT) - Both solutions are valid, and, as Kynikos wrote, both would be a step forward from the current situation. I don't mind Pointone's version but I don't think we have to limit ourselves that much as some people might like visually differentiating editing a file from typing a command. - I would like to see some order and consistency in templates and general (visual?) style but I'm not against either of the proposed solutions. -- Karol 13:52, 13 April 2011 (EDT) - @Pointone: seeing your attempt in the Sandbox I try to report again the example of stackoverflow.com, which originally inspired me the idea of inline/block code style similarity; by the way, I don't know if this behaviour can be enabled in the wiki, but there inline code is automatically formatted by enclosing it in backticks (`inline code`). 
-- Kynikos 20:12, 13 April 2011 (EDT) - I've taken a look to your attempts, and tried something by myself too: User:Kynikos/Random formatting ideas. To be honest, I find your version of the "combo" box still incoherent, because it changes the default color for code, which should be that light blue/cyan. I've come up with some alternatives. -- Kynikos 13:31, 14 April 2011 (EDT) "pacman -Syu package" This could be a result of not having an official package installation syntax after the introduction of "pacman -Syu package": pepole could start forgetting what "y" and "u" even mean, and always repeat them unnecessarily... Also, in the previous section, pacman -Syu is used separately from the package: is it ok? -- Kynikos 13:40, 8 April 2011 (EDT) - I suppose a basic style guide is in order. Along the same lines, I'd also like for wiki editors to stop assuming yaourt as the de facto AUR helper and perhaps something regarding template use/etiquette. (Continues musing...) Any thoughts? -- pointone 20:45, 8 April 2011 (EDT) - You already know my thoughts on "pacman -Syu": keep it separate from "pacman -S package", even at the cost of writing to "first update the system with pacman -Syu" at the beginning of each article. - About AUR packages, I would force the use of makepkg and pacman -U on all wiki articles (ok, I'm using yaourt too, but users should be aware that the U in AUR means unsupported). - About templates read the previous discussion. -- Kynikos 06:36, 9 April 2011 (EDT) Deleted Cmd Template? Hey I went to look at the cmd template we made and it was deleted, I would really like to be able to recover the source from that template, is there any way you can undelete it or send me the source code from it? Also, I still could really use a template like that and the cli and command templates don't cut it for reasons listed on the deleted Cmd templates talk page. I'd also like to get the source code from the examples I added to the talk. 
AskApache 09:55, 9 April 2011 (EDT) - Recovered. -- pointone 13:06, 10 April 2011 (EDT) - Since this is practically a duplicate template, couldn't you just <nowiki> the template code (thus making it a normal page and not polluting the templates namespace) and invite him to use the Template:Sandbox page? -- Kynikos 12:30, 11 April 2011 (EDT) Start Help:Style? (the Style Guide saga) (this is a continuation of the various discussions on a possible style guide) What if we started contributing to a Help:Style page? At the moment we could mark it as unofficial at the top, but we could find agreements there on article styles and once you think it can go official we can start applying it to the wiki. -- Kynikos 05:01, 2 May 2011 (EDT) - Please, do! I feel like this is something that could be combined with Help:Reading which deals primarily with style, as well. -- pointone 20:43, 2 May 2011 (EDT) i18n category naming Both English category names and localized category names are using in my language's(SimpChinese) wiki. Is it better to use "English Title (Language)" as artical naming? --Cuihao 05:34, 2 May 2011 (EDT) - Short answer: Yes. I responded to your forum thread in more detail. -- pointone 20:44, 2 May 2011 (EDT) GNOME 3 talk page? What happened to GNOME 3 talk page, which now should be in Talk:GNOME? -- Kynikos 04:55, 11 May 2011 (EDT) Sorry I have not read the page cited, now translated to facilitate it. Help:I18n (Português) PolicyKit article I have put in some work on the PolicyKit article which you flagged as needing expansion back in april 2011. I am not sure if I should go ahead and remove the flagging (or if it can reasonably be said to no longer need it?) or ask you to do it? Thanks. Madchine (talk) 16:38, 20 July 2012 (UTC)
https://wiki.archlinux.org/index.php?title=User_talk:Pointone&diff=prev&oldid=213773
Search results for "quitting the job"

- I wrote my resignation letter yesterday, all’s good. Today, bossman walks in: “I’ve got some great news, all our developers are getting a raise” Me: *well shit*
- !rant Thinking about quitting my job and opening a bar named "foo" where the walls have a tapestry of random foo-bar code examples. (Easy conversation starter for programmers)
- I'm about to send a message to the supervisor that will terminate my job. I'm quitting my job. And that's... so exciting! Wish me luck yo
- Today I finished my last day at my customer and in the end my main company after complaining several times. I give them a nice exit email as follow: Title: [302 - 404 - 503] I'm out :-)
- Quitting job because of Java and legacy corporate OSGI codebase. Being junior developer I'm just done with no documentation, terrible team support and non existent code review. After 18 months I can't justify staying any longer. Never had luck with Java and I guess some things just stay the same. Joined only because of Javascript part, just to be thrown into fullstack position. Stayed way longer because of COVID. Good old simple PHP I loved and foolishly left because of money.
- got the job offer XD it's not a big pay increase from what I was making before, but honestly I'd have taken a pay cut to get out of my current fucking job. Hell, I was one more overly dramatic angry email away from quitting on the spot and going to work as a stock boy in some walmart or something.
- Just joined a company 1 week ago and I was tasked to build a cryptocurrency bot in trading view using pinescript. The problem here is that I have zero knowledge in trading, charts, indicators etc and Pinescript is such a miserable language and I am so bored of this.
On top of that, nothing here makes sense. Tried learning trading on my own and it is simply boring and I don't understand many things here such as RSI, ATR etc . Sometimes I feel like quitting this job because I feel that I cannot deliver and at the same time I am afraid that I am quitting too soon before even giving a try.22 - - - My first rant. Woohoo! Honestly I do the whole shebang ussualy depending on what the needs are from network to servers to coding because for some reason nobody has any technical experience where I work. I just started app development for a gamedev startup and I am in sheer awe of the amount of transpiling/compiling etc that needs to be done for an multiplatform app for iOS and android with js(x)/typescript, html, css. I remember when I could just write some spaghetti code to make it working by following a couple of tutorials. Then refractoring and testing it for a couple of hours and be done with it. push it into production. Now I am lost having to learn OOP, functional programming, reactjs, react native, express, webpack, mongodb, babel, and the list goes on and on... Why not just make a new backend that does all of that in another language which supports all of that. I have no formal education in programming/coding and the last time I learned JS it was just some if else, switches and simple dom manipulation. I just want to get to coding a freakin' game but I have to learn JSX for the front and typescript on the backend. I am this close to going back to ye ol' lamp stack and quitting this job. 😥5 - Started a new job on Monday. STILL DON'T HAVE ACCESS TO THE FUCKING SERVERS I NEED TO ANYTHING. Holy fucking shit I'm annoyed. Fuck you corporate bullshit. I already feel like quitting.3 - Hey guys, so i got my first job, but there's this stupid problem there that i am having...there's this guy who makes fun of everybody and there are other two guys who laugh at his every joke whenever he makes fun of someone. 
He made fun of me too a few times, fun of my age, fun of my nose, fun of certain things i said, and those other guys laugh , and this is really frustrating and annoying. I am thinking of quitting..but i am not sure...should i quit for such a small reason? I dont like such people...i dont know what to do...i dont wanna complain to the HR for such a small thing and create more drama...kindly tell me what to do...i really get sad when he indirectly mocks me because of my age. I am a bit old, 31...and the others are in their twenties...please help, thanks32 - I'm getting fired because while, searching for a new job, the hr call my boss to ask him why i was quitting (he didn't know yet - - - it's been a while since my last rant and coming back after so long made me realize how much I missed here. at some point i realized that the career I wanted and my current situation wouldn't match, I decided to go in real hard, I moved into the dreaded backend development (you can guess, node and mongodb), I isolated myself from almost everyone and everything, cleared out my mobile games, social media and for almost two months I wanted something stable(might not be job ready but it had to be reasonable). I have come to love backend development so much, the joy of not having anything to do with css. dad fought me, mum cried, probably thought I was slipping into some deep end, quitting school in my second year of studying food science(still dont know how I accepted that course lol) to start afresh didn't help matters. really hard decisions, made money on some little freelancing gigs, wasnt constant, I needed something stable and that was a job and a degree to get me one. nothing special, just some regular hustler hoping his passion will pay him, I have always loved what i do but I need something to keep me going.5 - I..5 - Just started a new job as a software developer, even though I basically applied as an embedded software developer. 
I knew from the interview there was gonna be alot of legacy / high level stuff and they were pushing me away from embedded with the promise I could do it 'later on'. Finally started and it turns out there's a shit tonne of legacy Python code for their non-existent test framework that's basically tied directly into a Qt GUI app and I'm doing shit that nobody else wants to do. Can't see myself wanting to do this for anywhere more than 2-3 months. Should I just bail now? Seems a bit dodgy if I leave having only worked there for a week? Job actually pays really well though. Plan was to take an extended vacation around July/August, so quitting this early and then telling another employer later on that I need to bail for summer seems wrong also, not to mention COVID sucks and is making everything hell now.12 - Job BS that made me consider quitting? Huh. so timely. With my previous employer, it was the whole "we're doing Agile and sprints and all the things" with "finish the project in six weeks plus here are some more requirements" garbage. Plus my tech lead always let the business roll over her and add unplanned requirements during a sprint without adjusting the deadlines set by the project managers. In summary: a fuck-all combination of Waterfall deadlines, Kanban tickets and Scrum timeboxes. At my current employer, it's our business partners who're a bunch of douchebags that don't plan for anything except making sure their bonuses stay intact. Recently they terminated support for a third-party product that literally drives 99% of their web application then says to us "Hey, we need to build our own replacement for the vendor product using an entirely new stack. You have 3 months or our clients will get pissed." Oh, and these business partners keep raising new issues without any documentary basis except "this doesn't feel right" when they test our in-progress work. So helpful <sarcasm /> On the bright side, I'm getting paid whether or not this project fails, so... 
me
- Off work at 5:30. In about 10 minutes I'm winning the lottery, then I'm quitting my job and buying the least technical business ever... A brothel.
- RANT Soooo close to quitting my job in the social field.. and then re-start in the IT field... Switching shifts and such high illness rate are making me exhausted...
- I’ve left a bad job a month or so ago. Now I have less long story rants about the constant bullshit. You win some you lose some I guess
- Not mentoring per say... But I've had some colleagues that took quitting the job to another level, which can be just as inspiring as a good mentor
- ... need 750++'s to get my avatar a pair of slippers that I got for free after quitting my job for which the shoes came free. #include "irony.much";
- Make SaaS and quitting my job for live from the passive incoming. Create and invest in a outsourcing development company, for repeat the cycle, but this time i'll be the b0ss
- Been studying for the last 4 months, and was hoping to hold onto my job until I signed on the dotted line for a new job, but I just can't take this job anymore. Does anyone have experience quitting their job to interview/find a new job? My only concern is that I have rent to pay, but I have some reserved funds to get through this time I believe.
https://devrant.com/search?term=quitting+the+job
The underlying idea of a DataFrame is based on spreadsheets. We can see the data structure of a DataFrame as tabular and spreadsheet-like. A DataFrame logically corresponds to a "sheet" of an Excel document. A DataFrame has both a row and a column index. Like a spreadsheet or Excel sheet, a DataFrame object contains an ordered collection of columns. Each column consists of a unique data type, but different columns can have different types, e.g. the first column may consist of integers, while the second one consists of boolean values and so on.

There is a close connection between the DataFrames and the Series of Pandas. A DataFrame can be seen as a concatenation of Series, each Series having the same index, i.e. the index of the DataFrame. We will demonstrate this in the following example.

We define the following three Series:

import pandas as pd

years = range(2014, 2018)
shop1 = pd.Series([2409.14, 2941.01, 3496.83, 3119.55], index=years)
shop2 = pd.Series([1203.45, 3441.62, 3007.83, 3619.53], index=years)
shop3 = pd.Series([3412.12, 3491.16, 3457.19, 1963.10], index=years)

What happens, if we concatenate these "shop" Series? Pandas provides a concat function for this purpose:

pd.concat([shop1, shop2, shop3])

2014    2409.14
2015    2941.01
2016    3496.83
2017    3119.55
2014    1203.45
2015    3441.62
2016    3007.83
2017    3619.53
2014    3412.12
2015    3491.16
2016    3457.19
2017    1963.10
dtype: float64

This result is not what we have intended or expected. The reason is that concat used 0 as the default for the axis parameter.
Let's do it with "axis=1":

shops_df = pd.concat([shop1, shop2, shop3], axis=1)
shops_df

Let's do some fine sanding by giving names to the columns:

cities = ["Zürich", "Winterthur", "Freiburg"]
shops_df.columns = cities
print(shops_df)

# alternative way: give names to series:
shop1.name = "Zürich"
shop2.name = "Winterthur"
shop3.name = "Freiburg"
print("------")
shops_df2 = pd.concat([shop1, shop2, shop3], axis=1)
print(shops_df2)

       Zürich  Winterthur  Freiburg
2014  2409.14     1203.45   3412.12
2015  2941.01     3441.62   3491.16
2016  3496.83     3007.83   3457.19
2017  3119.55     3619.53   1963.10
------
       Zürich  Winterthur  Freiburg
2014  2409.14     1203.45   3412.12
2015  2941.01     3441.62   3491.16
2016  3496.83     3007.83   3457.19
2017  3119.55     3619.53   1963.10

This was nice, but what kind of data type is our result?

print(type(shops_df))

<class 'pandas.core.frame.DataFrame'>

This means, we can arrange or concat Series into DataFrames!

cities = {"name": ["London", "Berlin", "Madrid", "Rome", "Paris", "Vienna",
                   "Bucharest", "Hamburg", "Budapest", "Warsaw", "Barcelona",
                   "Munich", "Milan"],
          "population": [8615246, 3562166, 3165235, 2874038, 2273305, 1805681,
                         1803425, 1760433, 1754000, 1740119, 1602386,
                         1493900, 1350680],
          "country": ["England", "Germany", "Spain", "Italy", "France",
                      "Austria", "Romania", "Germany", "Hungary", "Poland",
                      "Spain", "Germany", "Italy"]}

city_frame = pd.DataFrame(cities)
city_frame

city_frame.columns.values

array(['name', 'population', 'country'], dtype=object)

ordinals = ["first", "second", "third", "fourth", "fifth", "sixth",
            "seventh", "eighth", "ninth", "tenth", "eleventh", "twelfth",
            "thirteenth"]

city_frame = pd.DataFrame(cities, index=ordinals)
city_frame
Dictionaries are not ordered, as you have seen in our chapter on Dictionaries in our Python tutorial, so we cannot know in advance what the ordering of our columns will be:

city_frame = pd.DataFrame(cities, columns=["name", "country", "population"])
city_frame

We change both the column order and the ordering of the index with the function reindex with the following code:

city_frame.reindex(index=[0, 2, 4, 6, 8, 10, 12, 1, 3, 5, 7, 9, 11],
                   columns=['country', 'name', 'population'])

Now, we want to rename our columns. For this purpose, we will use the DataFrame method 'rename'. This method supports two calling conventions:
- (index=index_mapper, columns=columns_mapper, ...)
- (mapper, axis={'index', 'columns'}, ...)

We will rename the columns of our DataFrame into Turkish names in the following example. We set the parameter inplace to True so that our DataFrame will be changed instead of returning a new DataFrame; if inplace is set to False, which is the default, a new DataFrame is returned!

city_frame.rename(columns={"name": "Soyadı", "country": "Ülke", "population": "Nüfus"},
                  inplace=True)
city_frame

city_frame = pd.DataFrame(cities, columns=["name", "population"], index=cities["country"])
city_frame

Alternatively, we can change an existing DataFrame. We can use the method set_index to turn a column into an index. "set_index" does not work in-place; it returns a new data frame with the chosen column as the index:

city_frame = pd.DataFrame(cities)
city_frame2 = city_frame.set_index("country")
print(city_frame2)
no new object will be created:

city_frame = pd.DataFrame(cities)
city_frame.set_index("country", inplace=True)
print(city_frame)

city_frame = pd.DataFrame(cities, columns=("name", "population"), index=cities["country"])

print(city_frame.loc["Germany"])

            name  population
Germany   Berlin     3562166
Germany  Hamburg     1760433
Germany   Munich     1493900

print(city_frame.loc[["Germany", "France"]])

            name  population
Germany   Berlin     3562166
Germany  Hamburg     1760433
Germany   Munich     1493900
France     Paris     2273305

print(city_frame.loc[city_frame.population>2000000])

           name  population
England  London     8615246
Germany  Berlin     3562166
Spain    Madrid     3165235
Italy      Rome     2874038
France    Paris     2273305

print(city_frame.sum())

name          LondonBerlinMadridRomeParisViennaBucharestHamb...
population                                             33800614
dtype: object

city_frame["population"].sum()

33800614

We can use "cumsum" to calculate the cumulative sum:

x = city_frame["population"].cumsum()
print(x)

England     8615246
Germany    12177412
Spain      15342647
Italy      18216685
France     20489990
Austria    22295671
Romania    24099096
Germany    25859529
Hungary    27613529
Poland     29353648
Spain      30956034
Germany    32449934
Italy      33800614
Name: population, dtype: int64

city_frame["population"] = x
print(city_frame)

                name  population
England       London     8615246
Germany       Berlin    12177412
Spain         Madrid    15342647
Italy           Rome    18216685
France         Paris    20489990
Austria       Vienna    22295671
Romania    Bucharest    24099096
Germany      Hamburg    25859529
Hungary     Budapest    27613529
Poland        Warsaw    29353648
Spain      Barcelona    30956034
Germany       Munich    32449934
Italy          Milan    33800614

Instead of replacing the values of the population column with the cumulative sum, we want to add the cumulative population sum as a new column with the name "cum_population".

city_frame = pd.DataFrame(cities,
                          columns=["country", "population", "cum_population"],
                          index=cities["name"])
city_frame
We will assign now the cumulative sums to this column: city_frame["cum_population"] = city_frame["population"].cumsum() city_frame We can also include a column name which is not contained in the dictionary, when we create the DataFrame from the dictionary. In this case, all the values of this column will be set to NaN: city_frame = pd.DataFrame(cities, columns=["country", "area", "population"], index=cities["name"]) print(city_frame) country area population London England NaN NaN 1760433 Budapest Hungary NaN 1754000 Warsaw Poland NaN 1740119 Barcelona Spain NaN 1602386 Munich Germany NaN 1493900 Milan Italy NaN 1350680 # in a dictionary-like way: # as an attribute print(type(city_frame.population)) <class 'pandas.core.series.Series'> From the previous example, we can see that we have not copied the population column. "p" is a view on the data of city_frame. city_frame["area"] = 1572 print(city_frame) country area population London England 1572 8615246 Berlin Germany 1572 3562166 Madrid Spain 1572 3165235 Rome Italy 1572 2874038 Paris France 1572 2273305 Vienna Austria 1572 1805681 Bucharest Romania 1572 1803425 Hamburg Germany 1572 1760433 Budapest Hungary 1572 1754000 Warsaw Poland 1572 1740119 Barcelona Spain 1572 1602386 Munich Germany 1572 1493900 Milan Italy 1572 1350680 In this case, it will be definitely better to assign the exact area to the cities. The list with the area values needs to have the same length as the number of rows in our DataFrame. 
# area in square km:
area = [1572, 891.85, 605.77, 1285, 105.4, 414.6, 228, 755, 525.2, 517, 101.9, 310.4, 181.8]
# area could have been designed as a list, a Series, an array or a scalar
city_frame["area"] = area
print(city_frame)

           country     area  population
London     England  1572.00     8615246
Berlin     Germany   891.85     3562166
Madrid       Spain   605.77     3165235
Rome         Italy  1285.00     2874038
Paris       France   105.40     2273305
Vienna     Austria   414.60     1805681
Bucharest  Romania   228.00     1803425
Hamburg    Germany   755.00     1760433
Budapest   Hungary   525.20     1754000
Warsaw      Poland   517.00     1740119
Barcelona    Spain   101.90     1602386
Munich     Germany   310.40     1493900
Milan        Italy   181.80     1350680

city_frame = city_frame.sort_values(by="area", ascending=False)
print(city_frame)

           country     area  population
London     England  1572.00     8615246
Rome         Italy  1285.00     2874038
Berlin     Germany   891.85     3562166
Hamburg    Germany   755.00     1760433
Madrid       Spain   605.77     3165235
Budapest   Hungary   525.20     1754000
Warsaw      Poland   517.00     1740119
Vienna     Austria   414.60     1805681
Munich     Germany   310.40     1493900
Bucharest  Romania   228.00     1803425
Milan        Italy   181.80     1350680
Paris       France   105.40     2273305
Barcelona    Spain   101.90     1602386

Let's assume, we have only the areas of London, Hamburg and Milan. The areas are in a series with the correct indices. We can assign this series as well:

city_frame = pd.DataFrame(cities,
                          columns=["country", "area", "population"],
                          index=cities["name"])

some_areas = pd.Series([1572, 755, 181.8], index=['London', 'Hamburg', 'Milan'])

city_frame['area'] = some_areas
print(city_frame)

           country    area  population
London     England  1572.0     8615246
Berlin     Germany     NaN     3562166
Madrid       Spain     NaN     3165235
Rome         Italy     NaN     2874038
Paris       France     NaN     2273305
Vienna     Austria     NaN     1805681
Bucharest  Romania     NaN     1803425
Hamburg    Germany   755.0     1760433
Budapest   Hungary     NaN     1754000
Warsaw      Poland     NaN     1740119
Barcelona    Spain     NaN     1602386
Munich     Germany     NaN     1493900
Milan        Italy   181.8     1350680
For this purpose the DataFrame class provides a method "insert", which allows us to insert a column into a DataFrame at a specified location: insert(self, loc, column, value, allow_duplicates=False)` The parameters are specified as: city_frame = pd.DataFrame(cities, columns=["country", "population"], index=cities["name"]) idx = 1 city_frame.insert(loc=idx, column='area', value=area) city_frame growth = {"Switzerland": {"2010": 3.0, "2011": 1.8, "2012": 1.1, "2013": 1.9}, "Germany": {"2010": 4.1, "2011": 3.6, "2012": 0.4, "2013": 0.1}, "France": {"2010":2.0, "2011":2.1, "2012": 0.3, "2013": 0.3}, "Greece": {"2010":-5.4, "2011":-8.9, "2012":-6.6, "2013": -3.3}, "Italy": {"2010":1.7, "2011": 0.6, "2012":-2.3, "2013":-1.9} } growth_frame = pd.DataFrame(growth) growth_frame You like to have the years in the columns and the countries in the rows? No problem, you can transpose the data: growth_frame.T growth_frame = growth_frame.T growth_frame2 = growth_frame.reindex(["Switzerland", "Italy", "Germany", "Greece"]) print(growth_frame2) 2010 2011 2012 2013 Switzerland 3.0 1.8 1.1 1.9 Italy 1.7 0.6 -2.3 -1.9 Germany 4.1 3.6 0.4 0.1 Greece -5.4 -8.9 -6.6 -3.3 import numpy as np names = ['Frank', 'Eve', 'Stella', 'Guido', 'Lara'] index = ["January", "February", "March", "April", "May", "June", "July", "August", "September", "October", "November", "December"] df = pd.DataFrame((np.random.randn(12, 5)*1000).round(2), columns=names, index=index) df Next Chapter: Replacing Values in DataFrames and Series
https://python-course.eu/pandas_DataFrame.php
Summary

I've barely scratched the surface of what you can do with vectors and container classes. For example, you can create a vector that holds something other than integers. The type vector<string> would hold strings. You can also insert an item into a vector. To do so, you create an iterator, point the iterator to the position where you want to insert the item, and then call insert. Here's a quick example you can try; add this to the preceding example after the last push_back call:

vector<int>::iterator pos = storage.begin() + 2;
storage.insert(pos, 5);

But you can use more than just vectors. One handy container is the map container. A map is like a vector except that the indexes don't have to be integers. They can be some type that you specify when you create the map. Here's a quick example you can try. First, add this line to your includes:

#include <map>

And then try this:

map<string, int> mymap;
mymap["Jeff"] = 35;
mymap["Amy"] = 31;
cout << mymap["Amy"] << endl;

This creates a map that uses strings for its indexes, and it holds integers. Once you get the swing of containers, you'll find that anytime you need any kind of storage, you'll automatically think of them and skip the lower-level containers such as arrays altogether. Have fun!
http://www.informit.com/articles/article.aspx?p=102155&seqNum=3
After the following patches, the recipe doesn't work anymore:
- build: build everything from the root dir, use obj=$subdir
- build: introduce if_changed_deps

The first patch means that $(names) already contains $(path), and with the second one the .*.d files are replaced by .*.cmd files, which are much simpler to parse here.

Also replace the makefile programming by a much simpler shell command. This doesn't check anymore whether the source files exist, but that can be fixed by running `make clean`, and probably doesn't impact the calculation; `cloc` just complains that some files don't exist.

Signed-off-by: Anthony PERARD <anthony.perard@xxxxxxxxxx>
---
 xen/Makefile | 9 +--------
 1 file changed, 1 insertion(+), 8 deletions(-)

diff --git a/xen/Makefile b/xen/Makefile
index 36a64118007b..b09584e33f9c 100644
--- a/xen/Makefile
+++ b/xen/Makefile
@@ -490,14 +490,7 @@ _MAP:
 .PHONY: cloc
 cloc:
-	$(eval tmpfile := $(shell mktemp))
-	$(foreach f, $(shell find $(BASEDIR) -name *.o.d), \
-	  $(eval path := $(dir $(f))) \
-	  $(eval names := $(shell grep -o "[a-zA-Z0-9_/-]*\.[cS]" $(f))) \
-	  $(foreach sf, $(names), \
-	    $(shell if test -f $(path)/$(sf) ; then echo $(path)/$(sf) >> $(tmpfile); fi;)))
-	cloc --list-file=$(tmpfile)
-	rm $(tmpfile)
+	find . -name '*.o.cmd' -exec awk '/^source_/{print $$3;}' {} + | cloc --list-file=-

 endif #config-build

--
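As an illustrative aside (not part of the patch), the awk pipeline can be tried on a fabricated Kbuild-style .o.cmd file; the file name and contents below are made up, and the final "| cloc" step is omitted:

```shell
# Create a throwaway directory with one fake .<name>.o.cmd file.
demo=$(mktemp -d)
mkdir -p "$demo/arch"
cat > "$demo/arch/.traps.o.cmd" <<'EOF'
cmd_arch/traps.o := gcc -c -o arch/traps.o arch/traps.c
source_arch/traps.o := arch/traps.c
deps_arch/traps.o := arch/traps.h
EOF
# /^source_/ selects the source_* lines; $3 is the field after ":=",
# i.e. the path of the source file that was compiled.
find "$demo" -name '*.o.cmd' -exec awk '/^source_/{print $3;}' {} +
```

Running this prints arch/traps.c, which is exactly the list-of-files input that cloc consumes via --list-file=-.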
https://lists.xenproject.org/archives/html/xen-devel/2021-08/msg01035.html
TestSuite (see Figure B-10) is a class representing a collection of Tests. Since it implements Test, it can be run just like a TestCase. When run, a TestSuite runs all the Tests it contains. It may contain both TestCases and other TestSuites. A TestSuite can be constructed by giving it the class name of a TestCase. The TestSuite constructor uses reflection to find all methods in the TestCase having names starting with test. The code below adds all of BookTest's test methods to a TestSuite and runs it:

TestSuite test = new TestSuite( BookTest.class );
test.run( new TestResult( ) );

Tests also can be added to a TestSuite using the addTest( ) method.

public class TestSuite extends Object implements Test

- A constructor that creates an empty TestSuite.
- A constructor that creates an empty TestSuite with the given name.
- A constructor that takes a Class, uses reflection to find all methods with names starting with test, and adds them to the TestSuite as test methods.
- A constructor that creates a TestSuite with the given name and all test methods found in the Class, as described for the previous constructor.
- Adds a Test to the TestSuite.
- Adds the test methods from the Class to the TestSuite. Test methods are found using reflection.
- Returns the total number of test cases that will be run by this TestSuite. Test cases are counted by recursively calling countTestCases( ) for every Test in this TestSuite.
- Creates an instance of Class as a Test with the given name.
- Returns the name of the TestSuite.
- Gets a constructor for the given Class that takes a single String as its argument, or gets a constructor that takes no arguments.
- Runs the Tests in this TestSuite and collects the results in TestResult.
- Runs Test and collects the results in TestResult.
- Sets the name of the TestSuite.
- Returns the Test at the given index.
- Returns the number of Tests in this TestSuite.
- Returns the Tests as an Enumeration.
- Returns a string representation of this TestSuite.
- A private method to add a test method to this TestSuite.
- Returns the Throwable's stack trace as a string.
- Returns TRUE if Method has public access.
- A private method that returns TRUE if Method has no arguments, returns void, and has public access.
- Returns a Test that will fail and logs a warning message.
- The name of this TestSuite.
- The Tests contained by this TestSuite.
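The reflection-based discovery described above can be sketched in plain Java without JUnit; this is a simplified illustration of what the TestSuite(Class) constructor does internally (DiscoverDemo and BookTest are made-up names, and real JUnit does considerably more):

```java
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class DiscoverDemo {
    public static class BookTest {
        public void testTitle() { }
        public void testAuthor() { }
        public void helper() { }                    // ignored: name does not start with "test"
        public int testBroken(int x) { return x; }  // ignored: takes an argument, returns int
    }

    // Collect every public, zero-argument, void method whose name
    // starts with "test" -- the same filter the book describes.
    public static List<String> discover(Class<?> c) {
        List<String> names = new ArrayList<>();
        for (Method m : c.getMethods()) {
            if (m.getName().startsWith("test")
                    && m.getParameterCount() == 0
                    && m.getReturnType() == void.class) {
                names.add(m.getName());
            }
        }
        Collections.sort(names);
        return names;
    }

    public static void main(String[] args) {
        System.out.println(discover(BookTest.class)); // [testAuthor, testTitle]
    }
}
```

A real TestSuite would then instantiate the class once per discovered method and invoke them, collecting outcomes in a TestResult.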
https://flylib.com/books/en/1.104.1.81/1/
-- .Channels.Base where import Control.Concurrent.STM import Control.Monad import Control.Monad.Trans import Data.Unique (Unique) import Control.Concurrent.CHP.Base import Control.Concurrent.CHP.Event import Control.Concurrent.CHP.Poison -- | A reading channel-end type. -- -- See 'reader' to obtain one, and 'ReadableChannel' for how to use one. -- -- Eq instance added in version 1.1.1 newtype Chanin a = Chanin (STMChannel a) deriving Eq -- | A writing channel-end type. -- -- See 'writer' to obtain one, and 'WritableChannel' for how to use one. -- -- Eq instance added in version 1.1.1 newtype Chanout a = Chanout (STMChannel a) deriving Eq newtype STMChannel a = STMChan (Event, TVar (WithPoison (Maybe a, Maybe ()))) deriving Eq -- |} return (getEventUnique e, STMChan (e,c)) where getVal PoisonItem = Nothing getVal (NoPoison (x, _)) = x -- Some of this is defensive programming -- the writer should never be able -- to discover poison in the channel variable, for example consumeData :: TVar (WithPoison (Maybe a, Maybe ())) -> STM (WithPoison a) consumeData tv = do d <- readTVar tv case d of PoisonItem -> return PoisonItem NoPoison (Nothing, _) -> retry NoPoison (Just x, a) -> do writeTVar tv $ NoPoison (Nothing, a) return $ NoPoison x sendData :: TVar (WithPoison (Maybe a, Maybe ())) -> a -> STM (WithPoison ()) sendData tv x = do y <- readTVar tv case y of PoisonItem -> return PoisonItem NoPoison (Just _, _) -> error "CHP: Found data while sending data" NoPoison (Nothing, a) -> do writeTVar tv $ NoPoison (Just x, a) return $ NoPoison () consumeAck :: TVar (WithPoison (Maybe a, Maybe ())) -> STM (WithPoison ()) consumeAck tv = do d <- readTVar tv case d of PoisonItem -> return PoisonItem NoPoison (_, Nothing) -> retry NoPoison (x, Just _) -> do writeTVar tv $ NoPoison (x, Nothing) return $ NoPoison () sendAck :: TVar (WithPoison (Maybe a, Maybe ())) -> STM (WithPoison ()) sendAck tv = do d <- readTVar tv case d of PoisonItem -> return PoisonItem NoPoison (_, Just _) -> 
error
http://hackage.haskell.org/package/chp-1.8.0/docs/src/Control-Concurrent-CHP-Channels-Base.html
Here are top C# interview questions and answers. These C# interview questions are for both beginners and professional C# developers. We will add more questions from time to time.

What is C#?

C#, with the help of the Visual Studio IDE, provides rapid application development. C# is a modern, object-oriented, simple, versatile, and performance-oriented programming language. C# is developed based on the best features and use cases of several programming languages including C++, Java, Pascal, and SmallTalk.

What is an object in C#?

The terms class and object describe the type of objects and the instances of classes, respectively. So, the act of creating an object is called instantiation. Using the blueprint analogy, a class is a blueprint, and an object is a building made from that blueprint.

What is Managed or Unmanaged Code?

Unmanaged Code
- Applications that are not under the control of the CLR are unmanaged.
- Unsafe or unmanaged code is a code block that uses a pointer variable.
- The unsafe modifier allows pointer usage in unmanaged code.

Managed Code
Managed code is code whose execution is managed by the Common Language Runtime (CLR). The runtime takes the managed code and compiles it into machine code; after that, the code is executed. The runtime, i.e. the CLR, provides automatic memory management, type safety, etc. Managed code is written in high-level languages that run on top of .NET, such as C# and F#. When code in one of these languages is compiled, machine code is not generated directly; instead, you get Intermediate Language code, which is compiled and executed by the runtime.

Can multiple catch blocks be executed?

In C#, you can use more than one catch block with a try block. Generally, multiple catch blocks are used to handle different types of exceptions, meaning each catch block handles a different type of exception.
If you use multiple catch blocks for the same type of exception, it will give you a compile-time error, because C# does not allow multiple catch blocks for the same type of exception. A catch block is always preceded by a try block. In general, the catch blocks are checked in the order in which they occur in the program. If the given type of exception matches the first catch block, the first catch block executes and the remaining catch blocks are ignored. And if the first catch block is not suitable for the exception type, the compiler searches for the next catch block.

try
{
}
catch (IOException ex1)
{
    // Code Block 1
}
catch (Exception ex2)
{
    // Code Block 2
}

In the above example, if the exception is an IOException then Code Block 1 will be executed; otherwise, for other exceptions, Code Block 2 will be executed.

What is Boxing and Unboxing in C#?

Boxing is the process of converting a value type to the type object or to any interface type implemented by this value type. When the common language runtime (CLR) boxes a value type, it wraps the value inside a System.Object instance and stores it on the managed heap. Unboxing extracts the value type from the object. Boxing is implicit; unboxing is explicit. The concept of boxing and unboxing underlies the C# unified view of the type system, in which a value of any type can be treated as an object.

// C# implementation to demonstrate
// the Unboxing
using System;
class GFG
{
    // Main Method
    static public void Main()
    {
        // assigned int value
        // 23 to num
        int num = 23;

        // boxing
        object obj = num;

        // unboxing
        int i = (int)obj;

        // Display result
        Console.WriteLine("Value of ob object is : " + obj);
        Console.WriteLine("Value of i is : " + i);
    }
}

What is the difference between public, static, and void?

Public, static and void come under different categories: public is an access specifier, static is a keyword, and void is a return type.
Let me explain them one by one.

Public: When a method, class, or class member is declared with the public access specifier, that member can be accessed from anywhere; there is no restriction on accessing a public member.
- If a class is declared public, we can access the class wherever we want.
- If a member of a class is public, we can access the member from anywhere without restriction, although access is still limited by the accessibility of the class itself if the class is not declared public.

Static: The static keyword indicates that a member belongs to the type itself rather than to an instance, so it refers to the same memory location throughout the application's lifetime. When a static variable is declared and assigned a value, a memory block is assigned to it at execution time. We can access static members without any instance, from anywhere in the application, and every access refers to the same memory location.

Void: The void keyword is used as a return type and states that the method returns nothing (do not confuse "nothing" with null, because null is a value).

What are Jagged Arrays?

A jagged array is an array whose elements are arrays, possibly with different dimensions and sizes. A jagged array is sometimes called an "array of arrays", and it stores arrays instead of values of a particular data type.

// Jagged array whose elements are single-dimensional arrays
int[][] jarray = new int[2][];

// Jagged array whose elements are two-dimensional arrays
int[][,] jarray1 = new int[3][,];

With jagged arrays, we can efficiently store many rows of varying lengths: no space is wasted, and any type of data, reference or value, can be used.

Can I declare properties in interface?

In C#, a class or a struct can implement one or more interfaces, and an interface is defined using the interface keyword.
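As a minimal sketch (the type names are invented for illustration, not taken from the article), an interface can declare a property signature, and the implementing class supplies the accessor:

```csharp
using System;

// An interface can declare properties: only the signature, no implementation.
interface IShape
{
    double Area { get; }
}

class Square : IShape
{
    private readonly double side;
    public Square(double side) { this.side = side; }

    // The implementing class provides the actual getter.
    public double Area => side * side;
}

class Program
{
    static void Main()
    {
        IShape s = new Square(3);
        Console.WriteLine(s.Area); // 9
    }
}
```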
Interfaces can contain methods, properties, indexers, and events as members, but an interface can only contain declarations, not implementations. Interfaces are contracts to be fulfilled by implementing classes. Hence they can consist of public methods, properties, and events (indexers are permitted too).

Variables in interfaces: no, an interface cannot contain fields (you can have variables in base classes, though). Properties in interfaces: yes, since they are paired methods under the hood. Members of an interface are implicitly public; you cannot specify access modifiers explicitly.

What is the use of getter & setter?

Getters and setters are used to effectively protect your data, particularly when creating classes. For each instance variable, a getter method returns its value while a setter method sets or updates its value. Getters and setters are also known as accessors and mutators, respectively. By convention, getters start with Get, followed by the variable name with its first letter capitalized, and setters start with Set, followed by the variable name with its first letter capitalized.

public static void Main(string[] args)
{
    Vehicle v1 = new Vehicle();
    v1.SetColor("Red");
    Console.WriteLine(v1.GetColor()); // Outputs "Red"
}

Getters and setters allow control over the values: you may validate the given value in the setter before actually setting it.

How can we restrict a class from being instantiated?
- We can declare the class as abstract, which prevents us from creating an object of the class.
- If you try to create an instance of a static class, the compiler also reports an error.
- If we declare a private or protected constructor, it also prevents us from creating an instance of the class from outside.

Define Constructors.

In C#, a constructor is a special method which is invoked automatically at the time of object creation. It is generally used to initialize the data members of the new object.
The constructor in C# has the same name as the class or struct. There are different types of constructors in C#:
- Default Constructor
- Parameterized Constructor
- Copy Constructor
- Static Constructor
- Private Constructor

If we create a class without any constructor, the compiler will automatically create a default constructor for that class, so there is always at least one constructor in every class. In C#, a class can contain more than one constructor with different types of arguments, and constructors never return anything, so we don't use any return type, not even void, when defining a constructor in a class.

public class User
{
    // Constructor
    public User()
    {
        // Your Custom Code
    }
}

What is the difference between ref & out parameters?

Both ref and out parameters are used to pass an argument by reference to a method, so changes made inside the method are visible to the caller. The difference is that a ref argument must be initialized before it is passed, while an out argument does not need to be initialized beforehand but must be assigned by the called method before it returns.

What is the difference between a struct and a class in C#?

A class is a user-defined blueprint or prototype from which objects are created. The key difference is that a class is a reference type (instances live on the managed heap and variables hold references), while a struct is a value type (instances are copied on assignment). Structs also do not support inheritance from other structs or classes, although they can implement interfaces.

What would happen when we define a private constructor?

The use of a private constructor is to serve singleton classes. Using a private constructor we can ensure that no more than one object can be created at a time. By providing a private constructor you prevent class instances from being created in any place other than this very class.

What is the use of 'using' statement in C#?

The using statement is used when you need to work with one or more disposable resources within a code segment. The using statement obtains one or more resources, executes the statements, and then releases (disposes of) the objects. It is widely used in database connectivity through C#.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace Using_Statement
{
    class check_using : IDisposable
    {
        public void Dispose()
        {
            Console.WriteLine("Executes Second");
        }
    }

    class Program
    {
        static void Main(string[] args)
        {
            using (check_using c = new check_using())
            {
                Console.WriteLine("Executes First");
            }
            Console.WriteLine("Executes Third");
            Console.ReadLine();
        }
    }
}

Output:
Executes First
Executes Second
Executes Third

What is the difference between Interface and Abstract Class in C#?

An abstract class can contain implementation (method bodies, fields, and constructors) alongside abstract members, whereas an interface contains only declarations. A class can inherit from only one abstract class but can implement multiple interfaces.

Can we use the "this" keyword within a static method?

No, because a static method does not need any object to be called, and the this keyword always points to the current object of a class. If there is no object, the keyword has no current object to point to, so we cannot use this here.

What is the difference between constants and read-only?

const refers to a constant variable whose value must be assigned only once, during declaration. When the const keyword is used in a declaration, the value remains constant and cannot be changed throughout the program. It is a reserved word specifying that a value must not be modified after compile time. A const is implicitly static, so it can be accessed with the class name using "ClassName.VariableName". Constant values are also called literals; they can be of any basic data type, such as an integer constant, a floating-point constant, or a string literal.

The readonly keyword is a modifier that can be used only on fields, not local variables. A readonly field can be initialized either at the time of declaration or inside a constructor of the same class, so the values can be different for different executions of the program depending on the constructor used. The readonly keyword specifies that an instance variable of an object is not modifiable, and any attempt to modify it after construction results in a compilation error.
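The contrast can be shown with a small sketch (the class and values are made up for the example): const must be assigned at declaration and is implicitly static, while readonly can be set in a constructor and so may differ per instance:

```csharp
using System;

class Circle
{
    // const: fixed at compile time, accessed via the class name.
    public const double Pi = 3.14159;

    // readonly: set at declaration or in a constructor, fixed afterwards.
    public readonly double Radius;

    public Circle(double radius)
    {
        Radius = radius; // allowed here; not allowed after construction
    }
}

class Program
{
    static void Main()
    {
        Console.WriteLine(Circle.Pi);            // 3.14159
        Console.WriteLine(new Circle(2).Radius); // 2
        // new Circle(2).Radius = 5;  // compile-time error
        // Circle.Pi = 3;             // compile-time error
    }
}
```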
The variable doesn’t become non-modifiable until after the execution. Variables are made readonly only to stop calling the code from accidentally modifying it after it’s constructed. What are sealed classes in C#?. Can we create instance of the class within the same class if we make constructor to private? It means, if we have a private constructor in a class then its objects can be instantiated within the class only. So in simpler words you can say, if the constructor is private then you will not be able to create its objects outside the class. What is Idisposable interface? The .NET classes implement the IDisposable interface, which provides a Dispose method to release unmanaged resources owned by the object instance. This is the standard pattern for releasing non-memory resources in .NET. .NET Framework defines a interface for types requiring a tear-down method: public interface IDisposable { void Dispose(); } Dispose() is primarily used for cleaning up resources, like unmanaged references. However, it can also be useful to force the disposing of other resources even though they are managed. Instead of waiting for the GC to eventually also clean up your database connection, you can make sure it’s done in your own Dispose() implementation. public void Dispose() { if (null != this.CurrentDatabaseConnection) { this.CurrentDatabaseConnection.Dispose(); this.CurrentDatabaseConnection = null; } } Will garbage collector call dispose method? The .Net Garbage Collector calls the Object.Finalize method of an object on garbage collection. By default this does nothing and must be overidden if you want to free additional resources. Dispose is NOT automatically called and must be explicity called if resources are to be released, such as within a ‘using’ or ‘try finally’ block What is enum in C#?’. - The enum is a set of named constant. - The value of enum constants starts from 0. Enum can have value of any valid numeric type. - String enum is not supported in C#. 
- Use of enums makes code more readable and manageable.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace DemoApplication
{
    class Program
    {
        enum Days { Sun, Mon, Tue, Wed, Thu, Fri, Sat };

        static void Main(string[] args)
        {
            Console.Write(Days.Sun);
            Console.ReadKey();
        }
    }
}

What is the difference between "continue" and "break" statements in C#?

Break (breaks the loop/switch): The break statement is used to terminate the current loop or the switch statement in which it appears.

Continue (skips the current iteration): The continue statement is not the same as the break statement. Break exits the loop/switch, whereas continue only skips the rest of the current iteration and does not exit the loop; it passes control to the next iteration of the enclosing while, do-while, for, or foreach statement in which it appears.

What is the difference between Array and ArrayList?

An array has a fixed size and stores elements of a single declared type, so access is type-safe. An ArrayList grows dynamically and stores its elements as object, so value types are boxed and casts are needed when reading elements back.

Who will call the garbage collector?

The garbage collector is invoked automatically by the runtime, but exactly when it runs varies depending on conditions such as memory pressure.

What are Properties in C#?

Properties in C# are class members that provide a flexible mechanism to read, write, or compute the values of private fields.
- Properties can validate data before allowing a change.
- Properties can transparently expose data on a class where that data is actually retrieved from some other source, such as a database.
- Properties can take an action when data is changed, such as raising an event or changing the value of other fields.

Does the garbage collector call the Dispose method directly?

The GC will NOT call the Dispose() method on the interface, but it will call the finalizer for your object.

What are extension methods in C#?

Extension methods enable you to add methods to existing types without creating a new derived type or modifying the original type. Click on this link for detailed information about extension methods: Extension Method

What is the difference between the dispose and finalize methods in C#?

Finalize:
- Finalize() belongs to the Object class.
- It is automatically called by the garbage collection mechanism when the object goes out of scope (usually at the end of the program).
- It is a slower method and not suitable for instantly disposing of objects.
- It is non-deterministic: it is uncertain when the garbage collector will call Finalize() to reclaim memory.

class employee
{
    // This is the destructor of the employee class.
    // The destructor is implicitly compiled to the Finalize method.
    ~employee()
    {
    }
}

Dispose:
- Dispose() belongs to the IDisposable interface.
- We have to manually write the code to implement it (user code); for example, if we have an employee class, we have to make it implement the IDisposable interface and write the code. We may have to suppress the Finalize method using the GC.SuppressFinalize() method.
- It is a faster method for instant disposal of objects.
- It is deterministic, as Dispose() is explicitly called by user code.

User interface controls, forms, and the SqlConnection class have built-in implementations of the Dispose method.

SqlConnection sqlcon = null;
try
{
    string constring = "Server=(local);Database=my; User Id=sa; Password=sa";
    sqlcon = new SqlConnection(constring);
    sqlcon.Open(); // here the connection is open
    // some code here which will be executed
}
catch
{
    // code that will be executed when an error occurs in the try block
}
finally
{
    if (sqlcon != null)
    {
        sqlcon.Close();   // close the connection
        sqlcon.Dispose(); // destroy the connection object
    }
}

You can also check this link: Difference between Finalize, Destructor and Dispose in C#

What are the differences between the System.String and System.Text.StringBuilder classes?

System.String is immutable: every modification creates a new string object. System.Text.StringBuilder is mutable, so repeated changes such as concatenation in a loop do not allocate a new object each time. You can also check this link: string and StringBuilder

What is a thread-safe collection in C#?

A thread-safe collection is a collection that can be used safely when multiple threads read from and write to it concurrently, such as the types in the System.Collections.Concurrent namespace.

What is thread synchronisation?

Thread synchronisation is a mechanism which ensures that two or more concurrent processes or threads do not simultaneously execute some particular section of the program, especially a critical section.
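The most common way to protect such a critical section in C# is the lock statement. A minimal sketch (the counter class is hypothetical, chosen only to demonstrate the idea):

```csharp
using System;
using System.Threading;

class Counter
{
    private readonly object gate = new object();
    private int count;

    public void Increment()
    {
        // Only one thread at a time may enter this critical section.
        lock (gate)
        {
            count++;
        }
    }

    public int Count
    {
        get { lock (gate) { return count; } }
    }
}

class Program
{
    static void Main()
    {
        var counter = new Counter();
        var threads = new Thread[4];
        for (int i = 0; i < threads.Length; i++)
        {
            threads[i] = new Thread(() =>
            {
                for (int j = 0; j < 100000; j++) counter.Increment();
            });
            threads[i].Start();
        }
        foreach (var t in threads) t.Join();

        // With the lock this is always 400000; without it, lost updates
        // (a race condition) would make the result unpredictable.
        Console.WriteLine(counter.Count);
    }
}
```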
In this technique, one thread executes the critical section of a program while the other threads wait until the first thread finishes execution. If a proper synchronisation mechanism is not applied, a race condition will occur.

Thread synchronisation deals with the following conditions:
- Deadlock
- Starvation
- Priority Inversion
- Busy Waiting

The following are some classic problems of synchronisation:
- The Producer-Consumer Problem
- The Readers-Writers Problem
- The Dining Philosophers Problem

These problems are used to test every newly proposed synchronisation technique.

What are delegates in C# and the uses of delegates?

A delegate in C# is similar to a function pointer in C or C++. Using a delegate allows the developer to encapsulate a reference to a method inside a delegate object. Delegates are basically used in these situations:
- They are used to represent or refer to one or more functions.
- They can be used to define callback methods.
- In order to consume a delegate, we need to create an object of the delegate.

The syntax of a delegate is:

[modifier] delegate [returntype] [delegatename] ([parameterlist]);

modifier: It defines the access of the delegate and is optional.
delegate: The keyword used to define the delegate.
returntype: The type of value returned by the methods which the delegate will call. It can be void. A method must have the same return type as the delegate.
delegatename: The user-defined name or identifier for the delegate.
parameterlist: The parameters required by the methods when called through the delegate.

Please follow this link for more detail about delegates: What are Delegates and what are the uses of the Delegates

Can a private virtual method be overridden?

You can't even declare private virtual methods.
The only time it would make any sense at all would be if you had:

public class Outer
{
    private virtual void Foo() {}

    public class Nested : Outer
    {
        private override void Foo() {}
    }
}

... since that is the only scenario in which a type has access to its parent's private members. However, this is still prohibited:

Test.cs(7,31): error CS0621: 'Outer.Nested.Foo()': virtual or abstract members cannot be private
Test.cs(3,26): error CS0621: 'Outer.Foo()': virtual or abstract members cannot be private

What are partial classes?

When working on large projects, spreading a class over separate physical files enables multiple developers to work on it at the same time. Partial classes allow a single class to be split up across multiple physical files; all parts are combined when the application is compiled.

Entire class definition in one file (billing.cs):

public class Billing
{
    public bool Add() { return true; }
    public bool Edit() { return true; }
}

The same class split across multiple files:

billing_1.cs
public partial class Billing
{
    public bool Add() { return true; }
}

billing_2.cs
public partial class Billing
{
    public bool Edit() { return true; }
}

What are generics in C#.NET?

Generics allow you to define type-safe classes, methods, interfaces, and delegates with placeholder type parameters, such as List<T>, so the same code can work with different data types without casting. Please follow this link for more detail about generics: Generic in C#

What is IEnumerable<> in C#?

IEnumerable is an interface that defines one method, GetEnumerator, which returns an IEnumerator interface; this, in turn, allows read-only access to a collection. A collection that implements IEnumerable can be used with a foreach statement. IEnumerable enables you to iterate through the collection using a foreach loop, so if your intention is just that, IEnumerable can help you achieve it with minimum effort (implementation of only one method, GetEnumerator()). List, on the other hand, is a pre-implemented, type-safe collection class (using generics) available in the framework. It already has implementations of IList, ICollection, and IEnumerable. So functionally IEnumerable is a subset of List.
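As a small sketch (the type name is invented for illustration), implementing IEnumerable<T> with an iterator method is enough to make a type usable in foreach:

```csharp
using System;
using System.Collections;
using System.Collections.Generic;

// A type only needs GetEnumerator() to work with foreach.
class Countdown : IEnumerable<int>
{
    private readonly int start;
    public Countdown(int start) { this.start = start; }

    public IEnumerator<int> GetEnumerator()
    {
        for (int i = start; i > 0; i--)
            yield return i;
    }

    // The non-generic interface forwards to the generic one.
    IEnumerator IEnumerable.GetEnumerator() => GetEnumerator();
}

class Program
{
    static void Main()
    {
        foreach (int n in new Countdown(3))
            Console.Write(n + " "); // 3 2 1
    }
}
```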
Also, List is a concrete class, whereas IEnumerable is an interface that a collection type must implement.

Give me an example where internal is preferred.

What kinds of access specifiers are there in C#?

Please follow this link for more detail about access modifiers in C#.

What is the difference between late binding and early binding in C#?

Late binding means that the compiler does not know what kind of object a variable refers to, or what methods and properties it contains; you declare it as an object and discover its type and members at run time. Early binding, in contrast, is resolved at compile time.
- The application will run faster with early binding, since no boxing or unboxing is done.
- It is easier to write the code with early binding, since IntelliSense is populated automatically.
- There are fewer errors with early binding, since the syntax is checked at compile time.
- Late binding supports all kinds of versions, since everything is decided at run time.
- There is minimal impact on code from future enhancements if late binding is used.
- Performance will be better with early binding.

What are IEnumerable, IEnumerator, ICollection, IList, and IQueryable in C#?
Please refer to this article.

Difference between Convert.ToString and the .ToString() method

The basic difference between them is that Convert.ToString(variable) handles NULL values: even if the variable's value is null it returns an empty string, whereas variable.ToString() will throw a null reference exception. So, as a good coding practice, using Convert is always safe.

// Throws a null reference exception for Name.
string Name;
object i = null;
Name = i.ToString();

// Returns an empty string for Name and does not throw an exception.
string Name;
object i = null;
Name = Convert.ToString(i);

What are Filters and Attributes in ASP.NET MVC?
Please refer to this article.

LINQ Single vs SingleOrDefault vs First vs FirstOrDefault
ViewData vs ViewBag vs TempData vs Session
Please refer to this article.

What is the difference between Html.Partial and Html.RenderPartial in MVC?
Please refer to this article.

DataSet vs DataReader
Please refer to this article.

TempData.Keep() and TempData.Peek()

How to use a left join in Entity Framework

A left outer join is a join in which each element of the first collection is returned, regardless of whether it has any correlated elements in the second collection. It can be performed by calling the DefaultIfEmpty() method on the results of a group join.

var result = (from st in db.StudentMaster
              join cos in db.CourseMaster on st.CourseId equals cos.CourseId into studentdet
              from stu in studentdet.DefaultIfEmpty()
              select new { st.StudentName, CourseName = stu != null ? stu.CourseName : null });

Why is multiple inheritance not allowed in C#?

One of the problems with supporting multiple inheritance is the diamond problem. The diamond problem is an ambiguity that arises when two classes B and C inherit from A, and class D inherits from both B and C. If a method in D calls a method defined in A (and does not override the method), and B and C have overridden that method differently, then from which class does D inherit it: B or C? Please refer to this article for more details: Why multiple inheritance not allowed in C#?

Weak References in .NET

Weak references in .NET create references to large objects that are used infrequently, so that they can be reclaimed by the garbage collector if needed. Please refer to this article for more details.

Difference between Model and ViewModel in MVC
Please refer to this article.

Abstraction and Encapsulation in C#

Encapsulation is all about wrapping data, and abstraction is about hiding the implementation details.

Encapsulation:
- Wrapping up data members and methods together into a single unit (in other words, a class) is called encapsulation.
- Encapsulation is like enclosing in a capsule: the related operations and data of an object are enclosed within that object.
- Encapsulation means hiding the internal details of an object, in other words how an object works.
- Encapsulation prevents clients from seeing its inside view, where the behaviour of the abstraction is implemented.
- It is a technique used to protect the information in an object from other objects.
- Hide the data for security, such as making the variables private, and expose a property to access the private data, which will be public.
- Encapsulation is like playing music on a mobile phone: you use the feature without seeing the internal circuitry. That is the property of encapsulating members and functions.

Abstraction:
- It is "to represent the essential features without including the background details."
- It lets you focus on what the object does instead of how it does it.
- It provides a generalised view of your classes or objects by exposing only relevant information.
- It is the process of hiding the working style of an object and showing the information about the object in an understandable manner.

Please refer to this article for more details.

Difference between Hashtable and Dictionary in C#
Please refer to this article for more details.
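Since the Hashtable vs Dictionary answer defers to an external article, here is a minimal sketch of the practical difference: Hashtable stores keys and values as object (so value types are boxed and casts are needed), while Dictionary<TKey, TValue> is generic and type-safe:

```csharp
using System;
using System.Collections;
using System.Collections.Generic;

class Program
{
    static void Main()
    {
        // Hashtable: non-generic, values come back as object.
        var table = new Hashtable();
        table["one"] = 1;
        int fromTable = (int)table["one"];   // explicit cast required
        Console.WriteLine(fromTable);

        // Dictionary: generic and type-safe, no cast needed.
        var dict = new Dictionary<string, int>();
        dict["one"] = 1;
        Console.WriteLine(dict["one"]);

        // A Hashtable returns null for a missing key; a Dictionary
        // indexer throws, so TryGetValue is the usual lookup pattern.
        Console.WriteLine(table["missing"] == null);           // True
        Console.WriteLine(dict.TryGetValue("missing", out _)); // False
    }
}
```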
The QToolTip class provides tool tips (sometimes called balloon help) for any widget or rectangular part of a widget. More...

#include <qtooltip.h>

Inherits Qt.

List of all member functions.

The tip is a short, one-line text reminding the user of the widget's or rectangle's function. It is drawn immediately below the region, in a distinctive black-on-yellow combination. In Motif style, Qt's tool tips look much like Motif's but feel more like Windows 95 tool tips.

QToolTip switches to active mode when the user lets the mouse rest on a tip-equipped region for a second or so, and it remains in active mode until the user either clicks a mouse button, presses a key, lets the mouse rest for five seconds, or moves the mouse outside all tip-equipped regions.

A tip can also be supplied with a longer text for a QToolTipGroup, which typically relays it to a status bar. Assuming g is a QToolTipGroup * and is already connected to the appropriate status bar:

QToolTip::add( quitButton, "Leave the application", g,
               "Leave the application, without asking for confirmation" );
QToolTip::add( closeButton, "Close this window", g,
               "Close this window, without asking for confirmation" );

The above are one-liners and cover the vast majority of cases. The third and most general way to use QToolTip uses a pure virtual function to decide whether to pop up a tool tip. The tooltip/tooltip.cpp example demonstrates this too. This mode can be used to implement, e.g., tips for text that can move as the user scrolls.

To use QToolTip like this, you need to subclass QToolTip and reimplement maybeTip(). maybeTip() will be called when there's a chance that a tip should pop up. It must decide whether to show a tip, and possibly call add() with the rectangle the tip applies to, the tip's text, and optionally the QToolTipGroup details. The tip will disappear once the mouse moves outside the rectangle you supply, and it will not reappear; maybeTip() will be called again if the user lets the mouse rest within the same rectangle again. You can forcibly remove the tip by calling remove() with no arguments. This is handy if the widget scrolls.
Tooltips can be globally disabled using QToolTip::setEnabled(), or disabled in groups with QToolTipGroup::setEnabled().

See also QStatusBar, QWhatsThis, QToolTipGroup and GUI Design Handbook: Tool Tip.

Constructs a tool tip object. This is necessary only if you need tool tips on regions that can move within the widget (most often because the widget's contents can scroll). parent is the widget you want to add dynamic tool tips to and group (optional) is the tool tip group they should belong to. See also maybeTip().

[static] Adds a tool tip to a fixed rectangle within widget. text is the text shown in the tool tip. QToolTip makes a deep copy of this string.

[static] Adds a tool tip to an entire widget, and to tool tip group group. text is the text shown in the tool tip and longText is the text emitted from group. QToolTip makes deep copies of both strings. Normally, longText is shown in a status bar or similar.

[static] Adds a tool tip to widget. text is the text to be shown in the tool tip. QToolTip makes a deep copy of this string. This is the most common entry point to the QToolTip class; it is suitable for adding tool tips to buttons, check boxes, combo boxes and so on.

[static] Adds a tool tip to widget, and to tool tip group group. text is the text shown in the tool tip and longText is the text emitted from group. QToolTip makes deep copies of both strings. Normally, longText is shown in a status bar or similar.

[protected] Removes all tool tips for this tooltip's parent widget immediately.

[static] Returns whether tooltips are enabled globally. See also setEnabled().

[static] Returns the font common to all tool tips. See also setFont().

Returns the tool tip group this QToolTip is a member of, or 0 if it isn't a member of any group. The tool tip group is the object responsible for relaying contact between tool tips and a status bar or something else which can show a longer help text. See also parentWidget() and QToolTipGroup.
[static] Hides any tip that is currently being shown. Normally, there is no need to call this function; QToolTip takes care of showing and hiding the tips as the user moves the mouse.

[virtual protected] This pure virtual function is half of the most versatile interface QToolTip offers. It is called when there is a chance that a tool tip should be shown, and must decide whether there is a tool tip for the point p in the widget this QToolTip object relates to. p is given in that widget's local coordinates. Most maybeTip() implementations will be of the form:

    if ( <something> ) {
        tip( <something>, <something> );
    }

The first argument to tip() (a rectangle) should include p, or QToolTip, the user, or both can be confused. See also tip().

[static] Returns the palette common to all tool tips. See also setPalette().

Returns the widget this QToolTip applies to. The tool tip is destroyed automatically when the parent widget is destroyed. See also group().

[static] Removes the tool tip from widget. If there is more than one tool tip on widget, only the one covering the entire widget is removed.

[static] Removes the tool tip for rect from widget. If there is more than one tool tip on widget, only the one covering rectangle rect is removed.

[static] Sets all tool tips to be enabled (shown when needed) or disabled (never shown). By default, tool tips are enabled. Note that this function affects all tooltips in the entire application. See also QToolTipGroup::setEnabled().

[static] Sets the font for all tool tips to font. See also font().

[static] Sets the palette for all tool tips to palette. See also palette().

[protected] Pops up a tip saying text right now, and removes that tip once the cursor moves out of rectangle rect (which is given in the coordinate system of the widget this QToolTip relates to). The tip will not come back if the cursor moves back; your maybeTip() has to reinstate it each time.
[protected] Pops up a tip saying text right now, and removes that tip once the cursor moves out of rectangle rect. The tip will not come back if the cursor moves back; your maybeTip() has to reinstate it each time.

This file is part of the Qt toolkit, copyright © 1995-2005 Trolltech, all rights reserved.
* Paul E. McKenney (paulmck@linux.vnet.ibm.com) wrote:
> On Sun, Jan 10, 2010 at 11:30:16)
>
> I didn't expect quite this comprehensive of an implementation from the
> outset, but I guess I cannot complain. ;-)
>
> Overall, good stuff.
>
> Interestingly enough, what you have implemented is analogous to
> synchronize_rcu_expedited() and friends that have recently been added
> to the in-kernel RCU API. By this analogy, my earlier semi-suggestion
> of synchronize_rcu() would be a candidate non-expedited implementation.
> Long latency, but extremely low CPU consumption, full batching of
> concurrent requests (even unrelated ones), and so on.

Yes, the main difference I think is that the sys_membarrier
infrastructure focuses on IPI-ing only the current process's running
threads.

> A few questions interspersed below.
>
> > Changelog since v1:
> >
> > - Only perform the IPI in CONFIG_SMP.
> > - Only perform the IPI if the process has more than one thread.
> > - Only send IPIs to CPUs involved with threads belonging to our process.
> > - Adaptative IPI scheme (single vs many IPI with threshold).
> > - Issue smp_mb() at the beginning and end of the system call.
> >
> > Changelog since v2:
> >
> > - Iteration on min(num_online_cpus(), nr threads in the process),
> >   taking runqueue spinlocks, allocating a cpumask, ipi to many to the
> >   cpumask. Does not allocate the cpumask if only a single IPI is needed.
> >
> > Just tried with a cache-hot kernel compilation using 6/8 CPUs.
> >
> > Normally: real 2m41.852s
> > With the sys_membarrier+1 busy-looping thread running: real 5m41.830s
> >
> > So... 2x slower. That hurts.
> >
> > So let's try allocating a cpu mask for PeterZ scheme. I prefer to have a
> > small allocation overhead and benefit from cpumask broadcast if
> > possible so we scale better.
> > But that all depends on how big the
> > allocation overhead is.
> >
> > Impact of allocating a cpumask (time for 10,000,000 sys_membarrier
> > calls, one thread is doing the sys_membarrier, the others are busy
> > looping). Given that it costs almost half as much to perform the
> > cpumask allocation than to send a single IPI, we iterate on the CPUs
> > until we find more than N matches or have iterated on all cpus. If we
> > only have N matches or fewer, we send single IPIs. If we need more than
> > that, then we switch to the cpumask allocation and send a broadcast IPI
> > to the cpumask we construct for the matching CPUs. Let's call it the
> > "adaptative IPI scheme".
> >
> > For my Intel Xeon E5405
> >
> > *This is calibration only, not taking the runqueue locks*
> >
> > Just doing cpumask alloc+IPI-many to T other threads:
> >
> > T=1: 0m21.778s
> > T=2: 0m22.741s
> > T=3: 0m22.185s
> > T=4: 0m24.660s
> > T=5: 0m26.855s
> > T=6: 0m30.841s
> > T=7: 0m29.551s
> >
> > So I think the right threshold should be 1 thread (assuming other
> > architectures will behave like mine). So starting with 2 threads, we
> > allocate the cpumask before sending IPIs.
> >
> > *end of calibration*
> >
> > Resulting adaptative scheme, with runqueue locks:
> >
> > T=1: 0m20.990s
> > T=2: 0m22.588s
> > T=3: 0m27.028s
> > T=4: 0m29.027s
> > T=5: 0m32.592s
> > T=6: 0m36.556s
> > T=7: 0m33.093s

> The below data is for how many threads in the process?

8 threads: one doing sys_membarrier() in a loop, 7 others waiting on a
variable.

> Also, is "top"
> accurate given that the IPI handler will have interrupts disabled?

Probably not, AFAIK. "top" does not really consider interrupts in its
accounting.
So, better take this top output with a grain of salt or two.

> > Cpu0 :100.0%us,  0.0%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
> > Cpu1 : 99.7%us,  0.0%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.3%hi,  0.0%si,  0.0%st
> > Cpu2 : 99.3%us,  0.0%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.7%hi,  0.0%si,  0.0%st
> > Cpu3 :100.0%us,  0.0%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
> > Cpu4 :100.0%us,  0.0%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
> > Cpu5 : 96.0%us,  1.3%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  2.6%si,  0.0%st
> > Cpu6 :  1.3%us, 98.7%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
> > Cpu7 : 96.1%us,  3.3%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.3%hi,  0.3%si,  0.0%st
> >
> > The system call number is only assigned for x86_64 in this RFC patch.
> >
> >  | 219 +++++++++++++++++++++++++++++++++++++++
> >  2 files changed, 221 insertions(+)
> >
> > Index: linux-2.6-lttng/arch/x86/include/asm/unistd_64.h
> > ===================================================================
> > --- linux-2.6-lttng.orig/arch/x86/include/asm/unistd_64.h	2010-01-10 22:23:59.000000000 -0500
> > +++ linux-2.6-lttng/arch/x86/include/asm/unistd_64.h	2010-01-10 22:29:30
> >
> > Index: linux-2.6-lttng/kernel/sched.c
> > ===================================================================
> > --- linux-2.6-lttng.orig/kernel/sched.c	2010-01-10 22:23:59.000000000 -0500
> > +++ linux-2.6-lttng/kernel/sched.c	2010-01-10 23:12:35.000000000 -0500
> > @@ -119,6 +119,11 @@
> >   */
> >  #define RUNTIME_INF	((u64)~0ULL)
> >
> > +/*
> > + * IPI vs cpumask broadcast threshold.
> > + * Threshold of 1 IPI.
> > + */
> > +#define ADAPT_IPI_THRESHOLD	1
> > +
> >  static inline int rt_policy(int policy)
> >  {
> >  	if (unlikely(policy == SCHED_FIFO || policy == SCHED_RR))
> >
> > @@ -10822,6 +10827,220 @@();
> > +}
> > +
> > +/*
> > + * Handle out-of-mem by sending per-cpu IPIs instead.
> > + */
>
> Good handling for out-of-memory errors!
>
> > +static void membarrier_cpus_retry(int this_cpu)
> > +{
> > +	struct mm_struct *mm;
> > +	int cpu;
> > +
> > +	for_each_online_cpu(cpu) {
> > +		if (unlikely(cpu == this_cpu))
> > +			continue;
> > +		spin_lock_irq(&cpu_rq(cpu)->lock);
> > +		mm = cpu_curr(cpu)->mm;
> > +		spin_unlock_irq(&cpu_rq(cpu)->lock);
> > +		if (current->mm == mm)
> > +			smp_call_function_single(cpu, membarrier_ipi, NULL, 1);
>
> There is of course some possibility of interrupting a real-time task,
> as the destination CPU could context-switch once we drop the ->lock.
> Not a criticism, just something to keep in mind. After all, the only ways
> I can think of to avoid this possibility do so by keeping the CPU from
> switching to the real-time task, which sort of defeats the purpose. ;-)

Absolutely. And it's of no use to add a check within the IPI handler to
verify if it was indeed needed, because all we would skip is a simple
smp_mb(), which is relatively minor in terms of overhead compared to the
IPI itself.

> > +	}
> > +}
> > +
> > +static void membarrier_threads_retry(int this_cpu)
> > +{
> > +	struct mm_struct *mm;
> > +	struct task_struct *t;
> > +	struct rq *rq;
> > +	int cpu;
> > +
> > +)
> > +		smp_call_function_single(cpu, membarrier_ipi, NULL, 1);
>
> Ditto.
>
> > +	}
> > +}
> > +
> > +static void membarrier_cpus(int this_cpu)
> > +{
> > +	int cpu, i, cpu_ipi[ADAPT_IPI_THRESHOLD], nr_cpus = 0;
> > +	cpumask_var_t tmpmask;
> > +	struct mm_struct *mm;
> > +
> > +	/* Get CPU IDs up to threshold */
> > +	for_each_online_cpu(cpu) {
> > +		if (unlikely(cpu == this_cpu))
> > +			continue;
>
> OK, the above "if" handles the single-threaded-process case.

No.
See

	+	if (unlikely(thread_group_empty(current)))
	+		return 0;

in the caller below. The "if" you present here simply ensures that we
don't do a superfluous function call on the current thread. It's
probably not really worth it for a slow path though.

> The UP-kernel case is handled by the #ifdef in sys_membarrier(), though
> with a bit larger code footprint than the embedded guys would probably
> prefer. (Or is the compiler smart enough to omit these functions given no
> calls to them? If not, recommend putting them under CONFIG_SMP #ifdef.)

Hrm, that's a bit odd. I agree that UP systems could simply return
-ENOSYS for sys_membarrier, but then I wonder how userland could
distinguish between:

- an old kernel not supporting sys_membarrier() -> in this case we need
  to use the smp_mb() fallback on the read-side and in synchronize_rcu().
- a recent kernel supporting sys_membarrier(), CONFIG_SMP -> can use the
  barrier() on the read-side, call sys_membarrier() upon update.
- a recent kernel supporting sys_membarrier(), !CONFIG_SMP -> calls to
  sys_membarrier() are not required, nor is barrier().

Or maybe we just postpone the userland smp_mb() question to another
thread. This will eventually need to be addressed anyway.
Maybe with a vgetmaxcpu() vsyscall.

> > +		spin_lock_irq(&cpu_rq(cpu)->lock);
> > +		mm = cpu_curr(cpu)->mm;
> > +		spin_unlock_irq(&cpu_rq(cpu)->lock);
> > +		if (current->mm == mm) {
> > +			membarrier_cpus_retry(this_cpu);
> > +			return;
> > +		}
> > +		for (i = 0; i < ADAPT_IPI_THRESHOLD; i++)
> > +			cpumask_set_cpu(cpu_ipi[i], tmpmask);
> > +		/* Continue previous online cpu iteration */
> > +		cpumask_set_cpu(cpu, tmpmask);
> > +		for (;;) {
> > +			cpu = cpumask_next(cpu, cpu_online_mask);
> > +			if (unlikely(cpu == this_cpu))
> > +				continue;
> > +			if (unlikely(cpu >= nr_cpu_ids))
> > +				break;
> > +			spin_lock_irq(&cpu_rq(cpu)->lock);
> > +			mm = cpu_curr(cpu)->mm;
> > +			spin_unlock_irq(&cpu_rq(cpu)->lock);
> > +			if (current->mm == mm)
> > +				cpumask_set_cpu(cpu, tmpmask);
> > +		}
> > +		smp_call_function_many(tmpmask, membarrier_ipi, NULL, 1);
> > +		free_cpumask_var(tmpmask);
> > +	}
> > +}
> > +
> > +static void membarrier_threads(int this_cpu)
> > +{
> > +	int cpu, i, cpu_ipi[ADAPT_IPI_THRESHOLD], nr_cpus = 0;
> > +	cpumask_var_t tmpmask;
> > +	struct mm_struct *mm;
> > +	struct task_struct *t;
> > +	struct rq *rq;
> > +
> > +	/* Get CPU IDs up to threshold */
> > +) {
>
> I do not believe that the above test is gaining you anything. It would
> fail only if the task switched since the __task_rq_unlock(), but then
> again, it could switch immediately after the above test just as well.

OK.
Anyway I think I'll go with the shorter implementation using the
mm_cpumask, and add an additional ->mm check with spinlocks.

> > +		membarrier_threads_retry(this_cpu);
> > +		return;
> > +	}
> > +	for (i = 0; i < ADAPT_IPI_THRESHOLD; i++)
> > +		cpumask_set_cpu(cpu_ipi[i], tmpmask);
> > +	/* Continue previous thread iteration */
> > +	cpumask_set_cpu(cpu, tmpmask);
> > +	list_for_each_entry_continue)
>
> Ditto.
>
> > +		cpumask_set_cpu(cpu, tmpmask);
> > +	}
> > +	smp_call_function_many(tmpmask, membarrier_ipi, NULL, 1);
> > +	free_cpumask_var(tmpmask);
> > +	}
> > +}
> > +
> > +/*
> > + *)
> > + *
> > + * We do not use mm_cpumask because there is no guarantee that each architecture
> > + * switch_mm issues a smp_mb() before and after mm_cpumask modification upon
> > + * scheduling change. Furthermore, leave_mm is also modifying the mm_cpumask (at
> > + * least on x86) from the TLB flush IPI handler. So rather than playing tricky
> > + * games with lazy TLB flush, let's simply iterate on online cpus/thread group,
> > + * whichever is the smallest.
> > + */
> > +SYSCALL_DEFINE0(membarrier)
> > +{
> > +#ifdef CONFIG_SMP
> > +	int this_cpu;
> > +
> > +	if (unlikely(thread_group_empty(current)))
> > +		return 0;
> > +
> > +	rcu_read_lock(); /* protect cpu_curr(cpu)-> and rcu list */
> > +	preempt_disable();
>
> Hmmm... You are going to hate me for pointing this out, Mathieu, but
> holding preempt_disable() across the whole sys_membarrier() processing
> might be hurting real-time latency more than would unconditionally
> IPIing all the CPUs. :-/

Hehe, I pointed this out myself a few emails ago :) This is why I
started by using raw_smp_processor_id().
Well, let's make it simple first, and then we can improve if needed.

> That said, we have no shortage of situations where we scan the CPUs with
> preemption disabled, and with interrupts disabled, for that matter.

Yep.

Thanks,
Mathieu

> > +	/*
> > +	 * Memory barrier on the caller thread _before_ sending first IPI.
> > +	 */
> > +	smp_mb();
> > +	/*
> > +	 * We don't need to include ourself in IPI, as we already
> > +	 * surround our execution with memory barriers.
> > +	 */
> > +	this_cpu = smp_processor_id();
> > +	/* Approximate which is fastest: CPU or thread group iteration ? */
> > +	if (num_online_cpus() <= atomic_read(&current->mm->mm_users))
> > +		membarrier_cpus(this_cpu);
> > +	else
> > +		membarrier_threads(this_cpu);
> > +	/*
> > +	 * Memory barrier on the caller thread _after_ we finished
> > +	 * waiting for the last IPI.
> > +	 */
> > +	smp_mb();
> > +	preempt_enable();
> > +	rcu_read_unlock();
> > +#endif /* #ifdef CONFIG_SMP */
> > +	return 0;
> > +}
> > +
> >  #ifndef CONFIG_SMP
> >
> >  int rcu_expedited_torture_stats(char *page)
> > --
http://lkml.org/lkml/2010/1/12/147
BGI library – Part 4 (BGI Image Handling Functions & Linear Drawing Functions)

BGI Image Handling Functions and Linear Drawing Functions

a) linerel (int dx, int dy)
b) lineto (int x, int y)
c) moverel (int dx, int dy)
d) moveto (int x, int y)
e) imagesize (int left, int top, int right, int bottom)
f) getimage (int left, int top, int right, int bottom, void far *bitmap)
g) putimage (int left, int top, void far *bitmap, int op)

a) linerel (int dx, int dy)

This function draws a line from the current position to a point that is a relative distance (dx, dy) away from it.

Example :

#include <graphics.h>
#include <conio.h>

int main()
{
    int gd = DETECT, gm;
    initgraph(&gd, &gm, "c:\\turboc3\\bgi");
    linerel(100, 100);
    getch();
    closegraph();
    return 0;
}

b) lineto (int x, int y)

This function draws a line from the current position to (x, y).

Example :

#include <graphics.h>
#include <conio.h>

int main()
{
    int gd = DETECT, gm;
    initgraph(&gd, &gm, "c:\\turboc3\\bgi");
    lineto(100, 200);
    getch();
    closegraph();
    return 0;
}

c) moverel (int dx, int dy)

This function moves the cursor by a relative distance of dx along the x-axis and dy along the y-axis.

Example :

#include <graphics.h>
#include <conio.h>

int main()
{
    int gd = DETECT, gm;
    initgraph(&gd, &gm, "c:\\turboc3\\bgi");
    moverel(100, 100);
    getch();
    closegraph();
    return 0;
}

d) moveto (int x, int y)

This function moves the cursor to (x, y).

Example :

#include <graphics.h>
#include <conio.h>

int main()
{
    int gd = DETECT, gm;
    initgraph(&gd, &gm, "c:\\turboc3\\bgi");
    moveto(100, 100);
    lineto(100, 200);
    getch();
    closegraph();
    return 0;
}

e) imagesize (int left, int top, int right, int bottom)

This function determines the size of the memory space required to store a bitmap image.

f) getimage (int left, int top, int right, int bottom, void far *bitmap)

This function copies an image from the screen to memory. The values stored by its arguments have been listed below :

# left : x-coordinate of the top-left corner of the block.
# top : y-coordinate of the top-left corner of the block.
# right : x-coordinate of the bottom-right corner of the block.
# bottom : y-coordinate of the bottom-right corner of the block.
# bitmap : the address of the memory location where the image would be stored.

g) putimage (int left, int top, void far *bitmap, int op)

This function puts an image (that was previously saved with getimage) back to the screen, with the upper-left corner of the image placed at (left, top). The address of the memory location from where the image is to be retrieved is represented by bitmap.
The last argument op specifies a combination operator that controls how the color for each destination pixel on the screen is computed. There are five possible values for op: COPY_PUT, XOR_PUT, OR_PUT, AND_PUT and NOT_PUT.

Example :

#include <graphics.h>
#include <conio.h>
#include <dos.h>
#include <malloc.h>
#include <iostream.h>
#include <stdlib.h>

void *buff;

int main()
{
    int gd = DETECT, gm, errorcode;
    initgraph(&gd, &gm, "c:\\turboc3\\bgi");
    errorcode = graphresult();
    if (errorcode != grOk)
    {
        cout << "Graphics error :: " << grapherrormsg(errorcode);
        cout << "\nPress any key to Exit...";
        getch();
        exit(1);
    }

    /* setbkcolor (int color) : Sets the current background color
       to the color specified by color. */
    setbkcolor(LIGHTCYAN);
    setcolor(RED);
    setfillstyle(SOLID_FILL, RED);
    circle(100, 100, 50);
    floodfill(100, 100, RED);
    setcolor(WHITE);
    outtextxy(75, 100, "MANISH");

    buff = malloc(imagesize(50, 50, 150, 150));
    getimage(50, 50, 150, 150, buff);
    putimage(50, 50, buff, XOR_PUT); /* Erase image */
    putimage(100, 100, buff, COPY_PUT);
    putimage(300, 100, buff, COPY_PUT);
    putimage(500, 100, buff, COPY_PUT);
    putimage(100, 300, buff, COPY_PUT);
    putimage(300, 300, buff, COPY_PUT);
    putimage(500, 300, buff, COPY_PUT);
    getch();
    closegraph();
    return 0;
}

Now, we have discussed all the functions in the BGI library. Using all the functions listed in the four articles of this series, you can create high-quality graphical applications of several types. To see some examples of this, you can view the following programs created using BGI graphics :

Did you like this article? Tell us in your comments below.

BGI graphics won't work on Visual Studio. You should use Turbo C++ for using BGI. Since you use Visual Studio, Windows GDI is what you need. We will post articles on GDI soon.

How do I use it? I use Visual Studio 2010 Express and I don't have the graphics.h file.

This article series on BGI library functions is really great.
I have been assigned a project to create some cool graphics using BGI, and these articles of yours helped me accomplish this difficult task. Thanks a lot 🙂
https://www.wincodebits.in/2015/07/bgi-library-functions-part-4-image-handling-functions-linear-drawing-functions.html
Aren't you afraid that you will be testing NHibernate then? It has its own set of tests.

But hey, didn't the intermediate unit test help you create your fluent api more quickly?

Not sure what you're saying here, maybe it's a lack of experience with NHibernate. Are you saying that your Mock hid the fact that you had a problem because you mocked something out behind the scenes? If so, in my experience I tend to write two sets of tests (potentially separate test libraries): one that does the unit testing with mocking, and another that is a straight integration test that ensures the whole thing is wired up properly.

We have an FI that builds up the XML mapping for NHibernate to help take away some of the pain of building, maintaining, and testing the XML mappings for NHibernate. Unfortunately, one of the things I wrote generated an XML element for an entity's NHibernate HBM mapping XML file, but it built the XML incorrectly. The test I wrote verified that the generator produced the correct XML, which was, in fact, NOT correct. So it was a cycle of stupidity where both the code and the test were wrong, and the test only verified that the code did what it was supposed to do; that is, generate incorrect XML. Not until actually loading the HBM XML into NHibernate did we (Jeremy) detect my mistake.

So Jeremy is calling into question whether that test I wrote was worth anything or not. Perhaps it provides some value in that it was a basic sanity check, but it didn't prove that the mapping worked. Is that another test? Perhaps. Or maybe it's just more efficient to try to load the HBM into NHibernate's config object and see what's up, and cut out the (incorrect) middle-man.

@cristian, Sure, it did help to flush out the syntax. What I've learned with Fluent Interfaces though is that it's best to go with more coarse-grained tests to avoid over-specification.
@Chad, Jeremy made the exact same mistake earlier, only you weren't around to see it blow up in my face ;)

@Ruud, I'm not testing NHibernate per se, I'm testing whether or not my particular interaction with NHibernate was done correctly (and it wasn't).

Are we gonna get to see how your fluent interface syntax for configuring mapping files looks sometime soon? I used Object Mapper in the past, which is a free tool to create mapping files, and it works fine, but again, each time you need to change anything you have to deal with XML files. It'd be nice to see how much readability we get with your fluent interface syntax.

@Sheraz, We will be Open Sourcing it at some point, but here's a sample (we generate the database from the Domain Model, so the column names are derived from the object properties):

public class SalesTransactionMap : DomainClassMap<SalesTransaction>
{
    public SalesTransactionMap()
    {
        Map(x => x.BuyerName);
        Map(x => x.DeliveryDate);
        Map(x => x.DeliveryNotes);
        Map(x => x.ExpirationDate);
        Map(x => x.OrderDate);
        Map(x => x.QuoteDate);
        Map(x => x.QuoteOrder);
        Map(x => x.TransactionId).ValueIsAutoNumber();
        Map(x => x.CustomerPurchaseOrder);
        Map(x => x.JobName);

        References(x => x.Customer);
        References(x => x.CustomerContact);
        References(x => x.SalesRepresentative);
        References(x => x.Warehouse);
        References(x => x.CustomerJob);

        Component<DeliveryDetails>(x => x.DeliveryDetails, m => configureDetails(m));

        HasMany<SalesLineItem>(x => x.Items);
    }

    private void configureDetails(ComponentPart<DeliveryDetails> deliveryMap)
    {
        deliveryMap.Map(x => x.Mechanism);
        deliveryMap.Map(x => x.DeliveryName);
        deliveryMap.Map(x => x.Distance);
        deliveryMap.Map(x => x.CarrierName);
        deliveryMap.References(x => x.CustomerDeliveryAddress);
    }
}

Could we also see the xml it ends up generating? Out of interest, is the plan to make it support mapping things like inheritance and so on longer term?
Be interested to see the result, but I must admit I don't find that the complexity/readability of the HBM mapping files themselves is too bad, other than in situations where the mappings involved are inherently complex (inheritance and some of the more complex dictionary mappings spring to mind).

@Colin, We hit inheritance yesterday. I stuffed in support for the one-table-per-hierarchy case (all I needed in my case), and it worked out well.

"I must admit I don't find that the complexity/readability of the HBM mapping files themselves is too bad,"

Given the choice between writing XML with strings or ripping through something with Intellisense, which would you rather have? Plus we're doing quite a bit of "convention over configuration" along the way as well (setting up id's, foreign key naming, taking care of enumeration mapping, pulling some metadata out of embedded validation stuff, etc.)

The inheritance is here:

public class CategoryMap : DomainClassMap<Category>
{
    public CategoryMap()
    {
        Map(x => x.Description);
        DiscriminateSubClassesOnColumn<string>("Type")
            .SubClass<WidgetCategory>().IsIdentifiedBy("Widget")
                .MapSubClassColumns(map => { })
            .SubClass<AreaCategory>().IsIdentifiedBy("Area")
                .MapSubClassColumns(map => { map.Map(x => x.TrackThickness); })
            .SubClass<LengthCategory>().IsIdentifiedBy("Length")
                .MapSubClassColumns(map => { map.Map(x => x.TrackOD); });
    }
}

> Given the choice between writing Xml with strings or ripping through something with Intellisense, which would you rather have?

I get you, but with the schema setup I don't really find it that difficult to write the XML. Deciding how to map from my classes to the tables is sometimes tricky, but once we've decided, actually writing out the mapping is usually relatively easy, and afterwards I've generally found the mapping files readable. It would be useful if we got refactoring support though, but it's not a killer.
> Plus we're doing quite a bit of "convention over configuration" along the way as well (setting up id's, foreign key naming, taking care of enumeration mapping, pulling some metadata out of embedded validation stuff, etc.)

Interesting, just being nosy: what do you mean on the enumeration mapping? I'm also guessing that on the validation side this is where you've used attributes to define your simpler validation.
http://codebetter.com/blogs/jeremy.miller/archive/2008/03/10/test-what-you-re-testing.aspx
I moved my SW install from a server that was using 9675 to a new server using port 80. Whenever a user receives an email from the system it still shows the URL with port 9675. How can I remove this so it just defaults to port 80? Thanks.

13 Replies

Jan 29, 2012 at 8:29 UTC

I assume you mean the content of the email they are receiving. You can change the URL in the Email Template under Settings - Helpdesk Settings. Go to the Ticket Notification Templates and view/edit. Change the URL there.

Jan 29, 2012 at 8:29 UTC

Here is what is in the emails:

Ticket URL: http:/
App: http:/

I want to remove the 9675. Can I do that? When you click the link the server isn't answering on it. When I remove 9675 and just default to 80 everything works fine.

Jan 29, 2012 at 8:42 UTC

That should be "as noted above" in the Template. Isn't it, or am I missing something? Additional: I removed all references from my Tickets (Template) as I don't use the User Portal. In turn I can't check against mine, i.e. the line with the URL. It should be close to the bottom of the Template from memory.

Jan 30, 2012 at 12:40 UTC

1. Be sure that your Email URL Hostname is correct: Settings - Email Settings - Show Additional Settings - Email URL Hostname. In your case that would probably be SRVMGMTITP01 and is probably already correct. This entry should NOT contain a port number.
2. Be sure that you've changed the port on the new SW server to 80: on the SW server, right-click on the SW icon in the system tray and choose Preferences. Be sure the port numbers are correct at the bottom of that box. Since it seems to work when you remove the 9675, this setting may already be correct as well.
3. Be sure your email template is referring to {{ticket.url}}

Let me know if you need any more details on the above. -Wayne

Jan 30, 2012 at 12:44 UTC

The email template has a variable such as: ticket.portal_url - "URL link to this ticket in the Spiceworks Portal" (http:/
But this is not where the problem is...
This variable gets the URL from the Spiceworks dataset, so when you changed your port to 80, somehow it did not change the variable data? I don't have a straightforward answer for this, but check out Additional Email Settings under Settings > Email Settings. Maybe try to re-install the latest version over your current setup. Hope you win with this.

Jan 30, 2012 at 1:00 UTC

Ug, I'm an idiot, I totally forgot about there. Good one WouterNel. Sorry about that JonMH, WouterNel is correct in that. Cheers

Jan 31, 2012 at 4:04 UTC

I keep getting the port 9675 showing in the emails generated by the app. How do I remove it so the link just uses port 80?

Ticket closed by Jonathan.

On Jan 29, 2012 @ 07:59 pm, Jonathan wrote: Assigned to it

Ticket Overview
Priority: Med
Creator:
Assignee: it networking
Ticket URL: http:/
App: http:/

Ticket Commands let you take control of your help desk remotely. Check the Spiceworks community for a full list of available commands and usage: http:/
Examples: #close, #add 5m, #assign to bob, #priority high

Ticket History
________________________________________
On Jan 31, 2012 @ 02:44 pm, Jonathan wrote: Ticket closed.
________________________________________
On Jan 29, 2012 @ 07:59 pm, Jonathan wrote:
________________________________________

Jan 31, 2012 at 4:10 UTC

Can you post the template code? Settings - Helpdesk Settings. Go to the Ticket Notification Templates and view/edit.

Jan 31, 2012 at 4:14 UTC

I suspect it is indeed in the Email Template as initially noted.
View\Edit the Template and then Search it (CTRL-F) for 9675 Jan 31, 2012 at 4:40 UTC I only have the standard;font-family:trebuchet ms;"> {% if message %} <p style="font-weight:bold">{{message}}</p> {% endif %} {%.<;"> <h2 style="margin-bottom:5px; margin-top:0px; font-size:11px;">Ticket Overview</h2> Priority: {{ticket.priority | escape}}<br/> Creator: {{ticket.creator.full_name_or_email | escape}}<br/> Assignee: {{ticket.assignee.full_name_or_email | escape}}<br/> Ticket URL: <a href="{{ticket.url | escape}}">{{ticket.url | escape}}</a><br/> App: <a href="{{app_url | escape}}">{{app_url | escape}}</a><br/> <br/> Ticket Commands let you take control of your help desk remotely. Check the Spiceworks community for a full list of available commands and usage:<br/> <a href="http:/ <br/> Examples: <tt>#close, #add 5m, #assign to bob, #priority high</tt> </div> {% endif %} {% if ticket.previous_comments != empty %} <br/> /> <a href="{{ticket.portal_url | escape}}">{{ticket.portal_url | escape}}</a><br/> {% endunless %} </td> </tr> </table> </body> </html> Jan 31, 2012 at 4:47 UTC Okay, that's all correct. Now how 'bout a screen shot of these settings? Settings - Email Settings - Show Additional Settings Feb 5, 2012 at 10:20 UTC Here you go.
https://community.spiceworks.com/topic/194427-link-that-helpdesk-is-sending-user-shows-port-9675
As of now this is in sec. 13-2 of the draft spec:

Created attachment 719598 [details] [diff] [review]
WIP 1 - many broken tests

Work to do that I can think of offhand:
- `() => x` arrow functions with no arguments don't work yet
- `(...rest) => x` arrow functions with rest arguments don't work yet
- 'this' should be lexical, unlike other functions
- arrow functions should have no 'arguments' binding, unlike other functions
- FunctionToString is moderately broken

Created attachment 719637 [details] [diff] [review]
WIP 2 - implement arrow functions with block bodies

Needs tests. More TODO items:
- tests for arrow-block functions
- Reflect.parse support
- ban yield in arrow functions

fiveop, a volunteer, wants to take on the "() => body" syntax.

Just did Reflect.parse; I'll work on rest arguments next...

Created attachment 719756 [details] [diff] [review]
WIP 3 - adding rest parameters and Reflect.parse support

Created attachment 720109 [details] [diff] [review]
WIP 4 - tweaks for rest params, more tests, ban 'yield' in arrow functions

Created attachment 721380 [details] [diff] [review]
v5

OK, I'm punting lexical 'this' to follow-up bug 848062 so that this can land.

Comment on attachment 721380 [details] [diff] [review]
v5

Review of attachment 721380 [details] [diff] [review]:
-----------------------------------------------------------------

::: js/src/frontend/Parser.cpp
@@ +1483,4 @@
>  {
> +    FunctionBox *funbox = pc->sc->asFunctionBox();
> +
> +    bool parenFree = false;

Maybe name this 'parenFreeArrow' to be more descriptive.

@@ +1603,5 @@
>          if (!defineArg(funcpn, name, disallowDuplicateArgs, &duplicatedArg))
>              return false;
>
>          if (tokenStream.matchToken(TOK_ASSIGN)) {
> +            JS_ASSERT(!parenFree);

What guarantees this assert?

@@ +4803,5 @@
>  #endif
>
> +    // Save two source locations, both just in case we turn out to be in an
> +    // arrow function. 'offset' is for FunctionToString. 'start' is for
> +    // rewinding and reparsing arguments.
This seems unfortunate, can you describe a bit more in the comment why this is necessary?

@@ +4807,5 @@
> +    //.

(In reply to Brian Hackett (:bhackett) from comment #9)
> ::: js/src/frontend/Parser.cpp
> >          if (tokenStream.matchToken(TOK_ASSIGN)) {
> > +            JS_ASSERT(!parenFree);
>
> What guarantees this assert?

I added a comment:

    // A default argument without parentheses would look like:
    // a = expr => body, but both operators are right-associative, so
    // that would have been parsed as a = (expr => body) instead.
    // Therefore it's impossible to get here with parenFreeArrow.
    JS_ASSERT(!parenFreeArrow);

> > +    //.

Yup. Done.

This appears to have landed. Please add the URL for the patch.

Comment on attachment 721380 [details] [diff] [review]
v5

Review of attachment 721380 [details] [diff] [review]:
-----------------------------------------------------------------

::: js/src/frontend/TokenStream.h
@@ +558,5 @@
>      }
>
>      TokenKind peekToken() {
>          if (lookahead != 0) {
> +            JS_ASSERT(lookahead <= maxLookahead);

Given the following line, shouldn't this be |1 <= maxLookahead|?

@@ +573,5 @@
>      }
>
>      TokenKind peekTokenSameLine(unsigned withFlags = 0) {
> +        if (lookahead != 0) {
> +            JS_ASSERT(lookahead <= maxLookahead);

Ditto.
https://bugzilla.mozilla.org/show_bug.cgi?id=846406
CC-MAIN-2017-34
refinedweb
494
58.08
In the previous article we discussed the fundamentals of the IoC and DI design patterns. In case you have missed it, you can read more about it here. In the same article we also discussed how Windsor can be used to solve this problem. In this article we will take up a simple example and try to implement DI using Unity Application Blocks, thus resulting in a loosely coupled architecture.

The problem

Creational patterns

To begin, bring the "Microsoft.Practices.Unity" namespace into your code as shown in figure <<here>>.

©2016 C# Corner. All contents are copyright of their authors.
http://www.c-sharpcorner.com/uploadfile/shivprasadk/di-using-unity-application-blocks/
Data Structure Interview Questions in Java

Data Structure is one of the toughest and most important subjects for interviews, so you have to prepare well for it if you want to clear the interview. Here you will find the most important data structure interview questions in Java to help you prepare.

A data structure is a particular way of organizing data on a computer so that it can be used effectively.

What is a Data Structure?

A data structure is a way to store and organize data so that it can be used efficiently. The topic includes arrays, pointers, structures, linked lists, stacks, queues, graphs, searching, sorting, and related programs.

The data structures provided by the Java utility package are very powerful and perform a wide range of functions. These data structures consist of the following interfaces and classes:

- Enumeration
- BitSet
- Vector
- Stack
- Dictionary
- Hashtable
- Properties

1. What is a Stack?

Solution: A stack is an ordered list in which insertion and deletion can be performed only at one end, which is called the top. It is a recursive data structure with a pointer to its top element. The stack is sometimes called a Last-In-First-Out (LIFO) list, i.e. the element which is inserted first into the stack will be deleted last from the stack.

2. How do you find the middle element of a linked list in one pass?

Solution: Walk the list with two pointers that both start at the head: a slow pointer that moves one node per step and a fast pointer that moves two nodes per step. When the fast pointer reaches the end of the list, the slow pointer is at the middle element, and the list has been traversed only once.

3. What are linear and non-linear types of data structures? Also, how is an array different from a linked list?

Solution: In a linear data structure the elements are arranged sequentially, one after another; arrays, linked lists, stacks and queues are linear. In a non-linear data structure the elements are arranged hierarchically or as a network; trees and graphs are non-linear. An array stores its elements in contiguous memory, has a fixed size, and allows constant-time access by index, whereas a linked list stores each element in a separate node holding a reference to the next node, so it can grow and shrink dynamically and supports cheap insertion and deletion, but only sequential access.

4. What is a postfix expression?

Solution: An expression in which operators follow the operands is known as a postfix expression.
The main benefit of this form is that there is no need to group sub-expressions in parentheses or to consider operator precedence. The expression "a + b" will be represented as "ab+" in postfix notation.

5. What are infix, prefix, and postfix notations?

Solution:
- Infix: the operator is written between its operands, e.g. a + b.
- Prefix: the operator is written before its operands, e.g. + a b.
- Postfix: the operator is written after its operands, e.g. a b +.

6. What is a multidimensional array?

Solution: A multidimensional array can be defined as an array of arrays in which the data is stored in tabular form, consisting of rows and columns. 2D arrays are created to implement a relational-database-lookalike data structure. They provide ease of holding a bulk of data at once, which can be passed to any number of functions wherever required.

7. Calculate the address of a random element present in a 2D array, given the base address BA.

Solution:
Row-Major Order: If the array is declared as a[m][n], where m is the number of rows and n is the number of columns, then the address of an element a[i][j] stored in row-major order is calculated as
Address(a[i][j]) = BA + (i * n + j) * size
Column-Major Order: If the array is declared as a[m][n], where m is the number of rows and n is the number of columns, then the address of an element a[i][j] stored in column-major order is calculated as
Address(a[i][j]) = BA + ((j * m) + i) * size

8. Which data structures are used for BFS and DFS of a graph?

Solution:
- A queue is used for BFS.
- A stack is used for DFS. DFS can also be implemented using recursion (note that recursion also uses the function call stack).

9. Write a Java program to insert a node at the beginning of a circular singly linked list.

Solution:

public class InsertAtStart {
    // Represents a node of the list.
    public class Node {
        int data;
        Node next;

        public Node(int data) {
            this.data = data;
        }
    }

    // Declaring head and tail pointers as null.
    public Node head = null;
    public Node tail = null;

    // This function adds a new node at the beginning of the list.
    public void addAtStart(int data) {
        // Create new node
        Node newNode = new Node(data);
        // Checks if the list is empty.
        if (head == null) {
            // If list is empty, both head and tail would point to new node.
            head = newNode;
            tail = newNode;
            newNode.next = head;
        } else {
            // Store head in a temporary node
            Node temp = head;
            // New node will point to temp as next node
            newNode.next = temp;
            // New node will be the head node
            head = newNode;
            // Since it is a circular linked list, tail will point to head.
            tail.next = head;
        }
    }

    // Displays all the nodes in the list
    public void display() {
        Node current = head;
        if (head == null) {
            System.out.println("List is empty");
        } else {
            System.out.println("Adding nodes to the start of the list: ");
            do {
                // Prints each node by incrementing the pointer.
                System.out.print(" " + current.data);
                current = current.next;
            } while (current != head);
            System.out.println();
        }
    }

    public static void main(String[] args) {
        InsertAtStart cl = new InsertAtStart();
        // Adding 1 to the list
        cl.addAtStart(1);
        cl.display();
        // Adding 2 to the list
        cl.addAtStart(2);
        cl.display();
        // Adding 3 to the list
        cl.addAtStart(3);
        cl.display();
        // Adding 4 to the list
        cl.addAtStart(4);
        cl.display();
    }
}

10. Define the tree data structure.

Solution: A tree is a recursive data structure containing a set of one or more data nodes, where one node is designated as the root of the tree and the remaining nodes are called the children of the root. The nodes other than the root are partitioned into non-empty sets, each of which is called a sub-tree.

11.

12. How do you know if a linked list has a loop?

Solution: Use Floyd's cycle-detection algorithm: advance a slow pointer one node at a time and a fast pointer two nodes at a time. If the list has a loop, the two pointers eventually meet inside it; if the fast pointer reaches the end of the list, there is no loop.

13. Differentiate among cycle, path, and circuit.

Solution:
- Path: A path is a sequence of adjacent vertices connected by edges, with no restrictions.
- Cycle: A cycle can be defined as a closed path where the initial vertex is identical to the end vertex. No vertex in the path may be visited twice.
- Circuit: A circuit can be defined as a closed path where the initial vertex is identical to the end vertex.
Any vertex may be repeated.

14. Write a Java program to sort an array using the Bubble Sort algorithm.

Solution:

package test;

import java.util.Arrays;

public class BubbleSort {

    public static void main(String args[]) {
        int[] unsorted = {32, 39, 21, 45, 23, 3};
        bubbleSort(unsorted);

        int[] test = {5, 3, 2, 1};
        bubbleSort(test);
    }

    public static void bubbleSort(int[] unsorted) {
        System.out.println("unsorted array before sorting : " + Arrays.toString(unsorted));

        //Each pass bubbles the largest remaining element to the end of the array.
        for (int i = 0; i < unsorted.length - 1; i++) {
            //Compare adjacent elements and swap them if they are out of order.
            for (int j = 1; j < unsorted.length - i; j++) {
                if (unsorted[j - 1] > unsorted[j]) {
                    int temp = unsorted[j];
                    unsorted[j] = unsorted[j - 1];
                    unsorted[j - 1] = temp;
                }
            }
            System.out.printf("unsorted array after %d pass %s: %n", i + 1, Arrays.toString(unsorted));
        }
    }
}

15. What are the advantages of binary search over linear search?

Solution: There are relatively fewer comparisons in binary search than in linear search. In the average case, linear search takes O(n) time to search a list of n elements, while binary search takes O(log n) time.

16. How do you reverse a String in Java?

Solution: There are many ways to reverse a String in Java and other programming languages; one way is to use a built-in function such as reverse() from the StringBuffer (or StringBuilder) class.

17. What do you understand by a Linked List and what are its different types?

Solution: A linked list is a linear data structure in which elements are stored in nodes, and each node holds its data together with a reference to the next node. Its main types are the singly linked list (each node points to the next node), the doubly linked list (each node points to both its next and its previous node), and the circular linked list (the last node points back to the first node).

18. What do you understand by Stack and where can it be used?

Solution: A stack is a Last-In-First-Out (LIFO) data structure in which insertion (push) and deletion (pop) happen only at one end, called the top. It is used for function call management (the call stack), expression evaluation and conversion (e.g. infix to postfix), undo operations, backtracking, and parenthesis matching.
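The fast/slow two-pointer technique behind questions 2 and 12 can be sketched in Java as follows; the class and helper names are illustrative, not from the article:

```java
// Illustrative sketch: findMiddle locates the middle node of a singly
// linked list in one pass, and hasLoop is Floyd's cycle detection.
public class Lists {

    public static class Node {
        public int data;
        public Node next;
        public Node(int data) { this.data = data; }
    }

    // Build a singly linked list from an int array (helper for demos).
    public static Node fromArray(int[] values) {
        Node head = null, tail = null;
        for (int v : values) {
            Node n = new Node(v);
            if (head == null) { head = n; } else { tail.next = n; }
            tail = n;
        }
        return head;
    }

    // Question 2: the fast pointer advances two nodes per step, the slow
    // pointer one; when fast reaches the end, slow is at the middle.
    public static Node findMiddle(Node head) {
        Node slow = head, fast = head;
        while (fast != null && fast.next != null) {
            slow = slow.next;
            fast = fast.next.next;
        }
        return slow;
    }

    // Question 12 (Floyd's algorithm): if the two pointers ever meet, the
    // list contains a loop; if fast falls off the end, it does not.
    public static boolean hasLoop(Node head) {
        Node slow = head, fast = head;
        while (fast != null && fast.next != null) {
            slow = slow.next;
            fast = fast.next.next;
            if (slow == fast) return true;
        }
        return false;
    }

    public static void main(String[] args) {
        Node list = fromArray(new int[]{1, 2, 3, 4, 5});
        System.out.println(findMiddle(list).data); // 3
        System.out.println(hasLoop(list));         // false
    }
}
```

For the list 1→2→3→4→5 this prints 3 and then false.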
SYCL Tutorial 1: The Vector Addition

19 May.

Articles in this series

- Tutorial 1: The Vector Addition
- Tutorial 2: The Vector Addition Re-visited
- Tutorial 3: Integrating SYCL Into Stanford University Unstructured

1 Introduction

In this post of the SYCL™ Tutorial series, we illustrate how to implement a simple three-way vector addition using SYCL, and how this code can run on different platforms even if there is no GPU or an OpenCL™ implementation available. The provisional spec for SYCL is available on the official Khronos website. Rather than re-inventing the wheel, we are using a very nice tutorial from Simon McIntosh-Smith's (University of Bristol) OpenCL lectures, which can be found on his GitHub page. The OpenCL C and OpenCL C++ Wrappers versions of all the exercises are available there for reference. In this post, we will use Exercise 05 of the tutorial mentioned above, which is sufficiently simple for our current introductory purposes.

2 Vector Addition in OpenCL

First of all, let's recap how vector addition can be implemented using OpenCL. Remember that in classic OpenCL, we need two pieces of code: the host code, with our main program and the OpenCL host functions; and the device code, which contains our kernel. The kernel is typically stored in a string, which is compiled by the OpenCL implementation's built-in compiler via an API call (e.g. clCreateProgramWithSource). The data we use on the device has to be copied over with API calls, and we even need to specify the device, the platform and many other things in order to make the program work. The whole code can be seen here; it is too long to reproduce in this post! Note that from line 100 to 213, everything is set-up code (platform, device, kernel, buffer, parameters, etc.). Using the C++ wrappers, which have been around for a while, slightly reduces the problem. Now the set-up code has been reduced from 83 to 63 lines of code.
However, note that even if you are not seeing it, it is still there, just hidden from view in a set of templates inside a header file - so good luck with the debugging! Also, the kernel code is still in a separate file, which contains an additional 11 lines (not including comments). 3 The SYCL Way SYCL offers simple abstractions to core OpenCL features. Rather than just putting C++ classes on top of OpenCL objects, these abstractions have been designed with C++ and Object Oriented programming paradigms in mind. The snippet shown below illustrates the implementation of the three-way vector addition using SYCL. If you are a lucky Codeplay employee, you can even build and run it in your favourite development platform using our prototype implementation! So in SYCL, without sparing any indentation and commenting opportunities, we have the entire OpenCL code in 25 lines, including even the device kernel that we were not taking into account in previous samples. That is a great reduction over bare OpenCL C, and even the C++ wrappers! Note also that the kernel is inlined with the code: The kernel is still valid C++ code, and we can still run it on the host if there is no device available or if we want to debug it. 
#include <sycl.hpp>
#include <cstdio>
#include <cstdlib>
#include <vector>

using namespace cl::sycl;

#define TOL (0.001)    // tolerance used in floating point comparisons
#define LENGTH (1024)  // Length of vectors a, b and c

int main() {
    std::vector<float> h_a(LENGTH);             // a vector
    std::vector<float> h_b(LENGTH);             // b vector
    std::vector<float> h_c(LENGTH);             // c vector
    std::vector<float> h_r(LENGTH, 0xdeadbeef); // r vector (result)

    // Fill vectors a, b and c with random float values
    int count = LENGTH;
    for (int i = 0; i < count; i++) {
        h_a[i] = rand() / (float)RAND_MAX;
        h_b[i] = rand() / (float)RAND_MAX;
        h_c[i] = rand() / (float)RAND_MAX;
    }

    {
        // Device buffers
        buffer<float, 1> d_a(h_a);
        buffer<float, 1> d_b(h_b);
        buffer<float, 1> d_c(h_c);
        buffer<float, 1> d_r(h_r);

        queue myQueue;
        command_group(myQueue, [&]() {
            // Data accessors
            auto a = d_a.get_access<access::read>();
            auto b = d_b.get_access<access::read>();
            auto c = d_c.get_access<access::read>();
            auto r = d_r.get_access<access::write>();

            // Kernel
            parallel_for(count, kernel_functor([=](id<> item) {
                int i = item.get_global(0);
                r[i] = a[i] + b[i] + c[i];
            }));
        });
    }

    // Test the results
    int correct = 0;
    float tmp;
    for (int i = 0; i < count; i++) {
        tmp = h_a[i] + h_b[i] + h_c[i]; // assign element i of a+b+c to tmp
        tmp -= h_r[i];                  // compute deviation of expected and output result
        if (tmp * tmp < TOL * TOL) {    // correct if square deviation is less than tolerance squared
            correct++;
        } else {
            printf(" tmp %f h_a %f h_b %f h_c %f h_r %f \n",
                   tmp, h_a[i], h_b[i], h_c[i], h_r[i]);
        }
    }

    // summarize results
    printf("R = A+B+C: %d out of %d results were correct.\n", correct, count);
    return (correct == count);
}
Data shared between host and device is defined using the SYCL buffer class [Specification Section 3.3.1]. The class provides different constructors to initialize the data from various sources. In this case, we use a constructor from STL Vectors, which transfers data ownership to the SYCL runtime. As usual in C++ programs, when the buffer gets out of scope, the destructor is called. In this case the ownership is transferred back to the original STL vector. Buffers are not associated to particular queues or context, and they are capable of handling data transparently in multiple devices. Also note that SYCL buffers do not require read/write information, as this is defined on a per-kernel basis via the accessor class. The next thing we need is a queue to enqueue our kernels. In OpenCL we will need to set up all the other related classes on our own; but using SYCL we can use the default constructor of the queue class to automatically target the first OpenCL-enabled device available. We could use other constructors and related classes to choose a particular device or a device that has a certain property (and still the code will be much simpler than using traditional OpenCL), but we will keep it minimal for now. Once we have created the queue object, we can enqueue kernels. Together with the code itself, we need additional information to enqueue and run the kernel, such as the parameters and the dependencies that a certain kernel may have on other kernels. All that information is grouped in command_group object [Specification Section 3.2.6]. In this case we create an anonymous command group object. The constructor receives the queue where we want to run the kernel, and a lambda or functor which contains the kernel and the associated accessors. The accessor class characterizes the access of the kernel to the data it requires, i.e. if it is read, write, read/write, or many other access modes. Accessors are just templated objects that can be created from different types. 
Since the most common type used for accessors will be the buffer, the buffer class offers a get_access method to construct an accessor from an existing buffer. In this case we create four accessors, one for each buffer. The first three accessors are read-only - the data is only read inside the kernel - whereas the last one is write, since we are writing there the result of the three-way vector addition. This allows the device compiler to generate more efficient code, and the runtime to schedule different command groups as efficiently as possible. We want to run the vector addition in parallel for each element of the three different vectors, so we use the parallel_for statement to execute the kernel a certain number of times. The parallel_for statement is one of the different ways you can launch kernels in SYCL. See [Specification section 3.7] for details on other ways of enqueueing kernels. The first parameter of the parallel_for is the number of work-items to use, in this case we use one work-item per number of elements in the vector. The second parameter is the kernel itself, provided as a kernel_functor instance. kernel_functor is a convenience class that enables creating the kernel instance from different sources, such as legacy OpenCL kernels or, as is the case in this sample, a simple C++11 lambda. The lambda used for parallel_for expects an id parameter, which is the class that represents the current work-item. It features methods to get detailed information from it, such as local or work group info or global work group info. In this case, the contents of the lambda represent what will be executed for each work-item. Note that different execution methods can change this behaviour! In this case the contents of the kernel are pretty much equal to the ones used in classic OpenCL, but we can access local scalar variables from the kernel without adding additional code. 
Also, we can call host functions and methods from inside the kernel, and we use templates and other fancy features inside. We can also use native OpenCL capabilities, such as vector types, images, functions, and so on, via the cl::openclc namespace. Note that things that are not supported by the device hardware won't work, e.g. we cannot call virtual methods or use host function pointers inside the device. In upcoming entries, we will show you how to overcome these limitations. There is nothing else required! Under the hood, the SYCL runtime will enqueue the kernel and run it for you. The host will wait so that the data can be copied back to the host when the ownership of the buffer is transferred at the end of the scope. 5 Building and Running The process of building and running a SYCL program may vary across implementations. The specification defines two possible ways of implementing it: via a single-source compiler that produces everything, or using two separate compilers, one for the device code and other for the host code. The advantage of this second approach is that the host compiler does not need to support device code, only needs to be able to parse the SYCL C++ library. The device compiler will produce a header file containing the binaries containing kernels that can be integrated with the final program binary. When running the sample program above, the runtime will try to find a GPU device to run the kernel. If there is no GPU device, then it tries to use an OpenCL-capable CPU device. If none of them are available, the runtime will fallback to host mode, where the kernels are executed on the host using a C++ runtime, so if there isn't an OpenCL implementation available, we can still execute our kernels! 6 Conclusions In this blog post we have ported a simple three-way vector add program from OpenCL to SYCL. We have demonstrated that SYCL is simple to use, and facilitates integrating OpenCL into existing C++ code. 
Future blog posts will demonstrate more advanced features of the programming model, including templates, virtual inheritance, and much more. Fasten your seat belts fellow C++ developers, a new world of possibilities is coming to your nearest machine! Khronos, SPIR, and SYCL are trademarks of the Khronos Group Inc. OpenCL and the OpenCL logo are trademarks of Apple Inc. used by permission by Khronos.
Getting started with webpack - Part 2: Configuration and modules You will need Node 6.11.5+ installed on your machine. In the previous part of the series, we learned the very basics of webpack and how we can use it to bundle JavaScript files. We also learned how to use webpack to watch for file changes and bundle when changes are detected. In this part of the series, we will dig deeper into webpack and see what else is possible. Let’s get started. Source code for the application is available on GitHub. Prerequisites To follow along in this series, you need the following requirements: - Completed all previous parts of the series. - Basic knowledge of JavaScript. - Basic knowledge of the CLI. - A text editor. VS Code is recommended. - Node.js (>=6.11.5) and npm installed locally. Let’s continue with the series. Configuring webpack In the first part of the series, we did not have to configure webpack, we just installed it using npm and started using it. However, webpack requires a configuration file and if it does not find one in your project directory, it will use the one it comes bundled with. The webpack configuration file contains many options and you can use these options to configure webpack to your liking. You can specify the entry points, output points, minification options, and more. To create a webpack configuration file, create a webpack.config.js file in the root of the project. If you still have the project we created in the first part of the series, we will be using that. If you don’t have it, you can download it from the GitHub repository. Now create a new webpack.config.js file in the project root. By default, webpack will look for this file in the root of your application. 
However, you can use whatever file name you want and instruct webpack on where to find the configuration file using the following command:

$ webpack --config "/path/to/webpack.config.js"

If you don’t have webpack installed globally, you’ll need to add npx or node_modules/.bin before the command as stated in the first part of the series.

Open the webpack.config.js file and paste the following code:

// File: ./webpack.config.js
const webpack = require('webpack')

module.exports = {
  // Insert the configuration here
}

This is the base for the configuration and we will typically add our configuration options to the exports object above. Let’s start by telling webpack our input file and output file. In the exports object, add the following:

// File: ./webpack.config.js
const webpack = require('webpack')
const path = require('path')

module.exports = {
  mode: 'development',
  entry: path.resolve(__dirname + '/src/index.js'),
  output: {
    path: path.resolve(__dirname + '/dist/assets'),
    filename: 'bundle.js'
  }
}

We use __dirname and path.resolve here to get the absolute path to the current file. Webpack requires absolute paths when specifying the path to a file.

Above, we have specified the entry point for webpack as well as the output path and filename. This makes sure webpack starts compiling at the src/index.js file and outputs to the specified path and file. We also set the mode webpack should run in to development. Other valid values for mode are production and none.

Now that we have this minor configuration, let’s see if it’ll bundle our application as specified. Open the package.json file and replace the scripts with the following:

// File: ./package.json
{
  // [...]
  "scripts": {
    "build": "webpack",
    "watch": "npm run build -- --watch"
  },
  // [...]
}

Above, we have removed the CLI options that specified the entry, output, and mode for webpack, and left just the webpack command.
We can do this because we have configured the entry, output, and mode in the webpack.config.js file. Now let’s update the ./src/index.js file to see if our changes will take effect. Replace the contents of the file with the following: // File: ./src/index.js document.addEventListener('DOMContentLoaded', function () { window.setTimeout(function () { document.getElementsByTagName('h1')[0].innerHTML = 'Hello there sport' }, 1000); }); Now, if you have not already, run the command below to install the dependencies: $ npm install After installation is complete, run the following command to compile the scripts: $ npm run build If all goes well, you should see that there is a new ./dist/assets/bundle.js file in the project as configured in the configuration file. There is a lot more to configure when it comes to webpack, you can read more in the documentation here. Understanding ES6 modules While working with webpack, you will likely be doing a lot of module importing. So let’s see what modules are and how you can use them to make your JavaScript files modular. JavaScript has had modules for a while but it was implemented via libraries. ES6 is the first time it was introduced as a built-in feature. Modules are essentially files that export some functionality that can then be reused in other places in your code. Let’s see an example of what a module is. In this example JavaScript file, let’s define a function that generates random characters: // random.js function randomNumbers(min, max) { min = Math.ceil(min); max = Math.floor(max); return Math.floor(Math.random() * (max - min + 1)) + min; } The function above is simple enough, you give it a min number and max number and it’ll return a random number from the min to the max. 
Named module exports

To make the module export this function so it is available to other files, we have to export it by adding the export keyword before the function keyword like this:

// random.js
export function randomNumbers(min, max) {
  min = Math.ceil(min);
  max = Math.floor(max);
  return Math.floor(Math.random() * (max - min + 1)) + min;
}

After the function in your module has been exported, you can import it in other JavaScript files and use the randomNumbers function. For example:

// main.js
// Imports the function from the module
import { randomNumbers } from './random.js';

// Displays a random number between 100 and 10000
console.log(randomNumbers(100, 10000));

Multiple named module exports

There are other ways to import and export. Above, we made named exports. Named exports have to be imported with the name that they were exported with. You can have multiple named exports in a single file, for example:

// random.js
// First named export
export function randomNumbers(min, max) {
  min = Math.ceil(min);
  max = Math.floor(max);
  return Math.floor(Math.random() * (max - min + 1)) + min;
}

// Second named export
export function randomString() {
  function randStr() {
    return Math.random().toString(36).substring(2, 15)
  }
  return randStr() + randStr();
}

Above, we can see that we added a new export randomString to our previous example, and now we have two named exports in this module. We can import and use them as shown below:

// main.js
// Imports the functions from the module
import { randomNumbers, randomString } from './random.js';

// Displays a random number between 100 and 10000
console.log(randomNumbers(100, 10000));

// Displays a random string
console.log(randomString());

As seen above, we imported both the randomNumbers and randomString functions from the module and then used them in the current file.
We can also import all available exports in a module in one go like this:

// main.js
// Imports everything from the module
import * as generate from './random.js';

// Displays a random number between 100 and 10000
console.log(generate.randomNumbers(100, 10000));

// Displays a random string
console.log(generate.randomString());

Above, we have imported all the available exports by using the * wildcard. We also specified an alias object generate to store all the exports. This alias can be any word you want. Using this method, however, is not encouraged. You should import the modules you need one by one when possible. This helps to keep the file size smaller and also makes it so you compile only what you use.

Default module exports

Generally, it’s always a good idea for your modules to have a single responsibility. In this case, we can have a default export in the module. It will look something like this:

// random.js
export default function (min, max) {
  min = Math.ceil(min);
  max = Math.floor(max);
  return Math.floor(Math.random() * (max - min + 1)) + min;
}

As seen above, we have added the default keyword after the export keyword. We also removed the function’s name. Now we can import the module like this:

// main.js
// Imports the function from the module
import generateRandomNumbers from './random.js';

// Displays a random number between 100 and 10000
console.log(generateRandomNumbers(100, 10000));

As seen above, instead of importing any named export, we can define an arbitrary name for the default export when we are importing it.

Note that ES6 imports have to be top-level; therefore, you can’t conditionally import a module using an if statement.

Using ES6 modules in our code

Let’s see how we can use modules in our code. Assuming you still have the code from part one, we will use that as the base.
Create a new file src/utilities/random.js and paste the following code:

// File: ./src/utilities/random.js
export default function() {
  function randStr() {
    return Math.random()
      .toString(36)
      .substring(2, 15)
  }
  return randStr() + randStr();
}

Next, open the src/index.js file and replace the content with the following code:

// File: src/index.js
import generateRandomString from './utilities/random';

document.addEventListener('DOMContentLoaded', function () {
  var randomString = 'Random String: ' + generateRandomString();
  window.setTimeout(function () {
    document.getElementsByTagName('h1')[0].innerHTML = randomString
  }, 0);
});

Now, let’s build the application. Run the command below to compile our code using webpack:

$ npm run build

When the build is complete, open dist/index.html and replace the bundle.js script URL with assets/bundle.js:

<!-- File: ./dist/index.html -->
    <script src="./assets/bundle.js"></script>
  </body>
</html>

Then open dist/server.js and replace the contents with the following:

// File: ./dist/server.js
const express = require('express');
const app = express();
const port = 3000;
const path = require('path');

app.get('/assets/bundle.js', (req, res) => (
  res.sendFile(path.join(__dirname + '/assets/bundle.js'))
));

app.get('/', (req, res) => (
  res.sendFile(path.join(__dirname + '/index.html'))
));

app.listen(port, () => console.log(`Example app listening on port ${port}!`));

Now you can run the following command to launch our Node.js server:

$ node dist/server.js

When you visit localhost:3000 in your browser, you should see the application running.

Conclusion

In this tutorial of the series, we have learned how to configure webpack and define some defaults. We also learned how modules work in ES6. However, webpack is a lot more powerful than this; we will dive a little deeper in the next part. The source code to this application is available on GitHub.

February 6, 2019 by Neo Ighodaro
Bluetooth on Lopy4 doesn't detect any device

- AlejandroReyes:

Hello everyone, I'm trying to work with Bluetooth on the LoPy4, and first I used this code from the documentation:

from network import Bluetooth

bluetooth = Bluetooth()
bluetooth.start_scan(-1)  # start scanning with no timeout
while True:
    print(bluetooth.get_adv())

I just wanted to see how it works, but it doesn't detect any Bluetooth device. I already tried with many devices: iPhone, Samsung, Huawei and even a laptop, but the problem persists; all I get on the terminal is:

None
None
None
None
None

I don't understand why this code from the documentation is not working, and I hope someone knows something about it.

Reply:

Hi @AlejandroReyes, I've just tried this code and managed to find my BT headphones with it. The issue could be that your devices don't use BLE, so the LoPy board couldn't find them. Currently, only BLE is supported. You can find more on this on the Bluetooth docs page. Let me know if this helped!
I needed this system to work for me on a project that was already fully programmed in regular AS3. That meant I had to shoehorn Starling/Feathers into an otherwise normal Flash project, and I’ve come to learn that this isn’t the “normal” way that people come to start using Starling. Unfortunately there was no way to start over with the project as it has been in use for a few years now and we were just putting the final touches on the iPad port. So a lot of my frustration came from getting Starling to play nice with Flash. I would definitely recommend going all-Starling if possible. My HaikuDeck follows. You can also view the slide commentary on the actual Haiku Deck website here: Download the demo files here.

What’s Starling?

Starling is a Flash framework (library), mainly used for running games in Adobe AIR on mobile devices (iOS and Android). You can download it at. Starling is neat – it runs directly on the GPU using stage3D, so essentially it’s super fast and supports 3D. I haven’t had a chance to play around with it outside of this project, but it sounds quite powerful. You’ll need Starling to run Feathers.

Ok, what’s Feathers?

Feathers is a UI library that runs on Starling. It creates pretty UI elements for mobile devices, like scrollbars that don’t suck! Download it at. Take a minute to check out the components explorer too – it’ll give you a good sample of the kind of stuff you can make with Feathers.

Configuration

You need to change a few settings in order to set all this up. I don’t know why, but I had a terrible time figuring out the file paths necessary to make all this work. My solution follows (and will work with the demo files).

Set publish to AIR (iOS or Android)

This one should be obvious – you’re working with a file you intend to be published on a mobile device, so your publish settings need to reflect that.
Side note: If you use a version below AIR 3.4, you might run into problems. You need Flash Player 11 for all this to work out of the box and I suspect that older versions of AIR are also using older Flash Players. Set render mode to “direct” If you read the directions on the Starling site, this will come up. Because of the way Starling uses the GPU, you need to set the render mode in the AIR settings to “direct”. If you don’t, you’ll get a big warning when you try to run the SWF. Add Feathers and Starling SWC to Library Path In the Actionscript settings you can add new SWCs to the library path. The one for Starling is in the “bin” folder. The Feathers one is in the “swc” folder. You’ll need both. In the screenshot, you’ll see that Flex/Core frameworks were also added. These are added automatically by Flash Professional the first time you run the SWF (you’ll have to accept a dialog box and then re-publish). Add the theme source files to the Source Path In the source path, also in Actionscript settings, you’ll add the theme source folder. For the demo, it’s VICMobileTheme > source. A note on themes: The theme that comes packaged with Feathers is called MetalWorksMobileTheme, and you might want to start experimenting with this before using my theme or trying to make your own. You will probably have to skin your theme at some point though, so the tutorial I used is here. Note that you need the TexturePacker software, which costs about $30. Setting up the code There are a few parts to this, but it only amounts to about 25 lines to get you started using the FeathersComponents class included in the tutorial. 
- Import Starling and Feathers classes - Initialize Starling - Get the Starling root - Format your list as an array of objects import starling.core.Starling; import starling.events.Event; import ca.pie.feathers.FeathersComponents; var starling:Starling = new Starling( ca.pie.feathers.FeathersComponents, stage ); starling.start(); starling.addEventListener( starling.events.Event.ROOT_CREATED, init ); var gui:FeathersComponents; var fruitArray:Array; var words:String = “Put a lot of words here!”; function init($evt:starling.events.Event):void { gui = FeathersComponents(Starling.current.root); gui.addEventListener("actionPerformed", onListClick); fruitArray = new Array (//Put a long array of objects here!); //These are custom functions from the FeathersComponents class gui.initMainList(fruitArray, new Rectangle(50, 150, 300, 300)); gui.initScrollText(words, new Rectangle(450, 150, 450, 140)); } function onListClick($evt:starling.events.Event):void { alertText.text = "List item labelled " + gui.currentListItem.description + " was chosen."; } One of the things I was particularly interested in was the List component in Feathers, which creates a scrolling list of items that are selectable, which is why that’s included in the demo. You can read all about it on the Feathers site/API, but basically the List takes an array of objects as its data provider. If you’re not familiar with the syntax, an object literal looks like the following: {description: “Words to go in list”, accessorySource: myTexture} You can have as many properties as you want, but these two are pretty basic. The first one, which I called “description” in the FeathersComponents class, is the actual description that will show up in the list item’s box. It’s a String. The other property, accessorySource, defines a texture (which is the Starling equivalent of a Bitmap). The texture can be applied to the list like an icon. You don’t need this, but I wanted to show how it works in the tutorial files. 
So, the actual array looks more like this:

fruitArray = new Array(
    {description: "strawberry", accessorySource: gui.sArrowTexture},
    {description: "apple", accessorySource: gui.sArrowTexture},
    {description: "grape", accessorySource: gui.sArrowTexture},
    {description: "rhubarb", accessorySource: gui.sArrowTexture},
    {description: "orange", accessorySource: gui.sArrowTexture},
    {description: "pear", accessorySource: gui.sArrowTexture},
    {description: "raspberry", accessorySource: gui.sArrowTexture},
    {description: "elderberry", accessorySource: gui.sArrowTexture},
    {description: "clementine", accessorySource: gui.sArrowTexture},
    {description: "guava", accessorySource: gui.sArrowTexture},
    {description: "kumquat", accessorySource: gui.sArrowTexture},
    {description: "starfruit", accessorySource: gui.sArrowTexture},
    {description: "canteloupe", accessorySource: gui.sArrowTexture},
    {description: "banana", accessorySource: gui.sArrowTexture},
    {description: "watermelon", accessorySource: gui.sArrowTexture},
    {description: "passionfruit", accessorySource: gui.sArrowTexture},
    {description: "mango", accessorySource: gui.sArrowTexture}
);

That’s all you need to get started. The two methods being called, gui.initMainList and gui.initScrollText, come from the FeathersComponents custom class. They both take a Rectangle as an argument as a way to determine their size and position on the Stage. (You can’t put Starling display objects on the Flash display list, so you can think of everything sitting right on the stage, or technically beneath the stage.)

But wait, there’s some weird stuff

Starling runs on the GPU and as such, its display list is actually completely underneath the Flash display list. So if you need things to overlap and you are modifying an existing file, you will need to make everything Starling objects so they overlap properly.
This was a bit of a pain in the ass with an existing project, so this is one of the major reasons that you should consider going all the way with Starling if you’re creating a new project.

Text will appear tiny when testing in Flash Professional’s AIR debugger. It’s fine on the actual device, so don’t panic.

Starling has different terminology for some display features. Since technically everything in Starling is created in 3D, a Rectangle becomes a Quad, and Bitmaps are now Textures (stretched over a flat plane in 3D). You’ll get used to it.

As mentioned earlier, Starling has its own version of lots of stuff that exists in AS3. These will clash with the Flash namespace, so you have to spell things out for Flash. Basically Starling was trying to name things the same as Flash so AS3 programmers would already understand the syntax and terminology. Starling has its own MovieClip, display list, and Event model. When you want to use one, you have to call it a new “starling.display.MovieClip” or new “starling.events.Event”. That’s all fine, but keep in mind if you have a duplicate in Flash, you also have to go back and change all your Flash MovieClips to “flash.display.MovieClip” and events to “flash.events.Event”, otherwise the compiler gets all confused and unhappy.

That’s the worst of it though. After my 2 weeks of slogging through figuring out Feathers and making components work, I was able to give my tutorial and files to a few coworkers and we got everything working in another Flash project within a few hours, so there’s hope!
mp3 to buffer

I guess this is something for Topher ;-)

In everyday’s Max world, loading mp3 files is still a pain in the ass because it uses QT to import them into a buffer~, and it takes so looooooooooooooooong. As I’m able to read and write mp3 (and ogg too) in my Java externals for Max, I wonder what’s the best way to import an mp3 file from an mxj, then poke it into an MSP buffer~. My Java buffer seems to have its size expressed in frames, and seems to be an array of bytes.

1 – Do I have to use a Java MSPBuffer? In this case, how do I fill it?
2 – Or do I have to directly poke into an MSP buffer~ outside? How? Signal outlet, remote send?

thanks in advance

f.e

In fact, all this would be a problem of converting a byte array to floats?! But how do we do this?

On 13 Apr 2006, at 11:52, f.e wrote:
> In fact, all this would be a problem of converting a Byte array to
> floats ?! But how do we do this

Depending on the underlying representation, you could look at DataInputStream and DataOutputStream. (I use these for marshalling OSC messages – they work fine.)

– N.

nick rothwell — composition, systems, performance
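As an illustrative aside (not from the thread): the byte-array-to-float conversion the posters are discussing can be sketched in Python with the standard struct module. The sample format here (16-bit signed little-endian PCM) is an assumption, since the thread never pins one down:

```python
import struct

# Pretend this came from an mp3 decoder: 16-bit signed little-endian PCM.
raw = struct.pack("<4h", 0, 16384, -16384, 32767)

# Unpack the byte array into integers, then scale to floats in [-1.0, 1.0),
# which is the range an MSP buffer~ expects.
samples = [s / 32768.0 for s in struct.unpack("<4h", raw)]
print(samples)  # [0.0, 0.5, -0.5, 0.999969482421875]
```

In Java the equivalent would be reading shorts from a DataInputStream and dividing by 32768.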
lisp(1) [bsd man page]

LISP(1)                   General Commands Manual                   LISP(1)

NAME
       lisp - lisp interpreter

SYNOPSIS
       lisp

DESCRIPTION:
       atom     dptr     load     putd     rplacd
       bcdp     drain    null     putprop  set
       car      eq       numberp  ratom    terpr
       cdr      equal    outfile  read
       close    eval     patom    readc
       concat   get      pntlen   retbrk
       cons     getd     portp    return
       cont     infile   print    rplaca

       Nlambda functions (possibly simulating ones which are normally lambdas):

       add1     difference  onep     quotient  zerop
       and      exit        or       reset
       break    go          plus     setq
       cond     minus       product  sub1
       cond     mod         prog     sum
       def      not         quote    times

       The following functions are provided as lisp code (and at the moment must be read in by saying (load 'auxfns)):

       add      copy     length      numbp
       append   defevq   linelength  pp_etc
       apply*   defprop  member      reverse
       charcnt  defprop  memcar      terpri
       chrct    diff     memcdr
       conc     last     nconc

       All of the above functions are documented in the ``Harvard Lisp Manual.''

       The following functions are provided as in MIT's MACLISP.

       alphalessp  do        mapc    setsyntax
       apply       explodec  mapcar  throw
       ascii       exploden  prog2   tyi
       catch       funcall   progn   tyipeek
       defun       implode   progv   tyo
       `

AUTHORS
       Originally written by Jeff Levinsky, Mike Curry, and John Breedlove. Keith Sklower made it work and is maintaining the current version. The garbage collector was implemented by Bill Rowan.

SEE ALSO
       Harvard UNIX Lisp Manual
       MACLISP Manual
       UCB Franz Lisp Manual

BUGS
       The status bits for setsyntax are not the same as for MACLISP. Closing down a pipe doesn't always seem to work correctly. Arrays are not implemented in version 1.

3rd Berkeley Distribution                                           LISP(1)

lisp(3gv)                                                          lisp(3gv)

NAME
       geomview lisp interpreter

NOTE
       This document describes the geomview 1.3 lisp interpreter. This version is incompatible with previous versions in several ways. Since the previous one was used mostly just by Geometry Center staff, I am not going to write a document detailing the changes.
The geomview lisp interpreter is not very well documented in general because I am strongly considering phasing it out and replacing it with a real lisp interpreter in a future version of geomview. If you have any questions about the current version or how to convert programs from an older version please contact me directly [ mbp@geomtech.com ].

SYNOPSIS
       #include "lisp.h"

       void LInit();
       Lake *LakeDefine(FILE *streamin, FILE *streamout, void *river);
       void LakeFree(Lake *lake);
       LObject *LNew(LType *type, LCell *cell);
       LObject *LRefIncr(LObject *obj);
       void LRefDecr(LObject *obj);
       void LWrite(FILE *fp, LObject *obj);
       void LFree(LObject *obj);
       LObject *LCopy(LObject *obj);
       LObject *LSexpr(Lake *lake);
       LObject *LEval(LObject *obj);
       LObject *LEvalSexpr(Lake *lake);
       LList *LListNew();
       LList *LListAppend(LList *list, LObject *obj);
       void LListFree(LList *list);
       LList *LListCopy(LList *list);
       LObject *LListEntry(LList *list, int n);
       int LListLength(LList *list);
       int LParseArgs(char *name, Lake *lake, LList *args, ...);
       int LDefun(char *name, LObjectFunc func, char *help);
       void LListWrite(FILE *fp, LList *list);
       LInterest *LInterestList(char *funcname);
       LObject *LEvalFunc(char *name, ...);
       int LArgClassValid(LType *type);
       void LHelpDef(char *key, char *message);
       LDEFINE(name, ltype, doc)
       LDECLARE((name, LBEGIN, ..., LEND));

Geometry Center                   Oct 22 1992                             1

lisp(3)                                                             lisp(3)

DESCRIPTION
       Geomview contains a minimal lisp interpreter for parsing and evaluating commands. This lisp interpreter is part of the "-loogutil" library and thus any program which links with this library may use the interpreter. This provides a simple but powerful way to build up a command language. This manual page assumes that you are familiar with the syntax of lisp. The first part describes the basics of using the interpreter. Some gory details that don't concern most users then follow.

       The main steps in using the lisp interpreter are

       1. call LInit() to initialize the interpreter
       2.
make calls to LDefun(), one for each lisp function you want the interpreter to know about
       3. define the "i/o lake"
       4. parse input with calls to LSexpr() and evaluate the resulting lisp objects with LEval() (or use LEvalSexpr() to combine both steps).

       For example the following code defines a single function "f" and executes commands from standard input:

       #include "lisp.h"

       Lake *lake;
       LObject *obj, *val;

       LInit();
       LDefun("f", f, NULL);
       lake = LakeDefine(stdin, stdout, NULL);
       while (!feof(stdin)) {
           obj = LSexpr(lake);
           val = LEval(obj);
           LFree(obj);
           LFree(val);
       }

       The second argument to LDefun() is a function pointer; LDefun() sets up a correspondence between the string "f" and the function f, which is assumed to have been previously declared. The section FUNCTION DEFINITIONS below gives the expected call syntax and behavior of such functions. (The third argument to LDefun() is a pointer to a string which documents the function and may be NULL if you don't care about documentation.) LakeDefine() defines an i/o lake; this is a generalization of the notion of an i/o stream. Most programs don't need to use the generalization, though, and can simply pass FILE pointers as LakeDefine()'s first two arguments and NULL as the third one. The section LAKES below gives the details for those who are interested.

       LSexpr() [which stands for Lisp Symbolic EXPRession] parses a single lisp expression from the lake, returning a lisp object which represents that expression. The lisp object is returned as an LObject pointer, which points to an opaque structure containing a representation of the expression. LEval() then evaluates the object; it is during the call to LEval() that the action of the expression takes place. Note that the last two lines of code in this example could have been replaced by the single line LEval(LSexpr(lake)) or, more efficiently, by LEvalSexpr(lake).
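As an illustrative aside (not part of the man page), the parse-then-evaluate split that LSexpr()/LEval() implement can be sketched in a few lines of Python; every name below is invented for the sketch:

```python
# Minimal sketch of a parse-then-eval lisp loop, mirroring the
# LSexpr()/LEval() split described above.

def tokenize(src):
    return src.replace("(", " ( ").replace(")", " ) ").split()

def parse(tokens):
    # Build one S-expression (nested lists / atoms) from the token stream.
    tok = tokens.pop(0)
    if tok == "(":
        expr = []
        while tokens[0] != ")":
            expr.append(parse(tokens))
        tokens.pop(0)  # drop the closing ")"
        return expr
    try:
        return float(tok)
    except ValueError:
        return tok  # a symbol

FUNCS = {"+": lambda *a: sum(a), "*": lambda *a: a[0] * a[1]}

def leval(expr):
    # Evaluation happens only here, after parsing is complete.
    if isinstance(expr, list):
        fn = FUNCS[expr[0]]
        return fn(*[leval(e) for e in expr[1:]])
    return expr

print(leval(parse(tokenize("(+ 1 (* 2 3))"))))  # 7.0
```

The interpreter described in the man page does the same two passes, just in C and with reference-counted LObject values.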
FUNCTION DEFINITIONS
       The functions defined by calls to LDefun() are expected to have a certain call syntax; LEval() calls them when it encounters a call to the lisp function named with the corresponding string. The macro LDEFINE is provided for declaring them. For example:

       LDEFINE(f, LSTRING, "(f a b) returns a string representing the sum of the integer a with the floating point number b.")
       {
           int a;
           float b;
           char buf[20], *s;

           LDECLARE(("f", LBEGIN, LINT, &a, LFLOAT, &b, LEND));

           sprintf(buf, "%f", a + b);
           s = strdup(buf);
           return LNew(LSTRING, &s);
       }

       The important things about this function are:

       1. It is declared with the LDEFINE macro, the general syntax of which is LDEFINE(name, type, helpstr). name should be a valid C identifier and will be used to construct the actual name of the C function by prepending an 'L' and the name of the help string by prepending an 'H'. type should be a lisp object type identifier (see below) and determines the type of object that the function returns. helpstr is a documentation string.

       2. The use of the LDECLARE macro. More about this below.

       3. It returns an LObject *. All lisp functions must actually return a value. If you don't care what value they return you can return one of the pre-defined values Lnil or Lt (and specify LVOID as the type in the LDEFINE header).

       This particular example is a function which takes two arguments, an int and a float, and returns a string object representing their sum. A lisp call to this function might look like "(f 1 3.4)".

       The LDECLARE macro, defined in lisp.h, sets up the correspondence between variables in the C code and arguments in the lisp call to the function. Note that the arguments to LDECLARE are delimited by *two* pairs of parentheses (this is because C does not allow macros with a variable number of arguments; LDECLARE thus actually takes one argument which is a parenthesized list of an arbitrary number of items).
The general usage of LDECLARE is

       LDECLARE(( name, LBEGIN, <argspec>, ..., LEND ));

       where name is the name of the function (as specified to LDefun()). <argspec> is an argument specification, which in general consists of a lisp type identifier followed by an address. The identifier indicates the data type of the argument. The builtin type identifiers are LINT (integer), LFLOAT (float), LSTRING (string), LLOBJECT (lisp object), and LLIST (lisp list). Applications may define additional types whose identifiers may also be used here; see the section CUSTOM LISP TYPES below for details. There may be any number of <argspec>'s; the last must be followed by the special keyword LEND.

STOP HERE
       Most users of the lisp interpreter can stop reading this man page here. What follows is only used in advanced situations.

EVALUATION OF FUNCTION ARGUMENTS
       Normally the lisp interpreter evaluates function arguments before passing them to the function; to prevent this evaluation from happening you can insert the special token LHOLD in an LDECLARE argument specification before the type keyword. For example LHOLD, LLIST, &list, specifies an unevaluated list argument. This feature is really useful only for LLIST, LLOBJECT, and array types (see below) since the other types evaluate to themselves.

ARRAYS
       In general an <argspec> in the LDECLARE call consists of a keyword followed by the address of a scalar data type. Since it is relatively common to use a lisp list to represent an array of values, however, the special <argspec> keyword LARRAY is provided for dealing with them. It has a different syntax: it should be followed by a lisp type identifier which specifies the type of the elements of the array and then by two addresses --- the address of an array and the address of an integer count. Upon entry to LDECLARE the count specifies how many elements may be written into the array.
LDECLARE then modifies this number to indicate the number of entries actually parsed. For example:

       LDEFINE(myfunc, ...)
       {
           float f[2];
           int fn = 2;

           LDECLARE(("myfunc", LBEGIN, LHOLD, LARRAY, LFLOAT, f, &fn, LEND));

           /* at this point the value of fn has been modified to be the
              number of entries actually appearing in the list argument;
              and this number of values have been written into the
              array f. */
           ...
       }

       defines a function "myfunc" which takes a list of up to 2 floats as its only argument. Valid calls to "myfunc" would be "(myfunc ())", "(myfunc (7))", and "(myfunc (7 8))". Note the use of LHOLD; this is necessary because otherwise the lisp system would attempt to evaluate the list as a function call before passing it off to myfunc.

OPTIONAL ARGUMENTS
       Normally the lisp interpreter will generate (to stderr) a reasonable error message if a function is called with fewer arguments than were specified in LDECLARE. Some functions, however, may have some arguments that are optional. You can define a function which takes optional arguments by putting the keyword LOPTIONAL after the last required argument in the LDECLARE call. Any arguments specified in the list after that are considered optional; the interpreter doesn't complain if they are not supplied. Note that all optional arguments must come after all required arguments.

       Normally excess arguments also elicit an error message. The LREST keyword allows control over this situation. If LREST is followed by a pointer to an LList * variable, then trailing arguments are parsed, evaluated (unless LHOLD was used), and the list of them is stored in the given variable. (Note that the value is an LList, not an LObject of type LLIST -- if there are no excess arguments, the value is NULL, not an empty LLIST.) If LREST is followed by a NULL pointer, excess arguments are silently ignored. LREST might be useful when a function's argument types are not known.
It's not necessary to specify LEND after LREST.

LISP OBJECTS
       The basic data type of the lisp interpreter is the lisp object; it is represented by an LObject pointer, which points to an opaque data structure. The functions for manipulating lisp objects (i.e. the object's methods) are:

       LNew(): creates a new lisp object of the given type with the given value. The "type" argument is one of the values LSTRING or LLIST, or a type pointer defining a custom object type (see CUSTOM OBJECT TYPES below).

       LRefIncr(): increments the reference count of a lisp object. The lisp interpreter uses the convention that when a procedure returns a lisp object, the caller owns the object and thus has responsibility for freeing it. LRefIncr() can be used to increment the reference count of an existing object about to be returned. New objects created by LNew() have their reference count initialized to 1 and hence do not need to be LRefIncr()'ed.

       LRefDecr(): decrements the reference count of a lisp object. This should probably not be called by application programs; it is used internally.

       LWrite(): writes a formatted string representation of a lisp object to a stream.

       LFree(): frees the space associated with a lisp object.

       LCopy(): constructs a copy of a lisp object.

CUSTOM OBJECT TYPES
       In addition to the predefined lisp object types you may define your own custom types. This is done by constructing a structure containing various function pointers for manipulating objects of your new type. The address of this structure is then the type identifier for this type and may be used in LDECLARE <argspec>'s and in LNew() calls. (The type names LINT, LSTRING and the other builtin types are actually pointers to predefined structures.)
The structure is of type LType as defined in lisp.h:

       struct LType {
           /* name of type */
           char *name;

           /* size of corresponding C type */
           int size;

           /* extract cell value from obj */
           int (*fromobj)(/* LObject *obj, void *x */);

           /* create a new LObject of this type */
           LObject *(*toobj)(/* void *x */);

           /* free a cell of this type */
           void (*free)(/* void *x */);

           /* write a cell value to a stream */
           void (*write)(/* FILE *fp, void *x */);

           /* test equality of two cells of this type */
           int (*match)(/* void *a, void *b */);

           /* pull a cell value from a va_list */
           void (*pull)(/* va_list *a_list, void *x */);

           /* parse an object of this type */
           LObject *(*parse)(/* Lake *lake */);

           /* magic number; always set to LTypeMagic */
           int magic;
       };

       The void * pointers in the above point to objects of the type you are defining. For examples of how to define new types see the code in lisp.c that defines the string and list types. See also the file TYPES.DOC in the lisp source code directory for further details.

LISTS
       The LList pointer is used to refer to objects of type LLIST, which implement a linked list. The operations on these objects are LListNew(), LListLength(), LListEntry(), LListAppend(), LListCopy(), and LListFree(). These are mostly used internally by the lisp system but are available for outside use. Maybe I'll write more documentation for them later if it seems necessary.

LAKES
       The Lake structure is a generalization of an input stream. It contains three members: an input FILE pointer ("streamin"), an output FILE pointer ("streamout"), and an arbitrary pointer ("river"). The input FILE pointer is required; the lisp interpreter assumes that every lake has a valid input file pointer. The output FILE pointer is required if you do any operations that result in the interpreter producing any output. The third pointer may point to whatever you want. The lisp interpreter itself does not directly refer to this pointer.
It may be used by the parser that you supply when defining a new lisp object type.

       The term "Lake" is supposed to connote something more general than a stream; it also seemed particularly appropriate since this interpreter was written in the City of Lakes.

HIDDEN LAKE ARGUMENTS AND OTHER WET THINGS
       This section is X rated. Don't read it unless you are really serious.

       The lisp interpreter works by first parsing (LSexpr()) an expression then evaluating it (LEval()). The LDECLARE macro is a mechanism which allows both the syntax (for parsing) and the semantics (for evaluation) of an expression to be specified in the same convenient place --- at the top of the C function which implements the function. The call syntax of all such C functions is

       LObject *func(Lake *lake, LList *args)

       When parsing a call to the corresponding lisp function, LSexpr() calls func with that lake pointer, and with args pointing to the head of the list in the parse tree corresponding to this function call. LDECLARE parses the arguments in the call (by reading them from the lake) and appends them to this list. (Note: the head of this list is the function itself, so the first argument becomes entry #2 in the list.)

       When evaluating the function call, LEval() calls func with lake=NULL and with args pointing to the call's argument list. (In this case the first entry of the list is the first argument.) LDECLARE then converts the arguments in the list into the appropriate C data types, writing their values into the addresses in the <argspec>s.

       One side-effect of using lake=NULL as the signal to evaluate rather than to parse is that the value of the lake pointer is not available at evaluation time. Some functions, however, may want to do something with the lake they were parsed from. For example, the "write" function in geomview writes data to the output stream associated with its input stream.
(In geomview these streams are all stored in a general "Pool" structure which is retained as the "river" member of the lake.) The special token LLAKE may be used to cause the lake pointer to be saved in the args list at parse time and written into a variable at evaluation time. It is used exactly like the other (scalar) argument keywords:

       LObject *func(Lake *lake, LList *args)
       {
           Lake *mylake;

           LDECLARE(("myfunc", LBEGIN, LLAKE, &mylake, ..., LEND));
           ...
       }

       At evaluation time LDECLARE will set mylake to have the value that lake had at parse time. This looks just like a specification for an argument to the lisp function but it is not --- it is just a way to tell LDECLARE to remember the lake pointer between parse- and evaluation-time.

BUGS
       The documentation is incomplete.

AUTHOR
       The lisp interpreter was written mostly by Mark Phillips with lots of input and moral support from Stuart Levy and Tamara Munzner.
A Machine Learning Model inside the Container

Introduction

In this article, I am going to show you how to create a machine learning model inside a Docker container.

Steps:
- Pull the CentOS image from Docker Hub
- Install Python in the container
- Create the ML model inside Docker

First, we have to check whether Docker is installed, so we use this command:

docker --version

To use Docker we have to start the Docker service, for which I used this command:

systemctl start docker

Now we have to pull the CentOS image from Docker Hub with this command:

docker pull centos

We can check that the image was downloaded with the “docker images” command. To launch the container (named mlmod here, since we refer to it by that name later), we run:

docker run -it --name mlmod centos

Now we are inside our container and we have to install some software for running the ML model. First, install Python with this command:

yum install python3

Now we have to install the NumPy library:

pip3 install numpy

For loading the dataset we need the pandas library, so we install it with pip3:

pip3 install pandas

We have to install the scikit-learn library to use linear regression:

pip3 install scikit-learn

We have now installed all the libraries needed to run our program.

I have used WinSCP to transfer my data from Windows to RedHat. Now my dataset “Salary_Data.csv” is there on RedHat, and we can check with the ls command. Next we have to send this file from our base OS to the container. This command has to be run in the base OS terminal:

docker cp Salary_Data.csv mlmod:/root/project

Now Salary_Data.csv is copied into the container, and we have to write our machine learning code and train our model.
Now let's create test.py (“vi test.py”):

import pandas as pd
import numpy as np
from sklearn.linear_model import LinearRegression
import joblib

dataframe = pd.read_csv('Salary_Data.csv')
x = dataframe['YearsExperience'].values.reshape(30, 1)
y = dataframe['Salary']

model = LinearRegression()
model.fit(x, y)

joblib.dump(model, 'Salary_model.pkl')

The “python3 test.py” command will train the model and save it. Then create ml_model.py (“vi ml_model.py”):

import joblib

model = joblib.load('Salary_model.pkl')
num = float(input("years of experience: "))
predict = model.predict([[num]])
print(predict)
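To sanity-check the same training-and-prediction flow without the container or the real CSV, here is a self-contained sketch that uses synthetic data in place of Salary_Data.csv; the column names mirror the article, but the numbers are made up:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
import joblib

# Synthetic stand-in for Salary_Data.csv: salary = 9000 * years + 25000.
years = np.arange(1, 11, dtype=float)
df = pd.DataFrame({"YearsExperience": years, "Salary": 9000 * years + 25000})

x = df["YearsExperience"].values.reshape(-1, 1)  # -1 avoids hard-coding the row count
y = df["Salary"]

model = LinearRegression()
model.fit(x, y)
joblib.dump(model, "Salary_model.pkl")

# Reload the pickled model the way ml_model.py does, then predict.
loaded = joblib.load("Salary_model.pkl")
print(round(loaded.predict([[5.0]])[0]))  # 70000 for this synthetic data
```

Because the synthetic data is exactly linear, the fitted line reproduces it, which makes the round-trip through joblib easy to verify.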
2013-11-09 23:11:49 8 Comments

I have read articles about the differences between SOAP and REST as web service communication protocols, but I think that the biggest advantages of REST over SOAP are:

- REST is more dynamic, with no need to create and update a UDDI (Universal Description, Discovery, and Integration) registry.
- REST is not restricted to the XML format. RESTful web services can send plain text/JSON/XML.

But SOAP is more standardized (e.g., security). So, am I correct in these points?

@Jose Manuel Gomez Alvarez 2018-05-23 15:41:13

Among many others already covered in the other answers, I would highlight that SOAP enables you to define a contract, the WSDL, which defines the operations supported, complex types, etc. SOAP is oriented toward operations, but REST is oriented toward resources. Personally, I would select SOAP for complex interfaces between internal enterprise applications, and REST for public, simpler, stateless interfaces with the outside world.

@Premraj 2015-12-08 23:38:04

REST (REpresentational State Transfer)

The REpresentational State of an object is transferred: that is REST. In other words, we don't send the object itself, we send the state of the object. REST is an architectural style.
It doesn’t define as many standards as SOAP. REST is for exposing public APIs (e.g. the Facebook API, Google Maps API) over the internet to handle CRUD operations on data. REST is focused on accessing named resources through a single consistent interface.

SOAP (Simple Object Access Protocol)

SOAP brings its own protocol and focuses on exposing pieces of application logic (not data) as services. SOAP exposes operations. SOAP is focused on accessing named operations, where each operation implements some business logic. Though SOAP is commonly referred to as "web services", this is a misnomer. SOAP has very little, if anything, to do with the Web. REST provides true Web services based on URIs and HTTP.

Why REST?

For example: application/xml or application/json for POST, and GET /user/1234.json or GET /user/1234.xml for GET requests.

Why SOAP?

source1 source2
Its main idea is: wrap the previous XML (the actual message) into yet another XML (containing encoding info and other helpful stuff), and send it over HTTP. The POST method of the HTTP is used to send the message, since there is always a body. The main idea of this whole approach is that you map a URL to a function, that is, to an action. So, if you have a list of customers in some server, and you want to view/update/delete one, you must have 3 URLS: myapp/read-customerand in the body of the message, pass the id of the customer to be read. myapp/update-customerand in the body, pass the id of the customer, as well as the new data myapp/delete-customerand the id in the body The REST approach sees things differently. A URL should not represent an action, but a thing (called resource in the REST lingo). Since the HTTP protocol (which we are already using) supports verbs, use those verbs to specify what actions to perform on the thing. So, with the REST approach, customer number 12 would be found on URL myapp/customers/12. To view the customer data, you hit the URL with a GET request. To delete it, the same URL, with a DELETE verb. To update it, again, the same URL with a POST verb, and the new content in the request body. For more details about the requirements that a service has to fulfil to be considered truly RESTful, see the Richardson maturity model. The article gives examples, and, more importantly, explains why a (so-called) SOAP service, is a level-0 REST service (although, level-0 means low compliance to this model, it's not offensive, and it is still useful in many cases). @Ashish Kamble 2019-09-17 10:41:56 What do you mean RESTis not web service?? Whats JAX-RSthen?? @blue_note 2019-09-17 10:46:25 @AshishKamble: I provided the link of the rest services specification. 
The official definition contains only the WS-* protocols (roughly the ones we call "SOAP") and rest is not part of it officially @blue_note 2019-09-17 10:47:01 @AshishKamble: Also, note that there's also a JAX-WS, which means "web services", differentiated from "rest services". Anyway, the distinction is not important for any practical purposes, as I also noted. @Bacteria 2015-06-14 19:48:27 SOAP (Simple Object Access Protocol) and REST (Representation State Transfer) both are beautiful in their way. So I am not comparing them. Instead, I am trying to depict the picture, when I preferred to use REST and when SOAP. What is payload? Now, for example, I have to send a Telegram and we all know that the cost of the telegram will depend on some words. So tell me among below mentioned these two messages, which one is cheaper to send? or I know your answer will be the second one although both representing the same message second one is cheaper regarding cost. So I am trying to say that, sending data over the network in JSON format is cheaper than sending it in XML format regarding payload. Here is the first benefit or advantages of REST over SOAP. SOAP only support XML, but REST supports different format like text, JSON, XML, etc. And we already know, if we use Json then definitely we will be in better place regarding payload. Now, SOAP supports the only XML, but it also has its advantages. Really! How? SOAP relies on XML in three ways Envelope – that defines what is in the message and how to process it. A set of encoding rules for data types, and finally the layout of the procedure calls and responses gathered. This envelope is sent via a transport (HTTP/HTTPS), and an RPC (Remote Procedure Call) is executed, and the envelope is returned with information in an XML formatted document. The important point is that one of the advantages of SOAP is the use of the “generic” transport but REST uses HTTP/HTTPS. 
SOAP can use almost any transport to send the request, but REST cannot. So here we have an advantage of using SOAP. As I already mentioned in the above paragraph, "REST uses HTTP/HTTPS", so let's go a bit deeper into these words. When we are talking about REST over HTTP, all security measures applied to HTTP are inherited; this is known as transport-level security, and it secures messages only while they are on the wire. Once a message is delivered on the other side, you don't know how many stages it will have to go through before reaching the real point where the data will be processed, and of course, all those stages could use something different from HTTP. So REST is not completely safe, right? Apart from that, as REST is limited by its HTTP protocol, its transaction support is neither ACID compliant nor can it provide two-phase commit across distributed transactional resources. But SOAP has comprehensive support for both ACID-based transaction management for short-lived transactions and compensation-based transaction management for long-running transactions. It also supports two-phase commit across distributed resources. I am not drawing any conclusion, but I will prefer a SOAP-based web service when security, transactions, etc. are the main concerns. Here is "The Java EE 6 Tutorial", where they have said: a RESTful design may be appropriate when the following conditions are met. Have a look. Hope you enjoyed reading my answer. @Bhargav Nanekalva 2015-08-23 03:35:35 Great answer, but remember REST can use any transport protocol. For example, it can use FTP. @Osama Aftab 2015-09-07 12:45:34 Who said REST can't use SSL? @Bacteria 2015-09-07 16:01:23 @Osama Aftab REST supports SSL, but SOAP supports SSL just like REST; additionally, it also supports WS-Security. @GaTechThomas 2016-11-08 18:59:01 To reference the point about size of XML data: when compression is enabled, XML is quite small.
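The telegram argument above can be checked directly: serialize the same record once as JSON and once as hand-built XML, then compare byte counts. This is an illustrative sketch (the record and tag names are made up, not from the answer):

```python
import json

# The same customer record serialized two ways, to compare payload size.
# The field names are illustrative only.
customer = {"id": 12, "name": "Ada", "city": "London"}

json_payload = json.dumps(customer, separators=(",", ":"))

# Hand-built XML equivalent of the same record.
xml_payload = (
    "<customer>"
    "<id>12</id>"
    "<name>Ada</name>"
    "<city>London</city>"
    "</customer>"
)

# XML carries every field name twice (open and close tag),
# so the JSON form is smaller for the same data.
print(len(json_payload), len(xml_payload))
```

The gap grows with the number of fields, since each XML element repeats its tag name in the closing tag.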
@ThomasRS 2017-04-20 21:51:41 The point about the size of the payload should be deleted; it is such a one-dimensional comparison between JSON and XML, and is only possible to detect in seriously optimized setups, which are few and far between. @Phil Sturgeon 2018-01-05 00:17:44 A lot of these answers entirely forgot to mention hypermedia controls (HATEOAS), which is completely fundamental to REST. A few others touched on it, but didn't really explain it so well. This article should explain the difference between the concepts, without getting into the weeds on specific SOAP features. @cmd 2013-11-09 23:19:50 REST vs SOAP is not the right question to ask. REST, unlike SOAP, is not a protocol. REST is an architectural style and a design for network-based software architectures. REST concepts are commonly applied over HTTP, with operations expressed through HTTP verbs such as GET, POST, PUT and DELETE. @Abdulaziz's question does illuminate the fact that REST and HTTP are often used in tandem. This is primarily due to the simplicity of HTTP and its very natural mapping to RESTful principles. Fundamental REST principles: Client-Server Communication: client-server architectures have a very distinct separation of concerns. All applications built in the RESTful style must also be client-server in principle. Stateless: each client request to the server requires that its state be fully represented. The server must be able to completely understand the client request without using any server context or server session state. It follows that all state must be kept on the client. Cacheable. See this blog post on REST Design Principles for more details on REST and the above stated bullets. EDIT: updated content based on comments. @Pedro Werneck 2013-11-10 00:51:41 REST does not have a predefined set of operations that are CRUD operations. Mapping HTTP methods to CRUD operations blindly is one of the most common misconceptions around REST. The HTTP methods have very well defined behaviors that have nothing to do with CRUD, and REST isn't coupled to HTTP.
You can have a REST API over FTP with nothing but RETR and STOR, for instance. @Pedro Werneck 2013-11-10 00:53:23 Also, what do you mean by 'REST services are idempotent'? As far as I know, you have some HTTP methods that by default are idempotent, and if a particular operation in your service needs idempotence, you should use them, but it doesn't make sense to say the service is idempotent. The service may have resources with actions that may be effected in an idempotent or non-idempotent fashion. @Bruce_Wayne 2015-04-16 18:25:43 @cmd: please remove the fourth point, "A RESTful architecture may use HTTP or SOAP as the underlying communication protocol"; it's misinformation you are conveying. @Pedro Werneck 2013-11-10 00:45:24 Unfortunately, there is a lot of misinformation and there are many misconceptions around REST. Not only do your question and the answer by @cmd reflect those, but so do most of the questions and answers related to the subject on Stack Overflow. Pushing things a little and trying to establish a comparison, the main difference between SOAP and REST is the degree of coupling between client and server implementations. A client is supposed to enter a REST service with zero knowledge of the API, except for the entry point and the media type. In SOAP, the client needs previous knowledge of everything it will be using, or it won't even begin the interaction. Additionally, a REST client can be extended by code-on-demand supplied by the server itself, the classical example being JavaScript code used to drive the interaction with another service on the client-side. I think these are the crucial points to understand what REST is about, and how it differs from SOAP: REST is protocol independent. It's not coupled to HTTP. Pretty much like you can follow an ftp link on a website, a REST application can use any protocol for which there is a standardized URI scheme. REST is not a mapping of CRUD to HTTP methods. Read this answer for a detailed explanation on that.
REST is as standardized as the parts you're using. Security and authentication in HTTP are standardized, so that's what you use when doing REST over HTTP. REST is not REST without hypermedia and HATEOAS. This means that a client only knows the entry point URI and the resources are supposed to return links the client should follow. Those fancy documentation generators that give URI patterns for everything you can do in a REST API miss the point completely. They are not only documenting something that's supposed to be following the standard, but when you do that, you're coupling the client to one particular moment in the evolution of the API, and any changes on the API have to be documented and applied, or it will break. REST is the architectural style of the web itself. When you enter Stack Overflow, you know what a User, a Question and an Answer are, you know the media types, and the website provides you with the links to them. A REST API has to do the same. If we designed the web the way people think REST should be done, instead of having a home page with links to Questions and Answers, we'd have a static documentation explaining that in order to view a question, you have to take the URI stackoverflow.com/questions/<id>, replace id with the Question.id and paste that on your browser. That's nonsense, but that's what many people think REST is. This last point can't be emphasized enough. If your clients are building URIs from templates in documentation and not getting links in the resource representations, that's not REST. Roy Fielding, the author of REST, made it clear on this blog post: REST APIs must be hypertext-driven. With the above in mind, you'll realize that while REST might not be restricted to XML, to do it correctly with any other format you'll have to design and standardize some format for your links. Hyperlinks are standard in XML, but not in JSON. There are draft standards for JSON, like HAL. 
Finally, REST isn't for everyone, and a proof of that is how most people solve their problems very well with the HTTP APIs they mistakenly called REST and never venture beyond that. REST is hard to do sometimes, especially in the beginning, but it pays over time with easier evolution on the server side, and clients' resilience to changes. If you need something done quickly and easily, don't bother about getting REST right. It's probably not what you're looking for. If you need something that will have to stay online for years or even decades, then REST is for you. @Falco 2014-06-02 11:23:52 Really nice answer :D But I have one question regarding your comparison to the SO homepage. How would you implement a search feature in REST? On a homepage you have a search field, and the search word is usually templated into the GET part of the URL, or submitted via POST, which is actually templating a user-generated string into a URL? @Pedro Werneck 2014-06-02 14:40:35 Either one is fine. The issue is how the users get the URLs, not how they use them. They should get the search URL from a link in some other document, not from documentation. The documentation may explain how to use the search resource. @Falco 2014-06-02 15:02:43 So a link with a placeholder in place of the search term is fine? Because the search term is an input from the user? @Bhavesh Agarwal 2014-12-04 04:05:16 "people tend to call REST any HTTP API that isn't SOAP". Can you please elaborate on this point by giving an example of an API over HTTP which is not SOAP and not REST either? @Pedro Werneck 2014-12-04 15:27:48 @BhaveshAgarwal Almost every so-called "REST" API you can find around the internet is an example. The StackExchange API itself is an example. @Pedro Werneck 2015-01-21 16:41:13 @CristiPotlog I never said SOAP is dependent on any particular protocol, I merely emphasize how REST isn't. The second link you sent says REST requires HTTP, which is wrong.
@Orestis 2016-08-11 16:14:30 Let's repeat that once more: HATEOAS is a constraint if you want to call your API RESTful! @Shadrack B. Orina 2016-08-28 06:14:48 Say I have a SOAP client and a REST server; can the SOAP client post to the REST server? @Oleg Sapishchuk 2017-02-27 20:39:46 @PedroWerneck I've read your linked response for "REST is not mapping CRUD to HTTP methods", but I didn't find any explanation there of why REST is not mapping CRUD to HTTP methods, only that the person who created the question did not properly use the HTTP method for his activity. Can you share more information on this topic, please? @Pedro Werneck 2017-02-27 21:37:48 @OlegSapishchuk HTTP methods have specific semantics, very distinct from CRUD operations. @Sachin Kainth 2017-03-14 12:20:50 Pedro, I had the exact same query as @OlegSapishchuk. As far as I understand, CRUD methods do map quite nicely to HTTP methods. Can you elaborate on this please? @Pedro Werneck 2017-03-14 20:43:43 @SachinKainth There's an answer for that here. You can map CRUD ops to HTTP methods, but that's not REST, because it's not the intended semantics of those methods as documented in the RFCs. @Oleg Sapishchuk 2017-03-15 13:18:21 @PedroWerneck The biggest joke here is the fact that Google, Twitter and other top companies call some of their services REST APIs while they are not REST, as the HATEOAS principle was not followed O_o. @Hoàng Đăng 2017-05-31 04:25:08 @PedroWerneck As you said, "stackoverflow.com/questions/<id>, replace id with the Question.id"; that is not REST because it returns the whole site, whereas RESTful services only return data in a format (JSON, XML...). @Pedro Werneck 2017-05-31 23:06:48 @HoàngĐăng Not at all. There's no REST constraint for that. You should return whatever format the client asked for in the Accept header, or 406 Not Acceptable. @Rajan Chauhan 2017-10-28 17:55:57 The last 4 lines are a gem and should be fully understood by anyone in development. Doing pure REST is time consuming but gives rewards in the longer run.
So it is better for medium or big sized projects; not good for prototyping and small projects. @aod 2017-11-14 07:23:35 What if there are lots of links the response should return so that HATEOAS is satisfied? Is it acceptable to have a big response for a small demand? For example, in an online shop, I would like to fetch a good's thumbnail information. However, with this request, links for details, add to cart, display comments, add comment, similar goods, etc. come along also. While I am aware of all these links after the first interaction, is it necessary for the server to send them with each request? @Pedro Werneck 2017-11-14 17:12:13 @aod The goal of REST isn't efficient communication; on the contrary, it trades efficiency for long-term compatibility and evolvability. That's why caching is an important part of REST. If you're serious about HATEOAS, you need to invest some time in setting up cache control headers and orienting clients to use them. @Rex 2017-03-21 12:47:36 Difference between REST and SOAP. For more details please see here. @Drazen Bjelovuk 2019-02-04 20:03:38 Do 3 and 6 under REST not contradict? @Rex 2019-02-13 04:04:30 We just compare their features with each other. @Quan Nguyen 2016-09-20 08:02:14 An addition: a mistake that's often made when approaching REST is to think of it as "web services with URLs", that is, to think of REST as another remote procedure call (RPC) mechanism, like SOAP, but invoked through plain HTTP URLs and without SOAP's hefty XML namespaces. On the contrary, REST has little to do with RPC. Whereas RPC is service oriented and focused on actions and verbs, REST is resource oriented, emphasizing the things and nouns that comprise an application. @marvelTracker 2016-01-17 00:17:05 IMHO you can't compare SOAP and REST, as those are two different things. SOAP is a protocol and REST is a software architectural pattern. There are a lot of misconceptions on the internet about SOAP vs REST.
SOAP defines an XML-based message format that web-service-enabled applications use to communicate with each other over the internet. In order to do that, the applications need prior knowledge of the message contract, data types, etc. REST represents the state (as resources) of a server from a URL. It is stateless, and clients should not need prior knowledge to interact with the server beyond the understanding of hypermedia.
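To make the resource-plus-verb idea from the thread concrete, here is a minimal sketch (all names are illustrative, not from any framework) that dispatches a (verb, resource) pair to a handler, instead of encoding the action in the URL as the RPC style does:

```python
# Minimal sketch of resource-oriented dispatch: one URL per resource,
# with the HTTP verb selecting the action. Illustrative names only.
customers = {12: {"name": "Ada"}}

def get_customer(cid):
    return customers.get(cid)

def delete_customer(cid):
    return customers.pop(cid, None)

def update_customer(cid, data):
    customers[cid] = data
    return data

# (verb, resource) -> handler, rather than verb-in-the-URL RPC style.
routes = {
    ("GET", "customers"): get_customer,
    ("DELETE", "customers"): delete_customer,
    ("POST", "customers"): update_customer,
}

def dispatch(method, path, body=None):
    resource, _, raw_id = path.strip("/").partition("/")
    handler = routes[(method, resource)]
    cid = int(raw_id)
    return handler(cid, body) if body is not None else handler(cid)

print(dispatch("GET", "/customers/12"))  # view customer 12
```

Note this only illustrates the URL-to-resource mapping; a truly RESTful service would additionally return hypermedia links in each representation, as the answers above stress.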
https://tutel.me/c/programming/questions/19884295/soap+vs+rest+differences
how do you connect to mysql database
Hey guys, just curious if anyone knows a good reference site that talks about how to connect to a MySQL database. Also, if I do connect to a MySQL database after switching over to Access, would just the connection be different? Thanks.
Google can be your friend here. No, not just the connection will change. The query syntax will also differ slightly depending on the usage.
There are 2 methods for getting ASP.NET to connect to a MySQL database. 1) Use the MySQL-provided provider:
Code:
using MySql.Data.MySqlClient;
........
MySqlConnection myConnection = new MySqlConnection("server=localhost;user id=root;password=XXXX;database=baby");
string sql = "select * from admin";
MySqlDataAdapter myda = new MySqlDataAdapter(sql, myConnection);
DataSet mydataset = new DataSet();
myda.Fill(mydataset, "admin");
mydatagrid.DataSource = mydataset;
mydatagrid.DataBind();
http://www.sitepoint.com/forums/showthread.php?614809-Concerning-the-future-of-LINQ-to-SQL&goto=nextnewest
In many programs, to do commands you hit Ctrl-A or something, e.g. to save in console mode. How do you do this? I have no clue at all. I'm thinking it's done with getch() somehow, but I'm not sure.
Control-keypress "commands" are a way of sending a signal to a running process. A signal is a type of inter-process communication that alerts the program to an exceptional condition and tells it to do something (usually quit now, save and quit, or set some flag). I'm not sure how to handle signals in DOS, but if you query a decent search engine for trapping signals in DOS or the like, you should be able to find something.
starX
Yes, it can be done with getch(). getch() returns the ASCII value of the character if you press a normal key. When you press something like Ctrl+A, getch() first returns zero, so you must call getch() once more to get the special scancode number. If you want to handle keys like Ctrl+A or cursor keys, this code tells you which scancode corresponds to your Ctrl+something:
#include <conio.h>
#include <stdio.h>

int main()
{
    char c;
    printf("Press something!\n");
    if ((c = getch()) == 0)
    {
        c = getch(); /* getch() must be called once more to get the scancode */
        printf("You pressed something special\n");
        printf("Scancode: %d", c);
    }
    else
    {
        printf("You pressed a normal key\n");
        printf("ASCII value: %d", c);
    }
    return 0;
}
http://cboard.cprogramming.com/brief-history-cprogramming-com/5350-cntrl-getch-printable-thread.html
Anyone know how to detect if a Point(x, y) is in a Line(x1, y1, x2, y2) or not? Quite not good at math, please... please...
Last edited by audinue; 01-01-2009 at 11:04 AM.
Just GET it OFF out my mind!!
If the slope from (x1,y1) to (x,y) is the same as the slope from (x1,y1) to (x2,y2). Breaking out the formula for slope gives (x-x1)*(y2-y1) == (x2-x1)*(y-y1).
Code:
var line = {x1:10, y1:10, x2:100, y2:100};
_root.lineStyle(1);
_root.moveTo(line.x1, line.y1);
_root.lineTo(line.x2, line.y2);
onEnterFrame = function () {
    var point = {x:_root._xmouse, y:_root._ymouse};
    hit_mc._visible = (point.x-line.x1)*(line.y2-line.y1) == (line.x2-line.x1)*(point.y-line.y1);
};
Last edited by audinue; 01-01-2009 at 11:16 AM.
You also need to check that x >= x1 && x <= x2 to make sure it is on the line. The slope extends to +/- infinity, so you need to test for more than just being on the slope. The point is that if you want the point to be on the line segment strictly between (x1, y1) and (x2, y2) [as opposed to the line between the points that goes on forever], then you need to check that x is between x1 and x2 (i.e., that (x1-x)*(x2-x) < 0).
What the...
Code:
boolean isHit(Point point, Line line) {
    return ((point.getX() >= line.getX1()) && (point.getX() <= line.getX2())
         && (point.getY() >= line.getY1()) && (point.getY() <= line.getY2()))
         && ((point.getX() - line.getX1()) * (line.getY2() - line.getY1())
          == (line.getX2() - line.getX1()) * (point.getY() - line.getY1()));
}
Here's a version I came up with:
Code:
int PointIsOnLine(int x1, int y1, int x2, int y2, int x3, int y3)
{
    if(x1 == x2 && y1 == y2) /* Not a line */
        return (x1 == x3 && y1 == y3);
    if(x1 == x2) /* Vertical line: gradient undefined */
        return (x3 == x1);
    if(y1 == y2) /* Horizontal line: zero gradient */
        return (y3 == y1);

    float gradient = (float)(y2 - y1) / (float)(x2 - x1);
    int x01 = x1 - ((float)y1 / gradient + 0.5);
    int x02 = x3 - ((float)y3 / gradient + 0.5);
    return (!(x02 - x01));
}

int PointIsInLine(int x1, int y1, int x2, int y2, int x3, int y3)
{
    if(!PointIsOnLine(x1, y1, x2, y2, x3, y3))
        return 0;
    if(x3 > MAX(x1, x2) || x3 < MIN(x1, x2))
        return 0;
    return 1;
}
Hrm. So if your line is from (0, 0) to (100, 1)... There are absolutely no points which are on that line which have integer coordinates, except for the endpoints. So using this sort of algorithm, you'd decide that nothing at all ever intersects that line, even though it's over 100 units long. Using integer coordinates isn't really the best choice for this sort of thing, I think.
Code:
//try
//{
    if (a)
        do { f( b); } while(1);
    else
        do { f(!b); } while(1);
//}
Yeah, the co-ords get rounded: I was using integer maths for this as I was checking pixel values in 2D to see if they sat on a line. I guess generally it would be better to do everything in floating point; that would be a very simple modification.
Code:
int x01 = x1 - ((float)y1 / gradient +0.5);
int x02 = x3 - ((float)y3 / gradient +0.5);
Edit: although actually it looks as if it's rounding the wrong way, lol.
Perhaps it should have been:
Code:
int x01 = x1 - ((float)y1 / gradient) +0.5;
int x02 = x3 - ((float)y3 / gradient) +0.5;
this might fix an old bug I never worked out...
Last edited by mike_g; 01-02-2009 at 05:04 PM.
But if you do it with floating point, you'll want to do a delta comparison, since floating points rarely compare exactly equal, even when they should.
All the buzzt!
CornedBee
"There is not now, nor has there ever been, nor will there ever be, any programming language in which it is the least bit difficult to write bad code." - Flon's Law
Yeah, that's one of the reasons I have an irrational fear of using floats, lol.
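Putting the thread's pieces together, here is a small sketch (not code from the thread) that combines the cross-product form of the slope test with a bounding-box check, using the delta comparison suggested above for floating point:

```python
def point_on_segment(x1, y1, x2, y2, x, y, eps=1e-9):
    """True if (x, y) lies on the segment from (x1, y1) to (x2, y2)."""
    # Collinearity: the slope test rearranged to avoid division,
    # so vertical segments need no special case.
    cross = (x - x1) * (y2 - y1) - (x2 - x1) * (y - y1)
    if abs(cross) > eps:
        return False
    # Bounding-box check restricts the test to the segment,
    # not the infinite line through the two points.
    return (min(x1, x2) - eps <= x <= max(x1, x2) + eps
            and min(y1, y2) - eps <= y <= max(y1, y2) + eps)
```

Working in floats with a tolerance sidesteps the integer-rounding problem discussed above, where a segment like (0, 0) to (100, 1) has no interior points with integer coordinates at all.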
http://cboard.cprogramming.com/game-programming/110692-need-help-about-point-line.html
ZREMOVE Synopsis ZREMOVE:pc lineref1:lineref2 ,... ZR:pc lineref1:lineref2 ,... Arguments Description The ZREMOVE command operates on the currently loaded routine for the current process. Use ZLOAD to load the current routine. ZLOAD loads the INT code version of a routine. INT code does not count or include preprocessor statements. INT code does not count or include completely blank lines from the MAC version of the routine, whether in the source code or within a multiline comment. Once a routine is loaded, it becomes the currently loaded routine for the current process in all namespaces. Therefore, you can insert or remove lines, display, execute, or unload the currently loaded routine from any namespace, not just the namespace from which it was loaded. You can only use the ZREMOVE command when you enter it from the Terminal or when you call it using an XECUTE command or a $XECUTE function. Specifying ZREMOVE in the body of a routine results in a compile error. Any attempt to execute ZREMOVE from within a routine also generates an error. ZREMOVE has two forms: Without an argument unloads the current routine. With arguments removes one or more lines of ObjectScript source code from the current routine. Without an Argument ZREMOVE without an argument removes (unloads) the currently loaded routine. Following an argumentless ZREMOVE, $ZNAME returns the empty string rather than the name of the current routine, and ZPRINT displays no lines. Because the routine has been removed, you cannot use ZSAVE to save the routine; attempting to do so results in a <COMMAND> error. The following Terminal session shows this operation: USER>ZLOAD myroutine USER>WRITE $ZNAME myroutine USER>ZREMOVE USER>WRITE $ZNAME USER> An argumentless ZREMOVE can specify a postconditional expression. ZREMOVE with an argument can remove all the lines of the current routine, but does not remove the current routine itself. 
For example, ZREMOVE +1:NonexistentLabel removes all of the lines of the current routine, but you can use ZINSERT to insert new lines and use ZSAVE to save the routine. With Arguments ZREMOVE with arguments erases code lines in the current routine. ZREMOVE lineref1 erases the specified line. ZREMOVE lineref1:lineref2 erases the range for lines starting with the first line reference and ending with the second line reference, inclusive. It advances the edit pointer to immediately after the removed line(s). Therefore a ZREMOVE lineref1 followed by a ZINSERT replaces the specified line. ZREMOVE can remove multiple lines (or multiple ranges) of ObjectScript source code by specifying a comma-separated series of any combination of lineref1 or lineref1:lineref2 arguments. Each specified line or range of lines of code is removed as a separate remove operation in the order specified. You can use ZPRINT to display multiple lines of the currently loaded routine. You can execute the current routine using the DO command. Only the local copy of the routine is affected, not the routine as stored on disk. To store the modified code, you must use the ZSAVE command to save the routine. The following Terminal session shows this operation. This example uses a dummy routine (^myroutine) in which each line sets a variable to a string naming that line: USER>ZLOAD myroutine USER>ZPRINT +8 WRITE "this is line 8",! USER>ZREMOVE +8 USER>PRINT +8 WRITE "this is line 9",! USER>. lineref1 The line to be removed, or the first in a range of lines to be removed. It can take any of the following formats: A label may be longer than 31 characters, but must be unique within the first 31 characters. ZREMOVE matches only the first 31 characters of a specified label. Label names are case-sensitive, and may contain Unicode characters. You can use lineref1 to specify a single line of code to remove. 
You specify the code line either as an offset from the beginning of the routine (+lineref1) or as an offset from a specified label (label+lineref1). ZREMOVE +7: removes the 7th line counting from the beginning of the routine. ZREMOVE +0: performs no operation, generates no error. ZREMOVE +999: if 999 is greater than the number of lines in the routine, performs no operation, generates no error. ZREMOVE Test1: removes the label line Test1. ZREMOVE Test1+0: removes the label line Test1. ZREMOVE Test1+1: removes the first line following label line Test1. ZREMOVE Test1+999: removes the 999th line following label line Test1. This line may be in another labeled module. If 999 is greater than the number of lines from label Test1 to the end of the routine, performs no operation, generates no error. The INT code lines include all labels, comments, and whitespace found in the MAC version of the routine, with the exception that entirely blank lines in a MAC routine, which are removed by the compiler, are neither displayed nor counted in INT code. Blank lines in a multi-line comment are also removed. The #;, ##;, and /// comments in the MAC code may not appear in the INT code, and thus may affect line counts and offsets. Refer to Comments in MAC Code for Routines and Methods for further details. lineref2 The last line in a range of lines to be removed. Specify lineref2 in any of the formats used for lineref1. The colon prefix (:) is mandatory. You specify a range of lines as +lineref1:+lineref2. ZREMOVE removes the range of lines, inclusive of lineref1 and lineref2. If lineref1 and lineref2 refer to the same line, ZREMOVE removes that single line. If lineref2 appears earlier in the routine code than lineref1, no operation is performed and no error is generated. For example: ZREMOVE +7:+2, ZREMOVE Test1+1:Test1, ZREMOVE Test2:Test1 would perform no operation. Use caution when specifying a label name in lineref2. Label names are case-sensitive. 
If lineref2 contains a label name that does not exist in the routine, ZREMOVE removes the range of lines from lineref1 through the end of the routine. Examples This command erases the fourth line within the current routine. ZREMOVE +4 This command erases the sixth line after the label Test1; Test1 is counted as the first line. ZREMOVE Test1+6 This command erases lines three through ten, inclusive, within the current routine. ZREMOVE +3:+10 This command erases the label line Test1 through the line that immediately follows it, within the current routine. ZREMOVE Test1:Test1+1 This command erases all of the line from label Test1 through label Test2, inclusive of both labels, within the current routine. ZREMOVE Test1:Test2
https://docs.intersystems.com/healthconnectlatest/csp/docbook/DocBook.UI.Page.cls?KEY=RCOS_CZREMOVE
The TCP VMOD contains functions to control TCP congestion control algorithms, set pacing (rate limiting) and perform logging of protocol-related information.
import std;
import tcp;

sub vcl_recv {
    # Limit all clients to 1000 KB/s.
    tcp.set_socket_pace(1000);
}
import std;
import tcp;

sub vcl_recv {
    set req.http.X-Tcp = tcp.congestion_algorithm("bbr");
}
Here, the X-Tcp header field will be set to 0 when changing the congestion control algorithm succeeded. Otherwise, it will be -1, indicating an error. See the tcp.congestion_algorithm() function for more information about congestion control algorithms.
INT congestion_algorithm(STRING algo)
Set the client socket congestion control algorithm to algo. Returns 0 on success, and -1 on error.
sub vcl_recv {
    set req.http.x-tcp = tcp.congestion_algorithm("cubic");
}
To see your available algorithms:
# sysctl net.ipv4.tcp_available_congestion_control
net.ipv4.tcp_available_congestion_control = reno cubic bbr
The bbr congestion control algorithm is fairly new and requires kernel version 4.9.0 or later. See:
VOID dump_info()
Write the contents of the TCP_INFO data structure into varnishlog.
sub vcl_recv {
    tcp.dump_info();
}
The varnishlog output could look like this:
REAL get_estimated_rtt()
Get the estimated round-trip time for the client socket, measured in milliseconds.
sub vcl_recv {
    if (tcp.get_estimated_rtt() > 300) {
        std.log("Client is far away!");
    }
}
VOID set_socket_pace(INT pace)
Socket pacing is a Linux method for rate limiting TCP connections in a network-friendly way. Controls TCP rate limiting for the client connection, where pace is measured in KB/s. The outgoing network interface used must be configured with a supported scheduler, such as fq.
sub vcl_recv {
    # Set client max bandwidth to 1000kb/s for this client,
    # as long as the current network scheduler supports it:
    if (tcp.set_socket_pace(1000) != 0) {
        std.log("Failed to set pacing for client socket!");
    }
}
Servers utilizing rate limiting must change their network scheduler. This can be changed with a sysctl setting:
net.core.default_qdisc=fq
See:
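For comparison, the same per-connection pacing mechanism is reachable outside Varnish through the Linux SO_MAX_PACING_RATE socket option, which takes bytes per second. The sketch below is an assumption-laden illustration, not part of the VMOD: the option number 47 is the Linux asm-generic value (Python's socket module does not export a named constant for it), and it assumes the VMOD's KB means 1024 bytes:

```python
import socket

# Linux socket option number for SO_MAX_PACING_RATE (asm-generic/socket.h).
# Assumption: not exported by Python's socket module, so defined here.
SO_MAX_PACING_RATE = 47

def pacing_rate_bytes(kb_per_s):
    """Convert a KB/s pace (assuming 1 KB = 1024 bytes) to bytes/s."""
    return kb_per_s * 1024

def set_socket_pace(sock, kb_per_s):
    # Same spirit as tcp.set_socket_pace(); Linux-only, and the
    # interface must use a pacing-aware qdisc such as fq.
    sock.setsockopt(socket.SOL_SOCKET, SO_MAX_PACING_RATE,
                    pacing_rate_bytes(kb_per_s))
```

So tcp.set_socket_pace(1000) corresponds to a kernel pacing rate of 1,024,000 bytes per second on that connection.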
https://docs.varnish-software.com/varnish-cache-plus/vmods/tcp/
Audio::Beep - a module to use your computer beeper in fancy ways

#functional simple way
use Audio::Beep;
beep($freq, $milliseconds);

#OO more musical way
use Audio::Beep;
my $beeper = Audio::Beep->new();

# lilypond subset syntax accepted
# relative notation is the default
# (now correctly implemented)
my $music = "g' f bes' c8 f d4 c8 f d4 bes c g f2";
# Pictures at an Exhibition by Modest Mussorgsky

$beeper->play( $music );

Plays a customizable beep out of your computer beeper. FREQUENCY is in Hz. Defaults to 440. DURATION is in milliseconds. Defaults to 100.
Returns a new "beeper" object. The available options for the new method follow; they are passed in hash fashion. You are free to initialize your player object and then give it to the Audio::Beep object. Player objects come from Audio::Beep submodules (like Audio::Beep::Linux::beep). If you're lazy (as any good programmer should be) you can pass a string as a player, like "Audio::Beep::Linux::PP" or even just "Linux::PP": the method will prepend the Audio::Beep namespace, require the module and call the new method on it for you. The new method will try to look up the best player on your platform if you don't specify one. So the following is all valid:

use Audio::Beep;

#super lazy (should do the right thing most of the time)
my $beeper = Audio::Beep->new();

#still lazy
my $beeper2 = Audio::Beep->new(player => 'Linux::PP');

#medium lazy
my $beeper3 = Audio::Beep->new(
    player => 'Audio::Beep::Win32::API'
);

#not so lazy, but more versatile
require Audio::Beep::Linux::beep;
my $beeper4 = Audio::Beep->new(
    player => Audio::Beep::Linux::beep->new(
        path => '/home/foo/bin/beep'
    )
);

Sets the rest in milliseconds between every sound played (and even pause). This is useful for users whose computer beeper has problems and would otherwise just stick to the first sound played. For example, on my PowerBook G3 I have to set this to around 120 milliseconds. That way I can still hear some music; otherwise it is just one long single beep.
Plays the "music" written in $music. The accepted format is a subset of Lilypond syntax. The string is a space separated list of notes to play. See the "NOTATION" section below for more info.

Sets the player object that will be used to play your music. See the player option above at the new method for more info. With no parameter it just gives you back the current player.

Sets the extra rest between each note. See the rest option above at the new method for more info. With no parameter it gives you back the current rest.

The defaults at start are middle octave C and a quarter length. Standard notation is the relative notation. Here is an explanation from the Lilypond documentation:

    If no octave changing marks are used, the basic interval between this and the last note is always taken to be a fourth or less. (This distance is determined without regarding alterations; a fisis following a ceses will be put above the ceses.) The octave changing marks ' and , can be added to raise or lower the pitch by an extra octave.

You can switch from relative to non-relative notation (in which you specify the octave for every note) using the \norel and \rel commands (see below).

Every note has the following structure:

    [note][flat|sharp][octave][duration][dots]

NB: the previous note's duration is used if omitted. "Flatness", "sharpness" and "dottiness" are reset after each note.

A note can be any of [c d e f g a b], or [r] for a rest. A sharp note is produced by appending "is" to the note itself (like "cis" for a C#). A flat note is produced by adding "es" or "s" (so "aes" and "as" are both an A flat). A ' (apostrophe) raises one octave, while a , (comma) lowers it.

A duration is expressed with a number. A 4 is a beat, a 1 is a whole 4/4 measure. The higher the number, the shorter the note. You can add dots after the duration number to add half its length. So a4. is an A note lasting 1/4 + 1/8, and gis2.. is a G# lasting 7/8 (1/2 + 1/4 + 1/8).

An r note means a rest. You can still use the duration and dots parameters.
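The dot arithmetic above can be sketched outside the module. This Python helper (not part of Audio::Beep, just an illustration of the rule) computes a note's length as a fraction of a whole 4/4 measure:

```python
from fractions import Fraction

def note_length(duration, dots=0):
    """Length of a note as a fraction of a whole (4/4) measure.

    duration: Lilypond-style number (1 = whole, 4 = quarter, ...).
    Each dot adds half of the previously added value, per the
    notation rules described above.
    """
    base = Fraction(1, duration)
    total = base
    add = base
    for _ in range(dots):
        add /= 2
        total += add
    return total

# a4.   -> 1/4 + 1/8 = 3/8
# gis2.. -> 1/2 + 1/4 + 1/8 = 7/8
```

This matches the worked examples in the text: a4. is 3/8 of a measure and gis2.. is 7/8.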
Special commands always begin with a "\". They change the behavior of the parser or of the music played. Unlike in the original Lilypond syntax, these commands are embedded between notes, so they have a slightly different syntax.

You can use this option to change the tempo of the music. The only parameter you can use is a number following the bpm string (like "\bpm144"). BPM stands for Beats Per Minute. The default is 120 BPM. You can also invoke this command as \tempo.

Switches the relative mode off. From here onward you always have to specify the octave where the note is.

Switches the relative mode on. This is the default.

You can transpose all your music up or down some octaves. ' (apostrophe) raises an octave, , (comma) lowers it. This has effect only if you are in non-relative mode.

You can embed comments in your music the Perl way: everything after a # will be ignored until end of line.

    my $scale = <<'EOS';
    \rel \bpm144
    c d e f g a b c2. r4   # a scale going up
    c b a g f e d c1       # and then down
    EOS

    my $music = <<'EOM';
    # a Smashing Pumpkins tune
    \bpm90 \norel \transpose''
    d8 a, e a, d a, fis16 d a,8
    d a, e a, d a, fis16 d a,8
    EOM

    my $love_will_tear_us_apart = <<'EOLOVE';
    # a happier tune
    \bpm160
    d'8 e1 fis4 g8 fis4 e8 d4 b2.. d8 a2..
    d8 e1 fis4 g8 fis4 e8 d4 b2.. d8 a1
    EOLOVE

There should be extra examples in the "music" directory of this tarball.

    # a louder beep
    perl -MAudio::Beep -ne 'print and beep(550, 1000) if /ERROR/i' logfile

    # turn your PC in Hofmann mode (courtesy of Thomas Klausner)
    perl -MAudio::Beep -e 'beep(21 + rand 1000, rand 300) while 1'

    # your new music player
    perl -mAudio::Beep -0777e 'Audio::Beep->new->play(<>)' musicfile

Requires either the beep program by Johnathan Nightingale (you should find sources in this tarball) SUID root, or you to be root (that's because we need write access to the /dev/console device).
If you don't have the beep program, this library will also assume some kernel constants which may vary from kernel to kernel (or not, I'm no kernel expert). Anyway, this was tested on a 2.4.20 kernel compiled for i386 and it has worked with all 2.4 kernels since. It also works with the 2.6 kernel series. With 2.4 kernels I have problems on my PowerBook G3 (it plays a continuous single beep). See the rest method if you'd like to play something anyway.

Requires Windows NT, 2000 or XP and the Win32::API module. You can find sources on CPAN or you can install it using the ActiveState ppm. No support is available for Windows 95, 98 and ME yet: that would require some assembler and an XS module.

IMPORTANT! This IS NOT TESTED ON BSD! It may work, it may not. Try it, and let me know what you get. BTW, you need the beep program written by Andrew Stevenson. I found it at , but you can even find it at

If you are a developer interested in having Audio::Beep working on your platform, you should think about writing a backend module. A backend module for Beep should offer just a couple of methods.

NB: FREQUENCY is in Hertz. DURATION is in milliseconds.

This is kinda obvious. Take in the options you like. Keep the hash fashion for parameters, thanks.

Plays a single sound.

Rests a DURATION amount of time.

This module works for me, but if someone wants to help, here is some cool stuff to do:

- an XS Windoze backend (look at the Prima project for some useful code)
- test this on BSD (cause it's not tested yet!)

Some, of course.

This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
http://search.cpan.org/~giulienk/Audio-Beep-0.11/Beep.pod
CC-MAIN-2014-10
refinedweb
1,377
73.47
Get Status of a Geocode Job

Use the following URL to get the status of a geocode job. The URL is specified in a link field with an attribute of self. For more information, see Geocode Dataflow Response Description.

This example requests resource information for the job with an ID of e14b1d9bd65c4b9d99d267bbb8102ccf that was created by using the Bing Maps Key b1c323ea234b1c323ea234b1c323ea234.

This URL supports the following response formats.

JSON: application/json
XML: application/xml

For information about the response, see Geocode Dataflow Response Description.

The following code shows how to get the status of a geocode job. This code is part of a complete Geocode Dataflow code sample. To view the complete code sample, see Geocode Dataflow Sample Code. You may also want to read the Geocode Dataflow Walkthrough to get a step-by-step description of how to use the Geocode Dataflow. The walkthrough includes example URLs and HTTP responses.

//Checks the status of a dataflow job and defines the URLs to use to download results when the job is completed.
//Parameters:
//   dataflowJobLocation: The URL to use to check status for a job.
//   key: The Bing Maps Key for this job. The same key is used to create the job and download results.
//Return value: A DownloadDetails object that contains the status of the geocode dataflow job (Completed, Pending, Aborted).
// When the status is set to Completed, DownloadDetails also contains the links to download the results
static DownloadDetails CheckStatus(string dataflowJobLocation, string key)
{
    DownloadDetails statusDetails = new DownloadDetails();
    statusDetails.jobStatus = "Pending";

    //Build the HTTP Request to get job status
    UriBuilder uriBuilder = new UriBuilder(dataflowJobLocation + @"?key=" + key + "&output=xml");
    HttpWebRequest request = (HttpWebRequest)WebRequest.Create(uriBuilder.Uri);
    request.Method = "GET";

    //Submit the request and read the response to get job status and to retrieve the links for
    //  downloading the job results
    //Note: The following conditional statements make use of the fact that the 'Status' field will
    //  always appear after the 'Link' fields in the HTTP response.
    using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
    {
        if (response.StatusCode != HttpStatusCode.OK)
            throw new Exception("An HTTP error status code was encountered when checking job status.");

        using (Stream receiveStream = response.GetResponseStream())
        {
            XmlTextReader reader = new XmlTextReader(receiveStream);
            while (reader.Read())
            {
                if (reader.IsStartElement())
                {
                    if (reader.Name.Equals("Status"))
                    {
                        //return job status
                        statusDetails.jobStatus = reader.ReadString();
                        return (statusDetails);
                    }
                    else if (reader.Name.Equals("Link"))
                    {
                        //Set the URL location values for retrieving
                        //  successful and failed job results
                        reader.MoveToFirstAttribute();
                        if (reader.Value.Equals("output"))
                        {
                            reader.MoveToNextAttribute();
                            if (reader.Value.Equals("succeeded"))
                            {
                                statusDetails.suceededlink = reader.ReadString();
                            }
                            else if (reader.Value.Equals("failed"))
                            {
                                statusDetails.failedlink = reader.ReadString();
                            }
                        }
                    }
                }
            }
        }
    }
    return (statusDetails);
}

When the request is successful, the following HTTP status code is returned.

200

When the request is not successful, the response returns one of the following HTTP status codes.

400
500
503
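The same Status/Link scan can be sketched in Python with the standard library. The XML fragment below is illustrative only (real Geocode Dataflow responses are namespaced and the attribute names may differ), but the logic mirrors the C# sample above, which keys on the attribute values "output", "succeeded" and "failed":

```python
import xml.etree.ElementTree as ET

# Hypothetical, un-namespaced response fragment for illustration.
SAMPLE = """<Job>
  <Link>https://example.invalid/self</Link>
  <Link role="output" name="succeeded">https://example.invalid/ok</Link>
  <Link role="output" name="failed">https://example.invalid/bad</Link>
  <Status>Completed</Status>
</Job>"""

def parse_status(xml_text):
    """Extract job status and result-download links, keying on the
    attribute values the C# sample inspects."""
    details = {"jobStatus": "Pending", "succeededLink": None, "failedLink": None}
    root = ET.fromstring(xml_text)
    for elem in root:
        if elem.tag == "Status":
            details["jobStatus"] = elem.text
        elif elem.tag == "Link":
            attrs = list(elem.attrib.values())
            if "output" in attrs:
                if "succeeded" in attrs:
                    details["succeededLink"] = elem.text
                elif "failed" in attrs:
                    details["failedLink"] = elem.text
    return details
```

In a real client you would fetch the job URL (with ?key=...&output=xml appended, as in the C# code) and feed the body to parse_status, polling until jobStatus is no longer Pending.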
https://msdn.microsoft.com/en-us/library/ff701728.aspx
Product interface.

Ctor. Definition at line 49 of file Product.cc.

Dtor. Definition at line 58 of file Product.cc.

The reference package providing the product metadata, if such a package exists. Definition at line 63 of file Product.cc.

For installed products the name of the corresponding /etc/products.d entry. Definition at line 116 of file Product.cc.

List of packages included in older versions of this product and now dropped. This evaluates the referencePackage weakremover namespace. It actually returns a CapabilitySet, because we support dropping specific versions or version ranges of a package. Use sat::WhatProvides to get the actually installed and available packages matching this list. Definition at line 145 of file Product.cc.

Array of installed Products that would be replaced by installing this one. Definition at line 119 of file Product.cc.

Vendor specific string denoting the product line. Definition at line 148 of file Product.cc.

Untranslated short name like SLES 10 (fallback: name). Definition at line 153 of file Product.cc.

The product flavor (LiveCD Demo, FTP edition,...). Definition at line 161 of file Product.cc.

Get the product type. Well, in an ideal world there is only one base product. It's the installed product denoted by a symlink in /etc/products.d. Definition at line 190 of file Product.cc.

The product flags. Definition at line 193 of file Product.cc.

The date when this Product goes out of support as indicated by its metadata. Use hasEndOfLife if it's important to distinguish whether the value is not defined in the metadata, or defined but empty/invalid/TBD. Definition at line 200 of file Product.cc.

Return whether an EndOfLife value is actually defined in the metadata. A missing value (false) usually indicates that there will be no EOL, while an empty/invalid value indicates that there will be an EOL date, but it's not yet known (FATE#320699). Definition at line 203 of file Product.cc.

Definition at line 206 of file Product.cc.
ContentIdentifier of required update repositories. Definition at line 216 of file Product.cc.

Whether cident_r is listed as a required update repository. Definition at line 229 of file Product.cc.

This is the installed product that is also targeted by the /etc/products.d/baseproduct symlink. Definition at line 240 of file Product.cc.

This is the register.target attribute of a product. Used for registration and filtering service repos. Definition at line 243 of file Product.cc.

This is the register.release attribute of an installed product. Used for registration. Definition at line 246 of file Product.cc.

This is the register.flavor attribute of a product. Used for registration. Definition at line 249 of file Product.cc.

Retrieve URLs flagged with key_r for this product. This is the most common interface. There are convenience methods for well-known flags like "releasenotes", "updateurls", "extraurls", "optionalurls" and "smolt" below. Definition at line 254 of file Product.cc.

The URL to download the release notes for this product. Definition at line 282 of file Product.cc.

The URL for registration. Definition at line 283 of file Product.cc.

The URL for SMOLT. Definition at line 284 of file Product.cc.

Online updates for the product. They are complementary, not alternatives. #163192 Definition at line 285 of file Product.cc.

Additional software for the product. They are complementary, not alternatives. Definition at line 286 of file Product.cc.

Optional software for the product (for example, non-OSS repositories). They are complementary, not alternatives. Definition at line 287 of file Product.cc.

Directly create a certain kind of ResObject from sat::Solvable. If the sat::Solvable's kind is not appropriate, a NULL pointer is returned. Definition at line 118 of file ResObject.h.
https://doc.opensuse.org/projects/libzypp/HEAD/classzypp_1_1Product.html
weakref and circular references: should I really care?

Wednesday, May 12, 2010

While Python has a garbage collector?

Update: By some strange brain failure I seem to have written "imports" rather than "references" in the title originally. They are obviously a bad thing.

3 comments:

Stephan Deibel said...

In my experience circular import is mostly a problem of actual code breakage:

    A.py:
        import B
        def x(): pass

    B.py:
        import A
        A.x()

The call A.x() fails b/c A is not yet fully loaded. It would work, however, if "def x()" were above "import B" in module A.py.

Unless you reload or unload modules, there's no real issue w/ the references being circular for modules. They just persist for the life of the process.

So I think the bottom line is that in many types of code you don't need to care, but there are cases where you do need to pay attention to object life cycle in one way or another.

Floris Bruynooghe said...

Firstly: sorry about the imports thing, I didn't mean to write that. The problems of having circular imports are obvious.

I guess a library needs to be kinder than an application here, and hence try harder to avoid creating cycles or provide ways to break them, so that the application writer can decide to let the gc do its work or break them manually.

Michael Foord said...

Well, yes - it would be nice if you didn't have to worry about them. In general you *don't* have to worry about them (except that garbage collection is now non-deterministic - but this is the case for all *good* implementations of Python anyway ;-). If you aren't using objects with __del__ then don't worry about it.

PyPy, and I assume also Jython and IronPython, which both use the garbage collection mechanisms of their underlying platforms, *will* collect cycles like this by arbitrarily breaking the cycles. Avoiding cycles is one of those 'good practices' that is not a hard rule.
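The trade-off discussed in the post and comments can be demonstrated in a few lines (CPython behaviour):

```python
import gc
import weakref

class Node:
    def __init__(self):
        self.other = None

# A reference cycle: reference counting alone can never reclaim
# these two objects...
a, b = Node(), Node()
a.other, b.other = b, a
cycle_probe = weakref.ref(a)
del a, b
gc.collect()          # ...but the cyclic garbage collector can.

# With a weak back-reference there is no cycle, so the objects are
# freed as soon as the last strong reference disappears -- no
# collector pass needed, and deterministic finalisation is back.
c, d = Node(), Node()
c.other = d
d.other = weakref.ref(c)
weak_probe = weakref.ref(c)
del c, d
```

After the gc.collect() call cycle_probe() returns None, and weak_probe() returns None immediately after the del, which is exactly the "don't rely on when cycles die, or avoid them with weakref" point made in the comments.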
http://blog.devork.be/2010/05/weakref-and-circular-imports-should-i.html
Library tutorials & articles A Console IRC Bot - Introduction - Establishing the Connection - Responding to Commands Introduction I'm planning to write some articles based on questions I get from fellow students. The first one is how to get on IRC with C#? . So, here it is, the first article of (hopefully) many. Let's start by telling what IRC is. This is best done by reading the Internet Relay Chat Protocol RFC. You can find anything you want about the IRC protocol in there. Getting on IRC is as simple as: - Establishing a connection. - Logging in. - Maintaining a connection and reacting to commands. As this is an article on how to establish an IRC connection and work with the commands, I'm not going to spent any time on UI. Therefore this will be a simple Console Application ( cIRC ). We'll make a seperate class for the IRC functions so we could re-use it later when we want to add a UI. using System; using System.Net; using System.Net.Sockets; using System.IO; namespace System.Net { public class IRC { } /* IRC */ } /* System.Net */ have the same error as this guy, i know this was LONG ago but i cannot figure out how to get past this ! can someone please contact me AIM : skatecrashrepeat Alternative Email: Punk123@myway.com PLEASE I NEED THIS DESPERATELY That is not a C# error.. It's actually an IRC error, which means you connected to the server. You need to do a few commands so that you can login/etc. Example: USER Veeresh Veeresh :Veeresh Veeresh\r\n NICK Veeresh\r\n JOIN #quakenet\r\n PRIVMSG #quakenet :Hey, what's up\r\n i tried to run this application, i got following error, plz help me out. NOTICE AUTH :* Looking up your hostname NOTICE AUTH : Checking Ident NOTICE AUTH :** Couldn't look up your hostname . ERROR :Closing Link: by blueyonder2.uk.quakenet.org (Registration Timeout) Cannot read from a closed TextReader. please put it in vb.net form :] or any vb Welcome to Developer Fusion, David! Feel free to ask me anything on my blog if you want more info. 
This thread is for discussions of A Console IRC Bot.
http://www.developerfusion.com/article/4581/a-console-irc-bot/
Release Notes

Version 0.52.0 (30 November, 2020)

This release focuses on performance improvements, but also adds some new features and contains numerous bug fixes and stability improvements.

Highlights of core performance improvements include:

- Intel kindly sponsored research and development into producing a new reference count pruning pass. This pass operates at the LLVM level and can prune a number of common reference counting patterns. This will improve performance for two primary reasons:
  - There will be less pressure on the atomic locks used to do the reference counting.
  - Removal of reference counting operations permits more inlining and the optimisation passes can in general do more with what is present. (Siu Kwan Lam)
- Intel also sponsored work to improve the performance of the numba.typed.List container, particularly in the case of __getitem__ and iteration (Stuart Archibald).
- Superword-level parallelism vectorization is now switched on and the optimisation pipeline has been lightly analysed and tuned so as to be able to vectorize more and more often (Stuart Archibald).

Highlights of core feature changes include:

- The inspect_cfg method on the JIT dispatcher object has been significantly enhanced and now includes highlighted output and interleaved line markers and Python source (Stuart Archibald).
- The BSD operating system is now unofficially supported (Stuart Archibald).
- Numerous features/functionality improvements to NumPy support, including support for:
  - np.asfarray (Guilherme Leobas)
  - "subtyping" in record arrays (Lucio Fernandez-Arjona)
  - np.split and np.array_split (Isaac Virshup)
  - operator.contains with ndarray (@mugoh)
  - np.asarray_chkfinite (Rishabh Varshney)
  - NumPy 1.19 (Stuart Archibald)
  - the ndarray allocators, empty, ones and zeros, accepting a dtype specified as a string literal (Stuart Archibald)
- Booleans are now supported as literal types (Alexey Kozlov).
- On the CUDA target:
  - CUDA 9.0 is now the minimum supported version (Graham Markall).
  - Support for Unified Memory has been added (Max Katz).
  - Kernel launch overhead is reduced (Graham Markall).
  - Cudasim support for mapped array, memcopies and memset has been added (Mike Williams).
  - Access has been wired in to all libdevice functions (Graham Markall).
  - Additional CUDA atomic operations have been added (Michael Collison).
  - Additional math library functions (frexp, ldexp, isfinite) (Zhihao Yuan).
  - Support for power on complex numbers (Graham Markall).

Deprecations to note:

There are no new deprecations. However, note that "compatibility" mode, which was added some 40 releases ago to help transition from 0.11 to 0.12+, has been removed! Also, the shim to permit the import of jitclass from Numba's top level namespace has now been removed as per the deprecation schedule.

General Enhancements:

- PR #5418: Add np.asfarray impl (Guilherme Leobas)
- PR #5560: Record subtyping (Lucio Fernandez-Arjona)
- PR #5609: Jitclass Infer Spec from Type Annotations (Ethan Pronovost)
- PR #5699: Implement np.split and np.array_split (Isaac Virshup)
- PR #6015: Adding BooleanLiteral type (Alexey Kozlov)
- PR #6027: Support operators inlining in InlineOverloads (Alexey Kozlov)
- PR #6038: Closes #6037, fixing FreeBSD compilation (László Károlyi)
- PR #6086: Add more accessible version information (Stuart Archibald)
- PR #6157: Add pipeline_class argument to @cfunc as supported by @jit. (Arthur Peters)
- PR #6262: Support dtype from str literal. (Stuart Archibald)
- PR #6271: Support ndarray contains (@mugoh)
- PR #6295: Enhance inspect_cfg (Stuart Archibald)
- PR #6304: Support NumPy 1.19 (Stuart Archibald)
- PR #6309: Add suitable file search path for BSDs. (Stuart Archibald)
- PR #6341: Re roll 6279 (Rishabh Varshney and Valentin Haenel)

Performance Enhancements:

- PR #6145: Patch to fingerprint namedtuples. (Stuart Archibald)
- PR #6202: Speed up str(int) (Stuart Archibald)
- PR #6261: Add np.ndarray.ptp() support. (Stuart Archibald)
- PR #6266: Use custom LLVM refcount pruning pass (Siu Kwan Lam)
- PR #6275: Switch on SLP vectorize. (Stuart Archibald)
- PR #6278: Improve typed list performance. (Stuart Archibald)
- PR #6335: Split optimisation passes. (Stuart Archibald)
- PR #6455: Fix refprune on obfuscated refs and stabilize optimisation WRT wrappers. (Stuart Archibald)

Fixes:

- PR #5639: Make UnicodeType inherit from Hashable (Stuart Archibald)
- PR #6006: Resolves incorrectly hoisted list in parfor. (Todd A. Anderson)
- PR #6126: fix version_info if version can not be determined (Valentin Haenel)
- PR #6137: Remove references to Python 2's long (Eric Wieser)
- PR #6139: Use direct syntax instead of the add_metaclass decorator (Eric Wieser)
- PR #6140: Replace calls to utils.iteritems(d) with d.items() (Eric Wieser)
- PR #6141: Fix #6130 objmode cache segfault (Siu Kwan Lam)
- PR #6156: Remove callers of reraise in favor of using with_traceback directly (Eric Wieser)
- PR #6162: Move charseq support out of init (Stuart Archibald)
- PR #6165: #5425 continued (Amos Bird and Stuart Archibald)
- PR #6166: Remove Python 2 compatibility from numba.core.utils (Eric Wieser)
- PR #6185: Better error message on NotDefinedError (Luiz Almeida)
- PR #6194: Remove recursion from traverse_types (Radu Popovici)
- PR #6200: Workaround #5973 (Stuart Archibald)
- PR #6203: Make find_callname only lookup functions that are likely part of NumPy. (Stuart Archibald)
- PR #6204: Fix unicode kind selection for getitem. (Stuart Archibald)
- PR #6206: Build all extension modules with -g -Wall -Werror on Linux x86, provide -O0 flag option (Graham Markall)
- PR #6212: Fix for objmode recompilation issue (Alexey Kozlov)
- PR #6213: Fix #6177. Remove AOT dependency on the Numba package (Siu Kwan Lam)
- PR #6224: Add support for tuple concatenation to array analysis. (#5396 continued) (Todd A. Anderson)
- PR #6231: Remove compatibility mode (Graham Markall)
- PR #6254: Fix win-32 hashing bug (from Stuart Archibald) (Ray Donnelly)
- PR #6265: Fix #6260 (Stuart Archibald)
- PR #6267: speed up a couple of really slow unittests (Stuart Archibald)
- PR #6281: Remove numba.jitclass shim as per deprecation schedule. (Stuart Archibald)
- PR #6294: Make return type propagate to all return variables (Andreas Sodeur)
- PR #6300: Un-skip tests that were skipped because of #4026. (Owen Anderson)
- PR #6307: Remove restrictions on SVML version due to bug in LLVM SVML CC (Stuart Archibald)
- PR #6316: Make IR inliner tests not self mutating. (Stuart Archibald)
- PR #6318: PR #5892 continued (Todd A. Anderson, via Stuart Archibald)
- PR #6319: Permit switching off boundschecking when debug is on. (Stuart Archibald)
- PR #6324: PR 6208 continued (Ivan Butygin and Stuart Archibald)
- PR #6337: Implements key on types.TypeRef (Andreas Sodeur)
- PR #6354: Bump llvmlite to 0.35 series. (Stuart Archibald)
- PR #6357: Fix enumerate invalid decref (Siu Kwan Lam)
- PR #6359: Fixes typed list indexing on 32bit (Stuart Archibald)
- PR #6378: Fix incorrect CPU override in vectorization test. (Stuart Archibald)
- PR #6379: Use O0 to enable inline and not affect loop-vectorization by later O3… (Siu Kwan Lam)
- PR #6384: Fix failing tests to match on platform invariant int spelling. (Stuart Archibald)
- PR #6390: Updates inspect_cfg (Stuart Archibald)
- PR #6396: Remove hard dependency on tbb package. (Stuart Archibald)
- PR #6408: Don't do array analysis for tuples that contain arrays. (Todd A. Anderson)
- PR #6441: Fix ASCII flag in Unicode slicing (0.52.0rc2 regression) (Ehsan Totoni)
- PR #6442: Fix array analysis regression in 0.52 RC2 for tuple of 1D arrays (Ehsan Totoni)
- PR #6446: Fix #6444: pruner issues with reference stealing functions (Siu Kwan Lam)
- PR #6450: Fix asfarray kwarg default handling. (Stuart Archibald)
- PR #6486: fix abstract base class import (Valentin Haenel)
- PR #6487: Restrict maximum version of python (Siu Kwan Lam)
- PR #6527: setup.py: fix py version guard (Chris Barnes)

CUDA Enhancements/Fixes:

- PR #5465: Remove macro expansion and replace uses with FE typing + BE lowering (Graham Markall)
- PR #5741: CUDA: Add two-argument implementation of round() (Graham Markall)
- PR #5900: Enable CUDA Unified Memory (Max Katz)
- PR #6042: CUDA: Lower launch overhead by launching kernel directly (Graham Markall)
- PR #6064: Lower math.frexp and math.ldexp in numba.cuda (Zhihao Yuan)
- PR #6066: Lower math.isfinite in numba.cuda (Zhihao Yuan)
- PR #6092: CUDA: Add mapped_array_like and pinned_array_like (Graham Markall)
- PR #6127: Fix race in reduction kernels on Volta, require CUDA 9, add syncwarp with default mask (Graham Markall)
- PR #6129: Extend Cudasim to support most of the memory functionality. (Mike Williams)
- PR #6150: CUDA: Turn on flake8 for cudadrv and fix errors (Graham Markall)
- PR #6152: CUDA: Provide wrappers for all libdevice functions, and fix typing of math function (#4618) (Graham Markall)
- PR #6227: Raise exception when no supported architectures are found (Jacob Tomlinson)
- PR #6244: CUDA Docs: Make workflow using simulator more explicit (Graham Markall)
- PR #6248: Add support for CUDA atomic subtract operations (Michael Collison)
- PR #6289: Refactor atomic test cases to reduce code duplication (Michael Collison)
- PR #6290: CUDA: Add support for complex power (Graham Markall)
- PR #6296: Fix flake8 violations in numba.cuda module (Graham Markall)
- PR #6297: Fix flake8 violations in numba.cuda.tests.cudapy module (Graham Markall)
- PR #6298: Fix flake8 violations in numba.cuda.tests.cudadrv (Graham Markall)
- PR #6299: Fix flake8 violations in numba.cuda.simulator (Graham Markall)
- PR #6306: Fix flake8 in cuda atomic test from merge. (Stuart Archibald)
- PR #6325: Refactor code for atomic operations (Michael Collison)
- PR #6329: Flake8 fix for a CUDA test (Stuart Archibald)
- PR #6331: Explicitly state that NUMBA_ENABLE_CUDASIM needs to be set before import (Graham Markall)
- PR #6340: CUDA: Fix #6339, performance regression launching specialized kernels (Graham Markall)
- PR #6380: Only test managed allocations on Linux (Graham Markall)

Documentation Updates:

- PR #6090: doc: Add doc on direct creation of Numba typed-list (@rht)
- PR #6110: Update CONTRIBUTING.md (Stuart Archibald)
- PR #6128: CUDA Docs: Restore Dispatcher.forall() docs (Graham Markall)
- PR #6277: fix: cross2d wrong doc. reference (issue #6276) (@jeertmans)
- PR #6282: Remove docs on Python 2(.7) EOL. (Stuart Archibald)
- PR #6283: Add note on how public CI is impl and what users can do to help. (Stuart Archibald)
- PR #6292: Document support for structured array attribute access (Graham Markall)
- PR #6310: Declare unofficial *BSD support (Stuart Archibald)
- PR #6342: Fix docs on literally usage. (Stuart Archibald)
- PR #6348: doc: fix typo in jitclass.rst ("initilising" -> "initialising") (@muxator)
- PR #6362: Move llvmlite support in README to 0.35 (Stuart Archibald)
- PR #6363: Note that reference counted types are not permitted in set(). (Stuart Archibald)
- PR #6364: Move deprecation schedules for 0.52 (Stuart Archibald)

CI/Infrastructure Updates:

- PR #6252: Show channel URLs (Siu Kwan Lam)
- PR #6338: Direct user questions to Discourse instead of the Google Group. (Stan Seibert)
- PR #6474: Add skip on PPC64LE for tests causing SIGABRT in LLVM. (Stuart Archibald)

Authors:

- Alexey Kozlov
- Amos Bird
- Andreas Sodeur
- Arthur Peters
- Chris Barnes
- Ehsan Totoni (core dev)
- Eric Wieser
- Ethan Pronovost
- Graham Markall
- Guilherme Leobas
- Isaac Virshup
- Ivan Butygin
- Jacob Tomlinson
- Luiz Almeida
- László Károlyi
- Lucio Fernandez-Arjona
- Max Katz
- Michael Collison
- Mike Williams
- Owen Anderson
- Radu Popovici
- Ray Donnelly
- Rishabh Varshney
- Siu Kwan Lam (core dev)
- Stan Seibert (core dev)
- Stuart Archibald (core dev)
- Todd A. Anderson (core dev)
- Valentin Haenel (core dev)
- Zhihao Yuan
- @jeertmans
- @mugoh
- @muxator
- @rht

Version 0.51.2 (September 2, 2020)

This is a bugfix release for 0.51.1. It fixes a critical performance bug in the CFG back edge computation algorithm that leads to exponential time complexity arising in compilation for use cases with certain pathological properties.

- PR #6195: PR 6187 Continue. Don't visit already checked successors

Authors:

- Graham Markall
- Siu Kwan Lam (core dev)

Version 0.51.1 (August 26, 2020)

This is a bugfix release for 0.51.0; it fixes a critical bug in caching, another critical bug in the CUDA target initialisation sequence, and also fixes some compile time performance regressions:

- PR #6141: Fix #6130 objmode cache segfault
- PR #6146: Fix compilation slowdown due to controlflow analysis
- PR #6147: CUDA: Don't make a runtime call on import
- PR #6153: Fix for #6151. Make UnicodeCharSeq into str for comparison.
- PR #6168: Fix Issue #6167: Failure in test_cuda_submodules

Authors:

- Graham Markall
- Siu Kwan Lam (core dev)
- Stuart Archibald (core dev)

Version 0.51.0 (August 12, 2020)

This release continues to add new features to Numba and also contains a significant number of bug fixes and stability improvements.

Highlights of core feature changes include:

- The compilation chain is now based on LLVM 10 (Valentin Haenel).
- Numba has internally switched to prefer non-literal types over literal ones so as to reduce function over-specialisation, this with a view to speeding up compile times (Siu Kwan Lam).
- On the CUDA target: Support for CUDA Toolkit 11, Ampere, and Compute Capability 8.0; Printing of SASS code for kernels; Callbacks to Python functions can be inserted into CUDA streams, and streams are async awaitable; Atomic nanmin and nanmax functions are added; Fixes for various miscompilations and segfaults. (mostly Graham Markall; callbacks on streams by Peter Würtz)

Intel also kindly sponsored research and development that led to some exciting new features:

- Support for heterogeneous immutable lists and heterogeneous immutable string key dictionaries. Also optional initial/construction value capturing for all lists and dictionaries containing literal values (Stuart Archibald).
- A new pass-by-reference mutable structure extension type StructRef (Siu Kwan Lam).
- Object mode blocks are now cacheable, with the side effect of numerous bug fixes and performance improvements in caching. This also permits caching of functions defined in closures (Siu Kwan Lam).

Deprecations to note:

To align with other targets, the argtypes and restypes kwargs to @cuda.jit are now deprecated; the bind kwarg is also deprecated. Further, the target kwarg to the numba.jit decorator family is deprecated.

General Enhancements:

- PR #5463: Add str(int) impl
- PR #5526: Impl. np.asarray(literal)
- PR #5619: Add support for multi-output ufuncs
- PR #5711: Division with timedelta input
- PR #5763: Support minlength argument to np.bincount
- PR #5779: Return zero array from np.dot when the arguments are empty.
- PR #5796: Add implementation for np.positive
- PR #5849: Setitem for records when index is StringLiteral, including literal unroll
- PR #5856: Add support for conversion of inplace_binop to parfor.
- PR #5893: Allocate 1D iteration space one at a time for more even distribution.
- PR #5922: Reduce objmode and unpickling overhead
- PR #5944: re-enable OpenMP in wheels
- PR #5946: Implement literal dictionaries and lists.
- PR #5956: Update numba_sysinfo.py
- PR #5978: Add structref as a mutable struct that is pass-by-ref
- PR #5980: Deprecate target kwarg for numba.jit.
- PR #6058: Add prefer_literal option to overload API

Fixes:

- PR #5674: Fix #3955. Allow with objmode to be cached
- PR #5724: Initialize process lock lazily to prevent multiprocessing issue
- PR #5783: Make np.divide and np.remainder code more similar
- PR #5808: Fix 5665 Block jit(nopython=True, forceobj=True) and suppress njit(forceobj=True)
- PR #5834: Fix the is operator on Ellipsis
- PR #5838: Ensure Dispatcher.__eq__ always returns a bool
- PR #5841: cleanup: Use PythonAPI.bool_from_bool in more places
- PR #5862: Do not leak loop iteration variables into the numba.np.npyimpl namespace
- PR #5869: Update repomap
- PR #5879: Fix erroneous input mutation in linalg routines
- PR #5882: Type check function in jit decorator
- PR #5925: Use np.inf and -np.inf for max and min float values respectively.
- PR #5935: Fix default arguments with multiprocessing
- PR #5952: Fix “Internal error … local variable ‘errstr’ referenced before assignment during BoundFunction(…)”
- PR #5962: Fix SVML tests with LLVM 10 and AVX512
- PR #5972: fix flake8 for numba/runtests.py
- PR #5995: Update setup.py with new llvmlite versions
- PR #5996: Set lower bound for llvmlite to 0.33
- PR #6004: Fix problem in branch pruning with LiteralStrKeyDict
- PR #6017: Fixing up numba_do_raise
- PR #6028: Fix #6023
- PR #6031: Continue 5821
- PR #6035: Fix overspecialize of literal
- PR #6046: Fixes statement reordering bug in maximize fusion step.
- PR #6056: Fix issue on invalid inlining of non-empty build_list by inline_arraycall
- PR #6057: fix aarch64/python_3.8 failure on master
- PR #6070: Fix overspecialized containers
- PR #6071: Remove f-strings in setup.py
- PR #6072: Fix for #6005
- PR #6073: Fixes invalid C prototype in helper function.
- PR #6078: Duplicate NumPy’s PyArray_DescrCheck macro
- PR #6081: Fix issue with cross drive use and relpath.
- PR #6083: Fix bug in initial value unify.
- PR #6087: remove invalid sanity check from randrange tests
- PR #6089: Fix invalid reference to TypingError
- PR #6097: Add function code and closure bytes into cache key
- PR #6099: Restrict upper limit of TBB version due to ABI changes.
- PR #6101: Restrict lower limit of icc_rt version due to assumed SVML bug.
- PR #6107: Fix and test #6095
- PR #6109: Fixes an issue reported in #6094
- PR #6111: Decouple LiteralList and LiteralStrKeyDict from tuple
- PR #6116: Fix #6102. Problem with non-unique label.

CUDA Enhancements/Fixes:

- PR #5359: Remove special-casing of 0d arrays
- PR #5709: CUDA: Refactoring of cuda.jit and kernel / dispatcher abstractions
- PR #5732: CUDA Docs: document forall method of kernels
- PR #5745: CUDA stream callbacks and async awaitable streams
- PR #5761: Add implementation for int types for isnan and isinf for CUDA
- PR #5819: Add support for CUDA 11 and Ampere / CC 8.0
- PR #5826: CUDA: Add function to get SASS for kernels
- PR #5846: CUDA: Allow disabling NVVM optimizations, and fix debug issues
- PR #5851: CUDA EMM enhancements - add default get_ipc_handle implementation, skip a test conditionally
- PR #5852: CUDA: Fix cuda.test()
- PR #5857: CUDA docs: Add notes on resetting the EMM plugin
- PR #5859: CUDA: Fix reduce docs and style improvements
- PR #6016: Fixes change of list spelling in a cuda test.
- PR #6020: CUDA: Fix #5820, adding atomic nanmin / nanmax
- PR #6030: CUDA: Don’t optimize IR before sending it to NVVM
- PR #6052: Fix dtype for atomic_add_double testsuite
- PR #6080: CUDA: Prevent auto-upgrade of atomic intrinsics
- PR #6123: Fix #6121

Documentation Updates:

- PR #5782: Host docs on Read the Docs
- PR #5830: doc: Mention that caching uses pickle
- PR #5963: Fix broken link to numpy ufunc signature docs
- PR #5975: restructure communication section
- PR #5981: Document bounds-checking behavior in python deviations page
- PR #5993: Docs for structref
- PR #6008: Small fix so bullet points are rendered by sphinx
- PR #6013: emphasize cuda kernel functions are asynchronous
- PR #6036: Update deprecation doc from numba.errors to numba.core.errors
- PR #6062: Change references to numba.pydata.org to https

CI updates:

- PR #5850: Updates the “New Issue” behaviour to better redirect users.
- PR #5940: Add discourse badge
- PR #5960: Setting mypy on CI

Enhancements from user contributed PRs (with thanks!):

- Aisha Tammy added the ability to switch off TBB support at compile time in #5821 (continued in #6031 by Stuart Archibald).
- Alexander Stiebing fixed a reference before assignment bug in #5952.
- Alexey Kozlov fixed a bug in tuple getitem for literals in #6028.
- Andrew Eckart updated the repomap in #5869, added support for Read the Docs in #5782, fixed a bug in the np.dot implementation to correctly handle empty arrays in #5779 and added support for minlength to np.bincount in #5763.
- @bitsisbits updated numba_sysinfo.py to handle HSA agents correctly in #5956.
- Daichi Suzuo fixed a bug in the threading backend initialisation sequence such that it is now correctly a lazy lock in #5724.
- Eric Wieser contributed a number of patches, particularly in enhancing and improving the ufunc capabilities:
  - #5359: Remove special-casing of 0d arrays
  - #5834: Fix the is operator on Ellipsis
  - #5619: Add support for multi-output ufuncs
  - #5841: cleanup: Use PythonAPI.bool_from_bool in more places
  - #5862: Do not leak loop iteration variables into the numba.np.npyimpl namespace
  - #5838: Ensure Dispatcher.__eq__ always returns a bool
  - #5830: doc: Mention that caching uses pickle
  - #5783: Make np.divide and np.remainder code more similar
- Ethan Pronovost added a guard to prevent the common mistake of applying a jit decorator to the same function twice in #5881.
- Graham Markall contributed many patches to the CUDA target, as follows:
  - #6052: Fix dtype for atomic_add_double tests
  - #6030: CUDA: Don’t optimize IR before sending it to NVVM
  - #5846: CUDA: Allow disabling NVVM optimizations, and fix debug issues
  - #5826: CUDA: Add function to get SASS for kernels
  - #5851: CUDA EMM enhancements - add default get_ipc_handle implementation, skip a test conditionally
  - #5709: CUDA: Refactoring of cuda.jit and kernel / dispatcher abstractions
  - #5819: Add support for CUDA 11 and Ampere / CC 8.0
  - #6020: CUDA: Fix #5820, adding atomic nanmin / nanmax
  - #5857: CUDA docs: Add notes on resetting the EMM plugin
  - #5859: CUDA: Fix reduce docs and style improvements
  - #5852: CUDA: Fix cuda.test()
  - #5732: CUDA Docs: document forall method of kernels
- Guilherme Leobas added support for str(int) in #5463 and np.asarray(literal value) in #5526.
- Hameer Abbasi deprecated the target kwarg for numba.jit in #5980.
- Hannes Pahl added a badge to the Numba github page linking to the new discourse forum in #5940 and also fixed a bug that permitted illegal combinations of flags to be passed into @jit in #5808.
- Kayran Schmidt emphasized that CUDA kernel functions are asynchronous in the documentation in #6013.
- Leonardo Uieda fixed a broken link to the NumPy ufunc signature docs in #5963.
- Lucio Fernandez-Arjona added mypy to CI and started adding type annotations to the code base in #5960, also fixed a (de)serialization problem on the dispatcher in #5935, improved the undefined variable error message in #5876, added support for division with timedelta input in #5711 and implemented setitem for records when the index is a StringLiteral in #5849.
- Ludovic Tiako documented Numba’s bounds-checking behavior in the python deviations page in #5981.
- Matt Roeschke changed all http references to https in #6062.
- @niteya-shah implemented isnan and isinf for integer types on the CUDA target in #5761 and implemented np.positive in #5796.
- Peter Würtz added CUDA stream callbacks and async awaitable streams in #5745.
- @rht fixed an invalid import referred to in the deprecation documentation in #6036.
- Sergey Pokhodenko updated the SVML tests for LLVM 10 in #5962.
- Shyam Saladi fixed a Sphinx rendering bug in #6008.

Authors:

- Aisha Tammy
- Alexander Stiebing
- Alexey Kozlov
- Andrew Eckart
- @bitsisbits
- Daichi Suzuo
- Eric Wieser
- Ethan Pronovost
- Graham Markall
- Guilherme Leobas
- Hameer Abbasi
- Hannes Pahl
- Kayran Schmidt
- Kozlov, Alexey
- Leonardo Uieda
- Lucio Fernandez-Arjona
- Ludovic Tiako
- Matt Roeschke
- @niteya-shah
- Peter Würtz
- Sergey Pokhodenko
- Shyam Saladi
- @rht
- Siu Kwan Lam (core dev)
- Stuart Archibald (core dev)
- Todd A. Anderson (core dev)
- Valentin Haenel (core dev)

Version 0.50.1 (Jun 24, 2020)¶

This is a bugfix release for 0.50.0. It fixes a critical bug in error reporting and a number of other smaller issues:

- PR #5861: Added except for possible Windows get_terminal_size exception
- PR #5876: Improve undefined variable error message
- PR #5884: Update the deprecation notices for 0.50.1
- PR #5889: Fixes literally not forcing re-dispatch for inline=’always’
- PR #5912: Fix bad attr access on certain typing templates breaking exceptions.
- PR #5918: Fix cuda test due to #5876

Authors:

- @pepping_dore
- Lucio Fernandez-Arjona
- Siu Kwan Lam (core dev)
- Stuart Archibald (core dev)

Version 0.50.0 (Jun 10, 2020)¶

This is a more usual release in comparison to the others that have been made in the last six months. It comprises the result of a number of maintenance tasks along with some new features and a lot of bug fixes. Highlights of core feature changes include:

- The compilation chain is now based on LLVM 9.
- The error handling and reporting system has been improved to reduce the size of error messages, and also improve quality and specificity.
- The CUDA target has more stream constructors available and a new function for compiling to PTX without linking and loading the code to a device. Further, the macro-based system for describing CUDA threads and blocks has been replaced with standard typing and lowering implementations, for improved debugging and extensibility.

IMPORTANT: The backwards compatibility shim, that was present in 0.49.x to accommodate the refactoring of Numba’s internals, has been removed. If a module is imported from a moved location an ImportError will occur.

General Enhancements:

- PR #5060: Enables np.sum for timedelta64
- PR #5225: Adjust interpreter to make conditionals predicates via bool() call.
- PR #5506: Jitclass static methods
- PR #5580: Revert shim
- PR #5591: Fix #5525 Add figure for total memory to numba -s output.
- PR #5616: Simplify the ufunc kernel registration
- PR #5617: Remove /examples from the Numba repo.
- PR #5673: Fix inliners to run all passes on IR and clean up correctly.
- PR #5700: Make it easier to understand type inference: add SSA dump, use for DEBUG_TYPEINFER
- PR #5702: Fixes for LLVM 9
- PR #5722: Improve error messages.
- PR #5758: Support NumPy 1.18

Fixes:

- PR #5390: add error handling for lookup_module
- PR #5464: Jitclass drops annotations to avoid error
- PR #5478: Fix #5471. Issue with omitted type not recognized as literal value.
- PR #5517: Fix numba.typed.List extend for singleton and empty iterable
- PR #5549: Check type getitem
- PR #5568: Add skip to entrypoint test on windows
- PR #5581: Revert #5568
- PR #5602: Fix segfault caused by pop from numba.typed.List
- PR #5645: Fix SSA redundant CFG computation
- PR #5686: Fix issue with SSA not minimal
- PR #5689: Fix bug in unified_function_type (issue 5685)
- PR #5694: Skip part of slice array analysis if any part is not analyzable.
- PR #5697: Fix usedef issue with parfor loopnest variables.
- PR #5705: A fix for cases where SSA looks like a reduction variable.
- PR #5714: Fix bug in test
- PR #5717: Initialise Numba extensions ahead of any compilation starting.
- PR #5721: Fix array iterator layout.
- PR #5738: Unbreak master on buildfarm
- PR #5757: Force LLVM to use ZMM registers for vectorization.
- PR #5764: fix flake8 errors
- PR #5768: Interval example: fix import
- PR #5781: Moving record array examples to a test module
- PR #5791: Fix up no cgroups problem
- PR #5795: Restore refct removal pass and make it strict
- PR #5807: Skip failing test on POWER8 due to PPC CTR Loop problem.
- PR #5812: Fix side issue from #5792, @overload inliner cached IR being mutated.
- PR #5815: Pin llvmlite to 0.33
- PR #5833: Fixes the source location appearing incorrectly in error messages.

CUDA Enhancements/Fixes:

- PR #5347: CUDA: Provide more stream constructors
- PR #5388: CUDA: Fix OOB write in test_round{f4,f8}
- PR #5437: Fix #5429: Exception using .get_ipc_handle(...) on array from as_cuda_array(...)
- PR #5481: CUDA: Replace macros with typing and lowering implementations
- PR #5556: CUDA: Make atomic semantics match Python / NumPy, and fix #5458
- PR #5558: CUDA: Only release primary ctx if retained
- PR #5561: CUDA: Add function for compiling to PTX (+ other small fixes)
- PR #5573: CUDA: Skip tests under cuda-memcheck that hang it
- PR #5578: Implement math.modf for CUDA target
- PR #5704: CUDA Eager compilation: Fix max_registers kwarg
- PR #5718: CUDA lib path tests: unset CUDA_PATH when CUDA_HOME unset
- PR #5800: Fix LLVM 9 IR for NVVM
- PR #5803: CUDA Update expected error messages to fix #5797

Documentation Updates:

- PR #5546: DOC: Add documentation about cost model to inlining notes.
- PR #5653: Update doc with respect to try-finally case

Enhancements from user contributed PRs (with thanks!):

- Elias Kuthe fixed an issue with imports in the Interval example in #5768
- Eric Wieser simplified the ufunc kernel registration mechanism in #5616
- Ethan Pronovost patched a problem with __annotations__ in jitclass in #5464, fixed a bug that led to infinite loops in Numba’s Type.__getitem__ in #5549, fixed a bug in np.arange testing in #5714 and added support for @staticmethod to jitclass in #5506.
- Gabriele Gemmi implemented math.modf for the CUDA target in #5578
- Graham Markall contributed many patches, largely to the CUDA target, as follows:
  - #5347: CUDA: Provide more stream constructors
  - #5388: CUDA: Fix OOB write in test_round{f4,f8}
  - #5437: Fix #5429: Exception using .get_ipc_handle(...) on array from as_cuda_array(...)
  - #5481: CUDA: Replace macros with typing and lowering implementations
  - #5556: CUDA: Make atomic semantics match Python / NumPy, and fix #5458
  - #5558: CUDA: Only release primary ctx if retained
  - #5561: CUDA: Add function for compiling to PTX (+ other small fixes)
  - #5573: CUDA: Skip tests under cuda-memcheck that hang it
  - #5648: Unset the memory manager after EMM Plugin tests
  - #5700: Make it easier to understand type inference: add SSA dump, use for DEBUG_TYPEINFER
  - #5704: CUDA Eager compilation: Fix max_registers kwarg
  - #5718: CUDA lib path tests: unset CUDA_PATH when CUDA_HOME unset
  - #5800: Fix LLVM 9 IR for NVVM
  - #5803: CUDA Update expected error messages to fix #5797
- Guilherme Leobas updated the documentation surrounding try-finally in #5653
- Hameer Abbasi added documentation about the cost model to the notes on inlining in #5546
- Jacques Gaudin rewrote numba -s to produce and consume a dictionary of output about the current system in #5591
- James Bourbeau updated min/argmin and max/argmax to handle non-leading nans (via #5758)
- Lucio Fernandez-Arjona moved the record array examples to a test module in #5781 and added np.timedelta64 handling to np.sum in #5060
- Pearu Peterson fixed a bug in unified_function_type in #5689
- Sergey Pokhodenko fixed an issue impacting LLVM 10 regarding vectorization widths on Intel SkyLake processors in #5757
- Shan Sikdar added error handling for lookup_module in #5390
- @toddrme2178 added CI testing for NumPy 1.18 (via #5758)

Authors:

- Elias Kuthe
- Eric Wieser
- Ethan Pronovost
- Gabriele Gemmi
- Graham Markall
- Guilherme Leobas
- Hameer Abbasi
- Jacques Gaudin
- James Bourbeau
- Lucio Fernandez-Arjona
- Pearu Peterson
- Sergey Pokhodenko
- Shan Sikdar
- Siu Kwan Lam (core dev)
- Stuart Archibald (core dev)
- Todd A. Anderson (core dev)
- @toddrme2178
- Valentin Haenel (core dev)

Version 0.49.1 (May 7, 2020)¶

This is a bugfix release for 0.49.0. It fixes some residual issues with SSA form, a critical bug in the branch pruning logic and a number of other smaller issues:

- PR #5587: Fixed #5586 Threading Implementation Typos
- PR #5592: Fixes #5583 Remove references to cffi_support from docs and examples
- PR #5614: Fix invalid type in resolve for comparison expr in parfors.
- PR #5624: Fix erroneous rewrite of predicate to bit const on prune.
- PR #5627: Fixes #5623, SSA local def scan based on invalid equality assumption.
- PR #5629: Fixes naming error in array_exprs
- PR #5630: Fix #5570. Incorrect race variable detection due to SSA naming.
- PR #5638: Make literal_unroll function work as a freevar.
- PR #5648: Unset the memory manager after EMM Plugin tests
- PR #5651: Fix some SSA issues
- PR #5652: Pin to sphinx=2.4.4 to avoid problem with C declaration
- PR #5658: Fix unifying undefined first class function types issue
- PR #5669: Update example in 5m guide WRT SSA type stability.
- PR #5676: Restore numba.types as public API

Authors:

- Graham Markall
- Juan Manuel Cruz Martinez
- Pearu Peterson
- Sean Law
- Stuart Archibald (core dev)
- Siu Kwan Lam (core dev)

Version 0.49.0 (Apr 16, 2020)¶

This release is very large in terms of code changes. Large scale removal of unsupported Python and NumPy versions has taken place along with a significant amount of refactoring to simplify the Numba code base to make it easier for contributors. Numba’s intermediate representation has also undergone some important changes to solve a number of long standing issues. In addition some new features have been added and a large number of bugs have been fixed!

IMPORTANT: In this release Numba’s internals have moved about a lot. A backwards compatibility “shim” is provided for this release so as to not immediately break projects using Numba’s internals.
If a module is imported from a moved location the shim will issue a deprecation warning and suggest how to update the import statement for the new location. The shim will be removed in 0.50.0!

Highlights of core feature changes include:

- Removal of all Python 2 related code and also updating the minimum supported Python version to 3.6, the minimum supported NumPy version to 1.15 and the minimum supported SciPy version to 1.0. (Stuart Archibald).
- Refactoring of the Numba code base. The code is now organised into submodules by functionality. This cleans up Numba’s top level namespace. (Stuart Archibald).
- Introduction of an ir.Del free static single assignment form for Numba’s intermediate representation (Siu Kwan Lam and Stuart Archibald).
- An OpenMP-like thread masking API has been added for use with code using the parallel CPU backends (Aaron Meurer and Stuart Archibald).
- For the CUDA target, all kernel launches now require a configuration, thus preventing accidental launches of kernels with the old default of a single thread in a single block. The hard-coded autotuner is also now removed, such tuning is deferred to CUDA API calls that provide the same functionality (Graham Markall).
- The CUDA target also gained an External Memory Management plugin interface to allow Numba to use another CUDA-aware library for all memory allocations and deallocations (Graham Markall).
- The Numba Typed List container gained support for construction from iterables (Valentin Haenel).
- Experimental support was added for first-class function types (Pearu Peterson).

Enhancements from user contributed PRs (with thanks!):

- Aaron Meurer added support for thread masking at runtime in #4615.
- Andreas Sodeur fixed a long standing bug that was preventing cProfile from working with Numba JIT compiled functions in #4476.
- Arik Funke fixed error messages in test_array_reductions (#5278), fixed an issue with test discovery (#5239), made it so the documentation would build again on windows (#5453) and fixed a nested list problem in the docs in #5489.
- Antonio Russo fixed a SyntaxWarning in #5252.
- Eric Wieser added support for inferring the types of object arrays (#5348) and iterating over 2D arrays (#5115), also fixed some compiler warnings due to missing (void) in #5222. Also helped improve the “shim” and associated warnings in #5485, #5488, #5498 and partly #5532.
- Ethan Pronovost fixed a problem with the shim erroneously warning for jitclass use in #5454 and also prevented illegal return values in jitclass __init__ in #5505.
- Gabriel Majeri added SciPy 2019 talks to the docs in #5106.
- Graham Markall changed the Numba HTML documentation theme to resolve a number of long standing issues in #5346. Also contributed were a large number of CUDA enhancements and fixes, namely:
  - #5519: CUDA: Silence the test suite - Fix #4809, remove autojit, delete prints
  - #5443: Fix #5196: Docs: assert in CUDA only enabled for debug
  - #5436: Fix #5408: test_set_registers_57 fails on Maxwell
  - #5423: Fix #5421: Add notes on printing in CUDA kernels
  - #5400: Fix #4954, and some other small CUDA testsuite fixes
  - #5328: NBEP 7: External Memory Management Plugin Interface
  - #5144: Fix #4875: Make #2655 test with debug expect to pass
  - #5323: Document lifetime semantics of CUDA Array Interface
  - #5061: Prevent kernel launch with no configuration, remove autotuner
  - #5099: Fix #5073: Slices of dynamic shared memory all alias
  - #5136: CUDA: Enable asynchronous operations on the default stream
  - #5085: Support other itemsizes with view
  - #5059: Docs: Explain how to use Memcheck with Numba, fixups in CUDA documentation
  - #4957: Add notes on overwriting gufunc inputs to docs
- Greg Jennings fixed an issue with np.random.choice not acknowledging the RNG seed correctly in #3897/#5310.
- Guilherme Leobas added support for np.isnat in #5293.
- Henry Schreiner made the llvmlite requirements more explicit in requirements.txt in #5150.
- Ivan Butygin helped fix an issue with parfors sequential lowering in #5114/#5250.
- Jacques Gaudin fixed a bug for Python >= 3.8 in numba -s in #5548.
- Jim Pivarski added some hints for debugging entry points in #5280.
- John Kirkham added numpy.dtype coercion for the dtype argument to CUDA device arrays in #5252.
- Leo Fang added a list of libraries that support __cuda_array_interface__ in #5104.
- Lucio Fernandez-Arjona added getitem for the NumPy record type when the index is a StringLiteral type in #5182 and improved the documentation rendering via additions to the TOC and removal of numbering in #5450.
- Mads R. B. Kristensen fixed an issue with __cuda_array_interface__ not requiring the context in #5189.
- Marcin Tolysz added support for nested modules in AOT compilation in #5174.
- Mike Williams fixed some issues with NumPy records and getitem in the CUDA simulator in #5343.
- Pearu Peterson added experimental support for first-class function types in #5287 (and fixes in #5459, #5473/#5429, and #5557).
- Ravi Teja Gutta added support for np.flip in #4376/#5313.
- Rohit Sanjay fixed an issue with type refinement for unicode input supplied to typed-list extend() (#5295) and fixed unicode .strip() to strip all whitespace characters in #5213.
- Vladimir Lukyanov fixed an awkward bug in typed.dict in #5361, added a fix to ensure the LLVM and assembly dumps are highlighted correctly in #5357 and implemented a Numba IR Lexer and added highlighting to Numba IR dumps in #5333.
- hdf fixed an issue with the boundscheck flag in the CUDA jit target in #5257.

General Enhancements:

- PR #4615: Allow masking threads out at runtime
- PR #4798: Add branch pruning based on raw predicates.
- PR #5115: Add support for iterating over 2D arrays
- PR #5117: Implement ord()/chr()
- PR #5122: Remove Python 2.
- PR #5127: Calling convention adaptor for boxer/unboxer to call jitcode
- PR #5151: implement None-typed typed-list
- PR #5174: Nested modules
- PR #5182: Add getitem for Record type when index is StringLiteral
- PR #5185: extract code-gen utilities from closures
- PR #5197: Refactor Numba, part I
- PR #5210: Remove more unsupported Python versions from build tooling.
- PR #5212: Adds support for viewing the CFG of the ELF disassembly.
- PR #5227: Immutable typed-list
- PR #5231: Added support for np.asarray to be used with numba.typed.List
- PR #5235: Added property dtype to numba.typed.List
- PR #5272: Refactor parfor: split up ParforPass
- PR #5281: Make IR ir.Del free until legalized.
- PR #5287: First-class function type
- PR #5293: np.isnat
- PR #5294: Create typed-list from iterable
- PR #5295: refine typed-list on unicode input to extend
- PR #5296: Refactor parfor: better exception from passes
- PR #5308: Provide numba.extending.is_jitted
- PR #5320: refactor array_analysis
- PR #5325: Let literal_unroll accept types.Named*Tuple
- PR #5330: refactor common operation in parfor lowering into a new util
- PR #5333: Add: highlight Numba IR dump
- PR #5342: Support for tuples passed to parfors.
- PR #5348: Add support for inferring the types of object arrays
- PR #5351: SSA again
- PR #5352: Add shim to accommodate refactoring.
- PR #5356: implement allocated parameter in njit
- PR #5369: Make test ordering more consistent across feature availability
- PR #5428: Wip/deprecate jitclass location
- PR #5441: Additional changes to first class function
- PR #5455: Move to llvmlite 0.32.*
- PR #5457: implement repr for untyped lists

Fixes:

- PR #4476: Another attempt at fixing frame injection in the dispatcher tracing path
- PR #4942: Prevent some parfor aliasing. Rename copied function var to prevent recursive type locking.
- PR #5092: Fix 5087
- PR #5150: More explicit llvmlite requirement in requirements.txt
- PR #5172: fix version spec for llvmlite
- PR #5176: Normalize kws going into fold_arguments.
- PR #5183: pass ‘inline’ explicitly to overload
- PR #5193: Fix CI failure due to missing files when installed
- PR #5213: Fix .strip() to strip all whitespace characters
- PR #5216: Fix namedtuple mistreated by dispatcher as simple tuple
- PR #5222: Fix compiler warnings due to missing (void)
- PR #5232: Fixes a bad import that breaks master
- PR #5239: fix test discovery for unittest
- PR #5247: Continue PR #5126
- PR #5250: Part fix/5098
- PR #5252: Trivially fix SyntaxWarning
- PR #5276: Add prange variant to has_no_side_effect.
- PR #5278: fix error messages in test_array_reductions
- PR #5310: PR #3897 continued
- PR #5313: Continues PR #4376
- PR #5318: Remove AUTHORS file reference from MANIFEST.in
- PR #5327: Add warning if FNV hashing is found as the default for CPython.
- PR #5338: Remove refcount pruning pass
- PR #5345: Disable test failing due to removed pass.
- PR #5357: Small fix to have llvm and asm highlighted properly
- PR #5361: 5081 typed.dict
- PR #5431: Add tolerance to numba extension module entrypoints.
- PR #5432: Fix code causing compiler warnings.
- PR #5445: Remove undefined variable
- PR #5454: Don’t warn for numba.experimental.jitclass
- PR #5459: Fixes issue 5448
- PR #5480: Fix for #5477, literal_unroll KeyError searching for getitems
- PR #5485: Show the offending module in “no direct replacement” error message
- PR #5488: Add missing numba.config shim
- PR #5495: Fix missing null initializer for variable after phi strip
- PR #5498: Make the shim deprecation warnings work on python 3.6 too
- PR #5505: Better error message if __init__ returns value
- PR #5527: Attempt to fix #5518
- PR #5529: PR #5473 continued
- PR #5532: Make numba.<mod> available without an import
- PR #5542: Fixes RC2 module shim bug
- PR #5548: Fix #5537 Removed reference to platform.linux_distribution
- PR #5555: Fix #5515 by reverting changes to ArrayAnalysis
- PR #5557: First-class function call cannot use keyword arguments
- PR #5569: Fix RewriteConstGetitems not registering calltype for new expr
- PR #5571: Pin down llvmlite requirement

CUDA Enhancements/Fixes:

- PR #5061: Prevent kernel launch with no configuration, remove autotuner
- PR #5085: Support other itemsizes with view
- PR #5099: Fix #5073: Slices of dynamic shared memory all alias
- PR #5104: Add a list of libraries that support __cuda_array_interface__
- PR #5136: CUDA: Enable asynchronous operations on the default stream
- PR #5144: Fix #4875: Make #2655 test with debug expect to pass
- PR #5189: __cuda_array_interface__ not requiring context
- PR #5253: Coerce dtype to numpy.dtype
- PR #5257: boundscheck fix
- PR #5319: Make user facing error string use abs path not rel.
- PR #5323: Document lifetime semantics of CUDA Array Interface
- PR #5328: NBEP 7: External Memory Management Plugin Interface
- PR #5343: Fix cuda spoof
- PR #5400: Fix #4954, and some other small CUDA testsuite fixes
- PR #5436: Fix #5408: test_set_registers_57 fails on Maxwell
- PR #5519: CUDA: Silence the test suite - Fix #4809, remove autojit, delete prints

Documentation Updates:

- PR #4957: Add notes on overwriting gufunc inputs to docs
- PR #5059: Docs: Explain how to use Memcheck with Numba, fixups in CUDA documentation
- PR #5106: Add SciPy 2019 talks to docs
- PR #5147: Update master for 0.48.0 updates
- PR #5155: Explain what inlining at Numba IR level will do
- PR #5161: Fix README.rst formatting
- PR #5207: Remove AUTHORS list
- PR #5249: fix target path for See also
- PR #5262: fix typo in inlining docs
- PR #5270: fix ‘see also’ in typeddict docs
- PR #5280: Added some hints for debugging entry points.
- PR #5297: Update docs with intro to {g,}ufuncs.
- PR #5326: Update installation docs with OpenMP requirements.
- PR #5346: Docs: use sphinx_rtd_theme
- PR #5366: Remove reference to Python 2.7 in install check output
- PR #5423: Fix #5421: Add notes on printing in CUDA kernels
- PR #5438: Update package deps for doc building.
- PR #5440: Bump deprecation notices.
- PR #5443: Fix #5196: Docs: assert in CUDA only enabled for debug
- PR #5450: Docs: remove numbers and add titles to TOC
- PR #5453: fix building docs on windows
- PR #5489: docs: fix rendering of nested bulleted list

CI updates:

- PR #5314: Update the image used in Azure CI for OSX.
- PR #5360: Remove Travis CI badge.

Authors:

- Aaron Meurer
- Andreas Sodeur
- Antonio Russo
- Arik Funke
- Eric Wieser
- Ethan Pronovost
- Gabriel Majeri
- Graham Markall
- Greg Jennings
- Guilherme Leobas
- hdf
- Henry Schreiner
- Ivan Butygin
- Jacques Gaudin
- Jim Pivarski
- John Kirkham
- Leo Fang
- Lucio Fernandez-Arjona
- Mads R. B. Kristensen
- Marcin Tolysz
- Mike Williams
- Pearu Peterson
- Ravi Teja Gutta
- Rohit Sanjay
- Siu Kwan Lam (core dev)
- Stan Seibert (core dev)
- Stuart Archibald (core dev)
- Todd A. Anderson (core dev)
- Valentin Haenel (core dev)
- Vladimir Lukyanov

Version 0.48.0 (Jan 27, 2020)¶

This release is particularly small, as its purpose was to catch anything that missed the 0.47.0 deadline (the deadline deliberately coincided with the end of support for Python 2.7). The next release will be considerably larger. The core changes in this release are dominated by the start of the clean up needed for the end of Python 2.7 support, improvements to the CUDA target and support for numerous additional unicode string methods.

Enhancements from user contributed PRs (with thanks!):

- Brian Wignall fixed more spelling typos in #4998.
- Denis Smirnov added support for string methods capitalize (#4823), casefold (#4824), swapcase (#4825), rsplit (#4834), partition (#4845) and splitlines (#4849).
- Elena Totmenina extended support for string methods startswith (#4867) and added endswith (#4868).
- Eric Wieser made type_callable return the decorated function itself in #4760
- Ethan Pronovost added support for np.argwhere in #4617
- Graham Markall contributed a large number of CUDA enhancements and fixes, namely:
  - #5068: Remove Python 3.4 backports from utils
  - #4975: Make device_array_like create contiguous arrays (Fixes #4832)
  - #5023: Don’t launch ForAll kernels with 0 elements (Fixes #5017)
  - #5016: Fix various issues in CUDA library search (Fixes #4979)
  - #5014: Enable use of records and bools for shared memory, remove ddt, add additional transpose tests
  - #4964: Fix #4628: Add more appropriate typing for CUDA device arrays
  - #5007: test_consuming_strides: Keep dev array alive
  - #4997: State that CUDA Toolkit 8.0 required in docs
- James Bourbeau added the Python 3.8 classifier to setup.py in #5027.
- John Kirkham added a clarification to the __cuda_array_interface__ documentation in #5049.
- Leo Fang fixed an indexing problem in dummyarray in #5012.
- Marcel Bargull fixed a build and test issue for Python 3.8 in #5029.
- Maria Rubtsov added support for the string methods isdecimal (#4842), isdigit (#4843), isnumeric (#4844) and replace (#4865).

General Enhancements:

- PR #4760: Make type_callable return the decorated function
- PR #5010: merge string prs

  This merge PR included the following:

  - PR #4823: Implement str.capitalize() based on CPython
  - PR #4824: Implement str.casefold() based on CPython
  - PR #4825: Implement str.swapcase() based on CPython
  - PR #4834: Implement str.rsplit() based on CPython
  - PR #4842: Implement str.isdecimal
  - PR #4843: Implement str.isdigit
  - PR #4844: Implement str.isnumeric
  - PR #4845: Implement str.partition() based on CPython
  - PR #4849: Implement str.splitlines() based on CPython
  - PR #4865: Implement str.replace
  - PR #4867: Functionality extension str.startswith() based on CPython
  - PR #4868: Add functionality for str.endswith()
- PR #5039: Disable help messages.
- PR #4617: Add coverage for np.argwhere

Fixes:

- PR #4724: Only use lives (and not aliases) to create post parfor live set.
- PR #4998: Fix more spelling typos
- PR #5024: Propagate semantic constants ahead of static rewrites.
- PR #5027: Add Python 3.8 classifier to setup.py
- PR #5046: Update setup.py and buildscripts for dependency requirements
- PR #5053: Convert from arrays to names in define() and don’t invalidate for multiple consistent defines.
- PR #5058: Permit mixed int types in wrap_index
- PR #5078: Catch the use of global typed-list in JITed functions
- PR #5092: Fix #5087, bug in bytecode analysis.
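Taken together, the string-method PRs above bring a large slice of CPython's str API into nopython mode. As a rough illustration, a function like the following can now be compiled (shown here in plain Python; under Numba one would decorate it with @numba.njit, subject to typed-container details — the function name and example data are hypothetical, not from the release notes):

```python
def parse_config(text):
    # Exercises methods covered by this release's PRs: splitlines (#4849),
    # partition (#4845), casefold (#4824) and replace (#4865).
    result = {}
    for line in text.splitlines():
        key, sep, value = line.partition("=")
        if sep:  # skip lines without an '='
            result[key.strip().casefold()] = value.replace(" ", "")
    return result
```

For example, `parse_config("Host=local host\nPORT = 8080")` yields `{'host': 'localhost', 'port': '8080'}`.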
CUDA Enhancements/Fixes:

- PR #4964: Fix #4628: Add more appropriate typing for CUDA device arrays
- PR #4975: Make device_array_like create contiguous arrays (Fixes #4832)
- PR #4997: State that CUDA Toolkit 8.0 required in docs
- PR #5007: test_consuming_strides: Keep dev array alive
- PR #5012: Fix IndexError when accessing the “-1” element of dummyarray
- PR #5014: Enable use of records and bools for shared memory, remove ddt, add additional transpose tests
- PR #5016: Fix various issues in CUDA library search (Fixes #4979)
- PR #5023: Don’t launch ForAll kernels with 0 elements (Fixes #5017)
- PR #5068: Remove Python 3.4 backports from utils

Documentation Updates:

- PR #5049: Clarify what dictionary means
- PR #5062: Update docs for updated version requirements
- PR #5090: Update deprecation notices for 0.48.0

CI updates:

- PR #5029: Install optional dependencies for Python 3.8 tests
- PR #5040: Drop Py2.7 and Py3.5 from public CI
- PR #5048: Fix CI py38

Authors:

- Brian Wignall
- Denis Smirnov
- Elena Totmenina
- Eric Wieser
- Ethan Pronovost
- Graham Markall
- James Bourbeau
- John Kirkham
- Leo Fang
- Marcel Bargull
- Maria Rubtsov
- Siu Kwan Lam (core dev)
- Stan Seibert (core dev)
- Stuart Archibald (core dev)
- Todd A. Anderson (core dev)
- Valentin Haenel (core dev)

Version 0.47.0 (Jan 2, 2020)

This release expands the capability of Numba in a number of important areas and is also significant as it is the last major point release with support for Python 2 and Python 3.5 included. The next release (0.48.0) will be for Python 3.6+ only! (This follows NumPy’s deprecation schedule as specified in NEP 29.)
Highlights of core feature changes include:

- Full support for Python 3.8 (Siu Kwan Lam)
- Opt-in bounds checking (Aaron Meurer)
- Support for map, filter and reduce (Stuart Archibald)

Intel also kindly sponsored research and development that led to some exciting new features:

- Initial support for basic try/except use (Siu Kwan Lam)
- The ability to pass functions created from closures/lambdas as arguments (Stuart Archibald)
- sorted and list.sort() now accept the key argument (Stuart Archibald and Siu Kwan Lam)
- A new compiler pass triggered through the use of the function numba.literal_unroll, which permits iteration over heterogeneous tuples and constant lists of constants. (Stuart Archibald)

Enhancements from user contributed PRs (with thanks!):

- Ankit Mahato added a reference to a new talk on Numba at PyCon India 2019 in #4862.
- Brian Wignall kindly fixed some spelling mistakes and typos in #4909.
- Denis Smirnov wrote numerous methods to considerably enhance string support, including:
  - str.rindex() in #4861
  - str.isprintable() in #4836
  - str.index() in #4860
  - start/end parameters for str.find() in #4866
  - str.isspace() in #4835
  - str.isidentifier() in #4837
  - str.rpartition() in #4841
  - str.lower() and str.islower() in #4651
- Elena Totmenina implemented str.isalnum(), str.isalpha() and str.isascii() in #4839, #4840 and #4847 respectively.
- Eric Larson fixed a bug in literal comparison in #4710.
- Ethan Pronovost updated the np.arange implementation in #4770 to allow the use of the dtype keyword argument and also added bool implementations for several types in #4715.
- Graham Markall fixed some issues with the CUDA target, namely:
  - #4931: Added physical limits for CC 7.0 / 7.5 to CUDA autotune
  - #4934: Fixed bugs in TestCudaWarpOperations
  - #4938: Improved errors / warnings for the CUDA vectorize decorator
- Guilherme Leobas fixed a typo in the urem implementation in #4667.
- Isaac Virshup contributed a number of patches that fixed bugs, added support for more NumPy functions and enhanced Python feature support. These contributions included:
  - #4729: Allow array construction with mixed type shape tuples
  - #4904: Implementing np.lcm
  - #4780: Implement np.gcd and math.gcd
  - #4779: Make slice constructor more similar to python.
  - #4707: Added support for slice.indices
  - #4578: Clarify numba ufunc supported features
- James Bourbeau fixed some issues with tooling: #4794 added setuptools as a dependency and #4501 added pre-commit hooks for flake8 compliance.
- Leo Fang made numba.dummyarray.Array iterable in #4629.
- Marc Garcia fixed the numba.jit parameter name signature_or_function in #4703.
- Marcelo Duarte Trevisani patched the llvmlite requirement to >=0.30.0 in #4725.
- Matt Cooper fixed a long-standing CI problem in #4737 by removing maxParallel from Azure Pipelines.
- Matti Picus fixed an issue with collections.abc in #4734.
- Rob Ennis patched a bug in np.interp float32 handling in #4911.
- VDimir fixed a bug in array transposition layouts in #4777 and re-enabled and fixed some idle tests in #4776.
- Vyacheslav Smirnov enabled support for str.istitle() in #4645.

General Enhancements:

- PR #4432: Bounds checking
- PR #4501: Add pre-commit hooks
- PR #4536: Handle kw args in inliner when callee is a function
- PR #4599: Permits closures to become functions, enables map(), filter()
- PR #4611: Implement method title() for unicode based on Cpython
- PR #4645: Enable support for istitle() method for unicode string
- PR #4651: Implement str.lower() and str.islower()
- PR #4652: Implement str.rfind()
- PR #4695: Refactor overload* and support jit_options and inline
- PR #4707: Added support for slice.indices
- PR #4715: Add bool overload for several types
- PR #4729: Allow array construction with mixed type shape tuples
- PR #4755: Python3.8 support
- PR #4756: Add parfor support for ndarray.fill.
- PR #4768: Update typeconv error message to ask for sys.executable.
- PR #4770: Update np.arange implementation with @overload
- PR #4779: Make slice constructor more similar to python.
- PR #4780: Implement np.gcd and math.gcd
- PR #4794: Add setuptools as a dependency
- PR #4802: put git hash into build string
- PR #4803: Better compiler error messages for improperly used reduction variables.
- PR #4817: Typed list implement and expose allocation
- PR #4818: Typed list faster copy
- PR #4835: Implement str.isspace() based on CPython
- PR #4836: Implement str.isprintable() based on CPython
- PR #4837: Implement str.isidentifier() based on CPython
- PR #4839: Implement str.isalnum() based on CPython
- PR #4840: Implement str.isalpha() based on CPython
- PR #4841: Implement str.rpartition() based on CPython
- PR #4847: Implement str.isascii() based on CPython
- PR #4851: Add graphviz output for FunctionIR
- PR #4854: Python3.8 looplifting
- PR #4858: Implement str.expandtabs() based on CPython
- PR #4860: Implement str.index() based on CPython
- PR #4861: Implement str.rindex() based on CPython
- PR #4866: Support params start/end for str.find()
- PR #4874: Bump to llvmlite 0.31
- PR #4896: Specialise arange dtype on arch + python version.
- PR #4902: basic support for try except
- PR #4904: Implement np.lcm
- PR #4910: loop canonicalisation and type aware tuple unroller/loop body versioning passes
- PR #4961: Update hash(tuple) for Python 3.8.
- PR #4977: Implement sort/sorted with key.
- PR #4987: Add is_internal property to all Type classes.

Fixes:

- PR #4090: Update to LLVM8 memset/memcpy intrinsic
- PR #4582: Convert sub to add and div to mul when doing the reduction across the per-thread reduction array.
- PR #4648: Handle 0 correctly as slice parameter.
- PR #4660: Remove multiply defined variables from all blocks’ equivalence sets.
- PR #4672: Fix pickling of dufunc
- PR #4710: BUG: Comparison for literal
- PR #4718: Change get_call_table to support intermediate Vars.
- PR #4725: Requires llvmlite >=0.30.0
- PR #4734: prefer to import from collections.abc
- PR #4736: fix flake8 errors
- PR #4776: Fix and enable idle tests from test_array_manipulation
- PR #4777: Fix transpose output array layout
- PR #4782: Fix issue with SVML (and knock-on function resolution effects).
- PR #4785: Treat 0d arrays like scalars.
- PR #4787: fix missing incref on flags
- PR #4789: fix typos in numba/targets/base.py
- PR #4791: fix typos
- PR #4811: fix spelling in now-failing tests
- PR #4852: windowing test should check equality only up to double precision errors
- PR #4881: fix refining list by using extend on an iterator
- PR #4882: Fix return type in arange and zero step size handling.
- PR #4885: suppress spurious RuntimeWarning about ufunc sizes
- PR #4891: skip the xfail test for now. Py3.8 CFG refactor seems to have changed the test case
- PR #4892: regex needs to accept singular form of “argument”
- PR #4901: fix typed list equals
- PR #4909: Fix some spelling typos
- PR #4911: np.interp bugfix for float32 handling
- PR #4920: fix creating list with JIT disabled
- PR #4921: fix creating dict with JIT disabled
- PR #4935: Better handling of prange with multiple reductions on the same variable.
- PR #4946: Improve the error message for raise <string>.
- PR #4955: Move overload of literal_unroll to avoid circular dependency that breaks Python 2.7
- PR #4962: Fix test error on windows
- PR #4973: Fixes a bug in the relabelling logic in literal_unroll.
- PR #4978: Fix overload_method problem with stararg
- PR #4981: Add ind_to_const to enable fewer equivalence classes.
- PR #4991: Continuation of #4588 (Let dead code removal handle removing more of the unneeded code after prange conversion to parfor)
- PR #4994: Remove xfail for test which has since had underlying issue fixed.
- PR #5018: Fix #5011.
- PR #5019: skip pycc test on Python 3.8 + macOS because of distutils issue

CUDA Enhancements/Fixes:

- PR #4629: Make numba.dummyarray.Array iterable
- PR #4675: Bump cuda array interface to version 2
- PR #4741: Update choosing the “CUDA_PATH” for windows
- PR #4838: Permit ravel(‘A’) for contig device arrays in CUDA target
- PR #4931: Add physical limits for CC 7.0 / 7.5 to autotune
- PR #4934: Fix fails in TestCudaWarpOperations
- PR #4938: Improve errors / warnings for cuda vectorize decorator

Documentation Updates:

- PR #4418: Directed graph task roadmap
- PR #4578: Clarify numba ufunc supported features
- PR #4655: fix sphinx build warning
- PR #4667: Fix typo on urem implementation
- PR #4669: Add link to ParallelAccelerator paper.
- PR #4703: Fix numba.jit parameter name signature_or_function
- PR #4862: Addition of PyCon India 2019 talk on Numba
- PR #4947: Document jitclass with numba.typed use.
- PR #4958: Add docs for try..except
- PR #4993: Update deprecations for 0.47

CI Updates:

- PR #4737: remove maxParallel from Azure Pipelines
- PR #4767: pin to 2.7.16 for py27 on osx
- PR #4781: WIP/runtest cf pytest

Authors:

- Aaron Meurer
- Ankit Mahato
- Brian Wignall
- Denis Smirnov
- Ehsan Totoni (core dev)
- Elena Totmenina
- Eric Larson
- Ethan Pronovost
- Giovanni Cavallin
- Graham Markall
- Guilherme Leobas
- Isaac Virshup
- James Bourbeau
- Leo Fang
- Marc Garcia
- Marcelo Duarte Trevisani
- Matt Cooper
- Matti Picus
- Rob Ennis
- Rujal Desai
- Siu Kwan Lam (core dev)
- Stan Seibert (core dev)
- Stuart Archibald (core dev)
- Todd A. Anderson (core dev)
- VDimir
- Valentin Haenel (core dev)
- Vyacheslav Smirnov

Version 0.46.0

This release significantly reworked one of the main parts of Numba, the compiler pipeline, to make it more extensible and easier to use. The purpose of this was to continue enhancing Numba’s ability for use as a compiler toolkit.
In a similar vein, Numba now has an extension registration mechanism to allow other Numba-using projects to automatically have their Numba JIT compilable functions discovered. There were also a number of other related compiler toolkit enhancements added, along with some more NumPy features and a lot of bug fixes.

This release has updated the CUDA Array Interface specification to version 2, which clarifies the strides attribute for C-contiguous arrays and specifies the treatment for zero-size arrays. The implementation in Numba has been changed and may affect downstream packages relying on the old behavior (see issue #4661).

Enhancements from user contributed PRs (with thanks!):

- Aaron Meurer fixed some Python issues in the code base in #4345 and #4341.
- Ashwin Srinath fixed a CUDA performance bug via #4576.
- Ethan Pronovost added support for triangular indices functions in #4601 (the NumPy functions tril_indices, tril_indices_from, triu_indices, and triu_indices_from).
- Gerald Dalley fixed a tear down race occurring in Python 2.
- Gregory R. Lee fixed the use of deprecated inspect.getargspec.
- Guilherme Leobas contributed five PRs, adding support for np.append and np.count_nonzero in #4518 and #4386. The typed List was fixed to accept unsigned integers in #4510. #4463 made a fix to NamedTuple internals and #4397 updated the docs for np.sum.
- James Bourbeau added a new feature to permit the automatic application of the jit decorator to a whole module in #4331. Also some small fixes to the docs and the code base were made in #4447 and #4433, and a fix to an inplace array operation in #4228.
- Jim Crist fixed a bug in the rendering of patched errors in #4464.
- Leo Fang updated the CUDA Array Interface contract in #4609.
- Pearu Peterson added support for Unicode based NumPy arrays in #4425.
- Peter Andreas Entschev fixed a CUDA concurrency bug in #4581.
- Lucio Fernandez-Arjona extended Numba’s np.sum support to now accept the dtype kwarg in #4472.
- Pedro A. Morales Maries added support for np.cross in #4128 and also added the necessary extension numba.numpy_extensions.cross2d in #4595.
- David Hoese, Eric Firing, Joshua Adelman, and Juan Nunez-Iglesias all made documentation fixes in #4565, #4482, #4455, #4375 respectively.
- Vyacheslav Smirnov and Rujal Desai enabled support for count() on unicode strings in #4606.

General Enhancements:

- PR #4113: Add rewrite for semantic constants.
- PR #4128: Add np.cross support
- PR #4162: Make IR comparable and legalize it.
- PR #4208: R&D inlining, jitted and overloaded.
- PR #4331: Automatic JIT of called functions
- PR #4353: Inspection tool to check what numba supports
- PR #4386: Implement np.count_nonzero
- PR #4425: Unicode array support
- PR #4427: Entrypoints for numba extensions
- PR #4467: Literal dispatch
- PR #4472: Allow dtype input argument in np.sum
- PR #4513: New compiler.
- PR #4518: add support for np.append
- PR #4554: Refactor NRT C-API
- PR #4556: 0.46 scheduled deprecations
- PR #4567: Add env var to disable performance warnings.
- PR #4568: add np.array_equal support
- PR #4595: Implement numba.cross2d
- PR #4601: Add triangular indices functions
- PR #4606: Enable support for count() method for unicode string

Fixes:

- PR #4228: Fix inplace operator error for arrays
- PR #4282: Detect and raise unsupported on generator expressions
- PR #4305: Don’t allow the allocation of mutable objects written into a container to be hoisted.
- PR #4311: Avoid deprecated use of inspect.getargspec
- PR #4328: Replace GC macro with function call
- PR #4330: Loosen up typed container casting checks
- PR #4341: Fix some coding lines at the top of some files (utf8 -> utf-8)
- PR #4345: Replace “import *” with explicit imports in numba/types
- PR #4346: Fix incorrect alg in isupper for ascii strings.
- PR #4349: test using jitclass in typed-list
- PR #4361: Add allocation hoisting info to LICM section at diagnostic L4
- PR #4366: Offset search box to avoid wrapping on some pages with Safari. Fixes #4365.
- PR #4372: Replace all “except BaseException” with “except Exception”.
- PR #4407: Restore the “free” conda channel for NumPy 1.10 support.
- PR #4408: Add lowering for constant bytes.
- PR #4409: Add exception chaining for better error context
- PR #4411: Name of type should not contain user facing description for debug.
- PR #4412: Fix #4387. Limit the number of return types for recursive functions
- PR #4426: Fixed two module teardown races in py2.
- PR #4431: Fix and test numpy.random.random_sample(n) for np117
- PR #4463: NamedTuple - Raises an error on non-iterable elements
- PR #4464: Add a newline in patched errors
- PR #4474: Fix liveness for remove dead of parfors (and other IR extensions)
- PR #4510: Make List.__getitem__ accept unsigned parameters
- PR #4512: Raise specific error at typing time for iteration on >1D array.
- PR #4532: Fix static_getitem with Literal type as index
- PR #4547: Update to inliner cost model information.
- PR #4557: Use specific random number seed when generating arbitrary test data
- PR #4559: Adjust test timeouts
- PR #4564: Skip unicode array tests on ppc64le that trigger an LLVM bug
- PR #4621: Fix packaging issue due to missing numba/cext
- PR #4623: Fix issue 4520 due to storage model mismatch
- PR #4644: Updates for llvmlite 0.30.0

CUDA Enhancements/Fixes:

- PR #4410: Fix #4111. cudasim mishandling recarray
- PR #4576: Replace use of np.prod with functools.reduce for computing size from shape
- PR #4581: Prevent taking the GIL in ForAll
- PR #4592: Fix #4589. Just pass NULL for b2d_func for constant dynamic sharedmem
- PR #4609: Update CUDA Array Interface & Enforce Numba compliance
- PR #4619: Implement math.{degrees, radians} for the CUDA target.
- PR #4675: Bump cuda array interface to version 2

Documentation Updates:

- PR #4317: Add docs for ARMv8/AArch64
- PR #4318: Add supported platforms to the docs. Closes #4316
- PR #4375: Add docstrings to inspect methods
- PR #4388: Update Python 2.7 EOL statement
- PR #4397: Add note about np.sum
- PR #4447: Minor parallel performance tips edits
- PR #4455: Clarify docs for typed dict with regard to arrays
- PR #4482: Fix example in guvectorize docstring.
- PR #4541: fix two typos in architecture.rst
- PR #4548: Document numba.extending.intrinsic and inlining.
- PR #4565: Fix typo in jit-compilation docs
- PR #4607: add dependency list to docs
- PR #4614: Add documentation for implementing new compiler passes.

CI Updates:

- PR #4415: Make 32bit incremental builds on linux not use free channel
- PR #4433: Removes stale azure comment
- PR #4493: Fix Overload Inliner wrt CUDA Intrinsics
- PR #4593: Enable Azure CI batching

Contributors:

- Aaron Meurer
- Ashwin Srinath
- David Hoese
- Ehsan Totoni (core dev)
- Eric Firing
- Ethan Pronovost
- Gerald Dalley
- Gregory R. Lee
- Guilherme Leobas
- James Bourbeau
- Jim Crist
- Joshua Adelman
- Juan Nunez-Iglesias
- Leo Fang
- Lucio Fernandez-Arjona
- Pearu Peterson
- Pedro A. Morales Marie
- Peter Andreas Entschev
- Rujal Desai
- Siu Kwan Lam (core dev)
- Stan Seibert (core dev)
- Stuart Archibald (core dev)
- Todd A. Anderson (core dev)
- Valentin Haenel (core dev)
- Vyacheslav Smirnov

Version 0.45.1

This patch release addresses some regressions reported in the 0.45.0 release and adds support for NumPy 1.17:

- PR #4325: accept scalar/0d-arrays
- PR #4338: Fix #4299. Parfors reduction vars not deleted.
- PR #4350: Use process level locks for fork() only.
- PR #4354: Try to fix #4352.
- PR #4357: Fix np1.17 isnan, isinf, isfinite ufuncs
- PR #4363: Fix np.interp for np1.17 nan handling
- PR #4371: Fix np1.17 random function non-aliasing

Contributors:

- Siu Kwan Lam (core dev)
- Stuart Archibald (core dev)
- Valentin Haenel (core dev)

Version 0.45.0

In this release, Numba gained an experimental numba.typed.List container as a future replacement of the reflected list. In addition, functions decorated with parallel=True can now be cached to reduce compilation overhead associated with the auto-parallelization.

Enhancements from user contributed PRs (with thanks!):

- James Bourbeau added the Numba version to reportable error messages in #4227, added the signature parameter to inspect_types in #4200, improved the docstring of normalize_signature in #4205, and fixed #3658 by adding reference counting to register_dispatcher in #4254.
- Guilherme Leobas implemented the dominator tree and dominance frontier algorithms in #4216 and #4149, respectively.
- Nick White fixed the issue with round in the CUDA target in #4137.
- Joshua Adelman added support for determining if a value is in a range (i.e. x in range(...)) in #4129, and added windowing functions (np.bartlett, np.hamming, np.blackman, np.hanning, np.kaiser) from NumPy in #4076.
- Lucio Fernandez-Arjona added support for np.select in #4077.
- Rob Ennis added support for np.flatnonzero in #4157.
- Keith Kraus extended the __cuda_array_interface__ with an optional mask attribute in #4199.
- Gregory R. Lee replaced deprecated use of inspect.getargspec in #4311.

General Enhancements:

- PR #4328: Replace GC macro with function call
- PR #4311: Avoid deprecated use of inspect.getargspec
- PR #4296: Slacken window function testing tol on ppc64le
- PR #4254: Add reference counting to register_dispatcher
- PR #4239: Support len() of multi-dim arrays in array analysis
- PR #4234: Raise informative error for np.kron array order
- PR #4232: Add unicodetype db, low level str functions and examples.
- PR #4229: Make hashing cacheable
- PR #4227: Include numba version in reportable error message
- PR #4216: Add dominator tree
- PR #4200: Add signature parameter to inspect_types
- PR #4196: Catch missing imports of internal functions.
- PR #4180: Update use of unlowerable global message.
- PR #4166: Add tests for PR #4149
- PR #4157: Support for np.flatnonzero
- PR #4149: Implement dominance frontier for SSA for the Numba IR
- PR #4148: Call branch pruning in inline_closure_call()
- PR #4132: Reduce usage of inttoptr
- PR #4129: Support contains for range
- PR #4112: better error messages for np.transpose and tuples
- PR #4110: Add range attrs, start, stop, step
- PR #4077: Add np select
- PR #4076: Add numpy windowing functions support (np.bartlett, np.hamming, np.blackman, np.hanning, np.kaiser)
- PR #4095: Support ir.Global/FreeVar in find_const()
- PR #3691: Make TypingError abort compiling earlier
- PR #3646: Log internal errors encountered in typeinfer

Fixes:

- PR #4303: Work around scipy bug 10206
- PR #4302: Fix flake8 issue on master
- PR #4301: Fix integer literal bug in np.select impl
- PR #4291: Fix pickling of jitclass type
- PR #4262: Resolves #4251 - Fix bug in reshape analysis.
- PR #4233: Fixes issue revealed by #4215
- PR #4224: Fix #4223. Looplifting error due to StaticSetItem in objectmode
- PR #4222: Fix bad python path.
- PR #4178: Fix unary operator overload, check with unicode impl
- PR #4173: Fix return type in np.bincount with weights
- PR #4153: Fix slice shape assignment in array analysis
- PR #4152: fix status check in dict lookup
- PR #4145: Use callable instead of checking __module__
- PR #4118: Fix inline assembly support on CPU.
- PR #4088: Resolves #4075 - parfors array_analysis bug.
- PR #4085: Resolves #3314 - parfors array_analysis bug with reshape.
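One of the 0.45 additions above, PR #4129 (contains for range), lets expressions like x in range(start, stop, step) compile in nopython mode. Semantically this is the constant-time membership test that Python's range already performs, which can be sketched in plain Python as follows (a hypothetical helper for illustration, not Numba's actual implementation):

```python
def range_contains(x, start, stop, step):
    # O(1) equivalent of `x in range(start, stop, step)` for step != 0:
    # x must lie within the interval covered by the range, and must be
    # reachable from start in a whole number of steps.
    if step > 0:
        in_bounds = start <= x < stop
    else:
        in_bounds = stop < x <= start
    return in_bounds and (x - start) % step == 0
```

For example, `range_contains(9, 0, 10, 3)` is `True` (since 9 is in `range(0, 10, 3)`) while `range_contains(7, 0, 10, 3)` is `False`.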
CUDA Enhancements/Fixes:

- PR #4199: Extend __cuda_array_interface__ with optional mask attribute, bump version to 1
- PR #4137: CUDA - Fix round Builtin
- PR #4114: Support 3rd party activated CUDA context

Documentation Updates:

- PR #4317: Add docs for ARMv8/AArch64
- PR #4318: Add supported platforms to the docs. Closes #4316
- PR #4295: Alter deprecation schedules
- PR #4253: fix typo in pysupported docs
- PR #4252: fix typo on repomap
- PR #4241: remove unused import
- PR #4240: fix typo in jitclass docs
- PR #4205: Update return value order in normalize_signature docstring
- PR #4237: Update doc links to point to latest not dev docs.
- PR #4197: hyperlink repomap
- PR #4170: Clarify docs on accumulating into arrays in prange
- PR #4147: fix docstring for DictType iterables
- PR #3951: A guide to overloading

CI Updates:

- PR #4300: AArch64 has no faulthandler package
- PR #4273: pin to MKL BLAS for testing to get consistent results
- PR #4209: Revert previous network tol patch and try with conda config
- PR #4138: Remove tbb before Azure test only on Python 3, since it was already removed for Python 2

Contributors:

- Ehsan Totoni (core dev)
- Gregory R. Lee
- Guilherme Leobas
- James Bourbeau
- Joshua L. Adelman
- Keith Kraus
- Lucio Fernandez-Arjona
- Nick White
- Rob Ennis
- Siu Kwan Lam (core dev)
- Stan Seibert (core dev)
- Stuart Archibald (core dev)
- Todd A. Anderson (core dev)
- Valentin Haenel (core dev)

Version 0.44.1

This patch release addresses some regressions reported in the 0.44.0 release:

- PR #4165: Fix #4164 issue with NUMBAPRO_NVVM.
- PR #4172: Abandon branch pruning if an arg name is redefined. (Fixes #4163)
- PR #4183: Fix #4156. Problem with defining in-loop variables.

Version 0.44.0

IMPORTANT: In this release a few significant deprecations (and some less significant ones) are being made; users are encouraged to read the related documentation.
General enhancements in this release include:

- Numba is backed by LLVM 8 on all platforms apart from ppc64le, which, due to bugs, remains on the LLVM 7.x series.
- Numba’s dictionary support now includes type inference for keys and values.
- The .view() method now works for NumPy scalar types.
- Newly supported NumPy functions added: np.delete, np.nanquantile, np.quantile, np.repeat, np.shape.

In addition, considerable effort has been made to fix some long-standing bugs and a large number of other bugs; the “Fixes” section is very large this time!

Enhancements from user contributed PRs (with thanks!):

- Max Bolingbroke added support for the selective use of fastmath flags in #3847.
- Rob Ennis made min() and max() work on iterables in #3820 and added np.quantile and np.nanquantile in #3899.
- Sergey Shalnov added numerous unicode string related features: zfill in #3978, ljust in #4001, rjust and center in #4044, and strip, lstrip and rstrip in #4048.
- Guilherme Leobas added support for np.delete in #3890.
- Christoph Deil exposed the Numba CLI via python -m numba in #4066 and made numerous documentation fixes.
- Leo Schwarz wrote the bulk of the code for jitclass default constructor arguments in #3852.
- Nick White enhanced the CUDA backend to use min/max PTX instructions where possible in #4054.
- Lucio Fernandez-Arjona implemented the unicode string __mul__ function in #3952.
- Dimitri Vorona wrote the bulk of the code to implement getitem and setitem for jitclass in #3861.

General Enhancements:

- PR #3820: Min max on iterables
- PR #3842: Unicode type iteration
- PR #3847: Allow fine-grained control of fastmath flags to partially address #2923
- PR #3852: Continuation of PR #2894
- PR #3861: Continuation of PR #3730
- PR #3890: Add support for np.delete
- PR #3899: Support for np.quantile and np.nanquantile
- PR #3900: Fix 3457 :: Implements np.repeat
- PR #3928: Add .view() method for NumPy scalars
- PR #3939: Update icc_rt clone recipe.
- PR #3952: __mul__ for strings, initial implementation and tests
- PR #3956: Type-inferred dictionary
- PR #3959: Create a view for string slicing to avoid extra allocations
- PR #3978: zfill operation implementation
- PR #4001: ljust operation implementation
- PR #4010: Support dict() and {}
- PR #4022: Support for llvm 8
- PR #4034: Make type.Optional str more representative
- PR #4041: Deprecation warnings
- PR #4044: rjust and center operations implementation
- PR #4048: strip, lstrip and rstrip operations implementation
- PR #4066: Expose numba CLI via python -m numba
- PR #4081: Impl np.shape and support function for asarray.
- PR #4091: Deprecate the use of iternext_impl without RefType

CUDA Enhancements/Fixes:

- PR #3933: Adds .nbytes property to CUDA device array objects.
- PR #4011: Add .inspect_ptx() to cuda device function
- PR #4054: CUDA: Use min/max PTX Instructions
- PR #4096: Update env-vars for CUDA libraries lookup

Documentation Updates:

- PR #3867: Code repository map
- PR #3918: adding Joris’ Fosdem 2019 presentation
- PR #3926: order talks on applications of Numba by date
- PR #3943: fix two small typos in vectorize docs
- PR #3944: Fixup jitclass docs
- PR #3990: mention preprint repo in FAQ. Fixes #3981
- PR #4012: Correct runtests command in contributing.rst
- PR #4043: fix typo
- PR #4047: Ambiguous Documentation fix for guvectorize.
- PR #4060: Remove remaining mentions of autojit in docs
- PR #4063: Fix annotate example in docstring
- PR #4065: Add FAQ entry explaining Numba project name
- PR #4079: Add Documentation for atomicity of typed.Dict
- PR #4105: Remove info about CUDA ENVVAR potential replacement

Fixes:

- PR #3719: Resolves issue #3528. Adds support for slices when not using parallel=True.
- PR #3727: Remove dels for known dead vars.
- PR #3845: Fix mutable flag transmission in .astype
- PR #3853: Fix some minor issues in the C source.
- PR #3862: Correct boolean reinterpretation of data
- PR #3863: Comments out the appveyor badge
- PR #3869: fixes flake8 after merge
- PR #3871: Add assert to ir.py to help enforce correct structuring
- PR #3881: fix preparfor dtype transform for datetime64
- PR #3884: Prevent mutation of objmode fallback IR.
- PR #3885: Updates for llvmlite 0.29
- PR #3886: Use safe_load from pyyaml.
- PR #3887: Add tolerance to network errors by permitting conda to retry
- PR #3893: Fix casting in namedtuple ctor.
- PR #3894: Fix array inliner for multiple array definition.
- PR #3905: Cherrypick #3903 to main
- PR #3920: Raise better error if unsupported jump opcode found.
- PR #3927: Apply flake8 to the numpy related files
- PR #3935: Silence DeprecationWarning
- PR #3938: Better error message for unknown opcode
- PR #3941: Fix typing of ufuncs in parfor conversion
- PR #3946: Return variable renaming dict from inline_closurecall
- PR #3962: Fix bug in alignment computation of Record.make_c_struct
- PR #3967: Fix error with pickling unicode
- PR #3964: Unicode split algo versioning
- PR #3975: Add handler for unknown locale to numba -s
- PR #3991: Permit Optionals in ufunc machinery
- PR #3995: Remove assert in type inference causing poor error message.
- PR #3996: add is_ascii flag to UnicodeType
- PR #4009: Prevent zero division error in np.linalg.cond
- PR #4014: Resolves #4007.
- PR #4021: Add a more specific error message for invalid write to a global.
- PR #4023: Fix handling of titles in record dtype
- PR #4024: Do a check if a call is const before saying that an object is multiply defined.
- PR #4027: Fix issue #4020. Turn off no_cpython_wrapper flag when compiling for…
- PR #4033: [WIP] Fixing wrong dtype of array inside reflected list #4028
- PR #4061: Change IPython cache dir name to numba_cache
- PR #4067: Delete examples/notebooks/LinearRegr.py
- PR #4070: Catch writes to global typed.Dict and raise.
- PR #4078: Check tuple length
- PR #4084: Fix missing incref on optional return None
- PR #4089: Make the warnings fixer flush work for warning comparing on type.
- PR #4094: Fix function definition finding logic for commented def
- PR #4100: Fix alignment check on 32-bit.
- PR #4104: Use PEP 508 compliant env markers for install deps

Contributors:

- Benjamin Zaitlen
- Christoph Deil
- David Hirschfeld
- Dimitri Vorona
- Ehsan Totoni (core dev)
- Guilherme Leobas
- Leo Schwarz
- Lucio Fernandez-Arjona
- Max Bolingbroke
- NanduTej
- Nick White
- Ravi Teja Gutta
- Rob Ennis
- Sergey Shalnov
- Siu Kwan Lam (core dev)
- Stan Seibert (core dev)
- Stuart Archibald (core dev)
- Todd A. Anderson (core dev)
- Valentin Haenel (core dev)

Version 0.43.1

This is a bugfix release that provides minor changes to fix: a bug in branch pruning, bugs in np.interp functionality, and also fully accommodate the NumPy 1.16 release series.

- PR #3826: NumPy 1.16 support
- PR #3850: Refactor np.interp
- PR #3883: Rewrite pruned conditionals as their evaluated constants.

Contributors:

- Rob Ennis
- Siu Kwan Lam (core dev)
- Stuart Archibald (core dev)

Version 0.43.0

In this release, the major new features are:

- Initial support for statically typed dictionaries
- Improvements to hash() to match Python 3 behavior
- Support for the heapq module
- Ability to pass C structs to Numba
- More NumPy functions: asarray, trapz, roll, ptp, extract

NOTE: The vast majority of NumPy 1.16 behaviour is supported; however, datetime and timedelta use involving NaT matches the behaviour present in earlier releases. The ufunc suite has not been extended to accommodate the two new time computation related additions present in NumPy 1.16. In addition, the functions ediff1d and interp have known minor issues in replicating outputs exactly when NaNs occur in certain input patterns.
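The heapq support called out above (PR #3731) means heap-based code like the following can now target nopython mode. The sketch below is plain Python for illustration; the exact subset of heapq entry points supported in a given Numba release should be checked against its documentation, and the function name here is hypothetical:

```python
import heapq

def k_smallest(values, k):
    # Build a heap from all values, then pop the k smallest in sorted
    # order; heapify and heappop are among the heapq functions targeted
    # by the new support.
    heap = list(values)
    heapq.heapify(heap)
    return [heapq.heappop(heap) for _ in range(k)]
```

For example, `k_smallest([5, 1, 4, 2, 3], 3)` returns `[1, 2, 3]`.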
General Enhancements: - PR #3563: Support for np.roll - PR #3572: Support for np.ptp - PR #3592: Add dead branch prune before type inference. - PR #3598: Implement np.asarray() - PR #3604: Support for np.interp - PR #3607: Some simplication to lowering - PR #3612: Exact match flag in dispatcher - PR #3627: Support for np.trapz - PR #3630: np.where with broadcasting - PR #3633: Support for np.extract - PR #3657: np.max, np.min, np.nanmax, np.nanmin - support for complex dtypes - PR #3661: Access C Struct as Numpy Structured Array - PR #3678: Support for str.split and str.join - PR #3684: Support C array in C struct - PR #3696: Add intrinsic to help debug refcount - PR #3703: Implementations of type hashing. - PR #3715: Port CPython3.7 dictionary for numba internal use - PR #3716: Support inplace concat of strings - PR #3718: Add location to ConstantInferenceError exceptions. - PR #3720: improve error msg about invalid signature - PR #3731: Support for heapq - PR #3754: Updates for llvmlite 0.28 - PR #3760: Overloadable operator.setitem - PR #3775: Support overloading operator.delitem - PR #3777: Implement compiler support for dictionary - PR #3791: Implement interpreter-side interface for numba dict - PR #3799: Support refcount’ed types in numba dict CUDA Enhancements/Fixes: - PR #3713: Fix the NvvmSupportError message when CC too low - PR #3722: Fix #3705: slicing error with negative strides - PR #3755: Make cuda.to_device accept readonly host array - PR #3773: Adapt library search to accommodate multiple locations Documentation Updates: - PR #3651: fix link to berryconda in docs - PR #3668: Add Azure Pipelines build badge - PR #3749: DOC: Clarify when prange is different from range - PR #3771: fix a few typos - PR #3785: Clarify use of range as function only. - PR #3829: Add docs for typed-dict Fixes: - PR #3614: Resolve #3586 - PR #3618: Skip gdb tests on ARM. 
- PR #3643: Remove support_literals usage - PR #3645: Enforce and fix that AbstractTemplate.generic must be returning a Signature - PR #3648: Fail on @overload signature mismatch. - PR #3660: Added Ignore message to test numba.tests.test_lists.TestLists.test_mul_error - PR #3662: Replace six with numba.six - PR #3663: Removes coverage computation from travisci builds - PR #3672: Avoid leaking memory when iterating over uniform tuple - PR #3676: Fixes constant string lowering inside tuples - PR #3677: Ensure all referenced compiled functions are linked properly - PR #3692: Fix test failure due to overly strict test on floating point values. - PR #3693: Intercept failed import to help users. - PR #3694: Fix memory leak in enumerate iterator - PR #3695: Convert return of None from intrinsic implementation to dummy value - PR #3697: Fix for issue #3687 - PR #3701: Fix array.T analysis (fixes #3700) - PR #3704: Fixes for overload_method - PR #3706: Don’t push call vars recursively into nested parfors. Resolves #3686. - PR #3710: Set as non-hoistable if a mutable variable is passed to a function in a loop. Resolves #3699. - PR #3712: parallel=True to use better builtin mechanism to resolve call types. Resolves issue #3671 - PR #3725: Fix invalid removal of dead empty list - PR #3740: add uintp as a valid type to the tuple operator.getitem - PR #3758: Fix target definition update in inlining - PR #3782: Raise typing error on yield optional. - PR #3792: Fix non-module object used as the module of a function. - PR #3800: Bugfix for np.interp - PR #3808: Bump macro to include VS2014 to fix py3.5 build - PR #3809: Add debug guard to debug only C function. - PR #3816: Fix array.sum(axis) 1d input return type. - PR #3821: Replace PySys_WriteStdout with PySys_FormatStdout to ensure no truncation. - PR #3830: Getitem should not return optional type - PR #3832: Handle single string as path in find_file() Contributors: - Ehsan Totoni - Gryllos Prokopis - Jonathan J. 
Helmus - Kayla Ngan - lalitparate - luk-f-a - Matyt - Max Bolingbroke - Michael Seifert - Rob Ennis - Siu Kwan Lam - Stan Seibert - Stuart Archibald - Todd A. Anderson - Tao He - Valentin Haenel Version 0.42.1 Bugfix release to fix the incorrect hash in OSX wheel packages. No change in source code. Version 0.42.0 In this release, the major features are: - The capability to launch and attach the GDB debugger from within a jitted function. - The upgrading of LLVM to version 7.0.0. We added a draft of the project roadmap to the developer manual. The roadmap is for informational purposes only as priorities and resources may change. Here are some enhancements from contributed PRs: - #3532. Daniel Wennberg improved the cuda.{pinned, mapped} API so that the associated memory is released immediately at the exit of the context manager. - #3531. Dimitri Vorona enabled the inlining of jitclass methods. - #3516. Simon Perkins added the support for passing numpy dtypes (i.e. np.dtype("int32")) and their type constructor (i.e. np.int32) into a jitted function. - #3509. Rob Ennis added support for np.corrcoef. A regression issue (#3554, #3461) relating to making an empty slice in parallel mode is resolved by #3558. General Enhancements: - PR #3392: Launch and attach gdb directly from Numba. - PR #3437: Changes to accommodate LLVM 7.0.x - PR #3509: Support for np.corrcoef - PR #3516: Typeof dtype values - PR #3520: Fix @stencil ignoring cval if out kwarg supplied. - PR #3531: Fix jitclass method inlining and avoid unnecessary increfs - PR #3538: Avoid future C-level assertion error due to invalid visibility - PR #3543: Avoid implementation error being hidden by the try-except - PR #3544: Add long_running test flag and feature to exclude tests. - PR #3549: ParallelAccelerator caching improvements - PR #3558: Fixes array analysis for inplace binary operators. - PR #3566: Skip alignment tests on armv7l.
- PR #3567: Fix unifying literal types in namedtuple - PR #3576: Add special copy routine for NumPy out arrays - PR #3577: Fix example and docs typos for objmode context manager. - PR #3580: Use alias information when determining whether it is safe to reorder statements. - PR #3583: Use ir.unknown_loc for unknown Loc, as #3390 with tests - PR #3587: Fix llvm.memset usage changes in llvm7 - PR #3596: Fix Array Analysis for Global Namedtuples - PR #3597: Warn users if threading backend init unsafe. - PR #3605: Add guard for writing to read only arrays from ufunc calls - PR #3606: Improve the accuracy of error message wording for undefined type. - PR #3611: gdb test guard needs to ack ptrace permissions - PR #3616: Skip gdb tests on ARM. CUDA Enhancements: - PR #3532: Unregister temporarily pinned host arrays at once - PR #3552: Handle broadcast arrays correctly in host->device transfer. - PR #3578: Align cuda and cuda simulator kwarg names. Documentation Updates: - PR #3545: Fix @njit description in 5 min guide - PR #3570: Minor documentation fixes for numba.cuda - PR #3581: Fixing minor typo in reference/types.rst - PR #3594: Changing @stencil docs to correctly reflect func_or_mode param - PR #3617: Draft roadmap as of Dec 2018 Contributors: - Aaron Critchley - Daniel Wennberg - Dimitri Vorona - Dominik Stańczak - Ehsan Totoni (core dev) - Iskander Sharipov - Rob Ennis - Simon Muller - Simon Perkins - Siu Kwan Lam (core dev) - Stan Seibert (core dev) - Stuart Archibald (core dev) - Todd A.
Anderson (core dev) Version 0.41.0¶ This release adds the following major features: - Diagnostics showing the optimizations done by ParallelAccelerator - Support for profiling Numba-compiled functions in Intel VTune - Additional NumPy functions: partition, nancumsum, nancumprod, ediff1d, cov, conj, conjugate, tri, tril, triu - Initial support for Python 3 Unicode strings General Enhancements: - PR #1968: armv7 support - PR #2983: invert mapping b/w binop operators and the operator module #2297 - PR #3160: First attempt at parallel diagnostics - PR #3307: Adding NUMBA_ENABLE_PROFILING envvar, enabling jit event - PR #3320: Support for np.partition - PR #3324: Support for np.nancumsum and np.nancumprod - PR #3325: Add location information to exceptions. - PR #3337: Support for np.ediff1d - PR #3345: Support for np.cov - PR #3348: Support user pipeline class in with lifting - PR #3363: string support - PR #3373: Improve error message for empty imprecise lists. - PR #3375: Enable overload(operator.getitem) - PR #3402: Support negative indexing in tuple. - PR #3414: Refactor Const type - PR #3416: Optimized usage of alloca out of the loop - PR #3424: Updates for llvmlite 0.26 - PR #3462: Add support for np.conj/np.conjugate. - PR #3480: np.tri, np.tril, np.triu - default optional args - PR #3481: Permit dtype argument as sole kwarg in np.eye CUDA Enhancements: - PR #3399: Add max_registers Option to cuda.jit Continuous Integration / Testing: - PR #3303: CI with Azure Pipelines - PR #3309: Workaround race condition with apt - PR #3371: Fix issues with Azure Pipelines - PR #3362: Fix #3360: RuntimeWarning: ‘numba.runtests’ found in sys.modules - PR #3374: Disable openmp in wheel building - PR #3404: Azure Pipelines templates - PR #3419: Fix cuda tests and error reporting in test discovery - PR #3491: Prevent faulthandler installation on armv7l - PR #3493: Fix CUDA test that used negative indexing behaviour that’s fixed. 
- PR #3495: Start Flake8 checking of Numba source Fixes: - PR #2950: Fix dispatcher to only consider contiguous-ness. - PR #3124: Fix 3119, raise for 0d arrays in reductions - PR #3228: Reduce redundant module linking - PR #3329: Fix AOT on windows. - PR #3335: Fix memory management of __cuda_array_interface__ views. - PR #3340: Fix typo in error name. - PR #3365: Fix the default unboxing logic - PR #3367: Allow non-global reference to objmode() context-manager - PR #3381: Fix global reference in objmode for dynamically created function - PR #3382: CUDA_ERROR_MISALIGNED_ADDRESS Using Multiple Const Arrays - PR #3384: Correctly handle very old versions of colorama - PR #3394: Add 32bit package guard for non-32bit installs - PR #3397: Fix with-objmode warning - PR #3403 Fix label offset in call inline after parfor pass - PR #3429: Fixes raising of user defined exceptions for exec(<string>). - PR #3432: Fix error due to function naming in CI in py2.7 - PR #3444: Fixed TBB’s single thread execution and test added for #3440 - PR #3449: Allow matching non-array objects in find_callname() - PR #3455: Change getiter and iternext to not be pure. Resolves #3425 - PR #3467: Make ir.UndefinedType singleton class. - PR #3478: Fix np.random.shuffle sideeffect - PR #3487: Raise unsupported for kwargs given to print() - PR #3488: Remove dead script. - PR #3498: Fix stencil support for boolean as return type - PR #3511: Fix handling make_function literals (regression of #3414) - PR #3514: Add missing unicode != unicode - PR #3527: Fix complex math sqrt implementation for large -ve values - PR #3530: This adds arg an check for the pattern supplied to Parfors. - PR #3536: Sets list dtor linkage to linkonce_odr to fix visibility in AOT Documentation Updates: - PR #3316: Update 0.40 changelog with additional PRs - PR #3318: Tweak spacing to avoid search box wrapping onto second line - PR #3321: Add note about memory leaks with exceptions to docs. 
Fixes #3263 - PR #3322: Add FAQ on CUDA + fork issue. Fixes #3315. - PR #3343: Update docs for argsort, kind kwarg partially supported. - PR #3357: Added mention of njit in 5minguide.rst - PR #3434: Fix parallel reduction example in docs. - PR #3452: Fix broken link and mark up problem. - PR #3484: Size Numba logo in docs in em units. Fixes #3313 - PR #3502: just two typos - PR #3506: Document string support - PR #3513: Documentation for parallel diagnostics. - PR #3526: Fix 5 min guide with respect to @njit decl Contributors: - Alex Ford - Andreas Sodeur - Anton Malakhov - Daniel Stender - Ehsan Totoni (core dev) - Henry Schreiner - Marcel Bargull - Matt Cooper - Nick White - Nicolas Hug - rjenc29 - Siu Kwan Lam (core dev) - Stan Seibert (core dev) - Stuart Archibald (core dev) - Todd A. Anderson (core dev) Version 0.40.1¶ This is a PyPI-only patch release to ensure that PyPI wheels can enable the TBB threading backend, and to disable the OpenMP backend in the wheels. Limitations of manylinux1 and variation in user environments can cause segfaults when OpenMP is enabled on wheel builds. Note that this release has no functional changes for users who obtained Numba 0.40.0 via conda. Patches: - PR #3338: Accidentally left Anton off contributor list for 0.40.0 - PR #3374: Disable OpenMP in wheel building - PR #3376: Update 0.40.1 changelog and docs on OpenMP backend Version 0.40.0¶ This release adds a number of major features: - A new GPU backend: kernels for AMD GPUs can now be compiled using the ROCm driver on Linux. - The thread pool implementation used by Numba for automatic multithreading is configurable to use TBB, OpenMP, or the old “workqueue” implementation. (TBB is likely to become the preferred default in a future release.) - New documentation on thread and fork-safety with Numba, along with overall improvements in thread-safety. - Experimental support for executing a block of code inside a nopython mode function in object mode. 
- Parallel loops now allow arrays as reduction variables - CUDA improvements: FMA, faster float64 atomics on supporting hardware, records in const memory, and improved datetime dtype support - More NumPy functions: vander, tri, triu, tril, fill_diagonal General Enhancements: - PR #3017: Add facility to support with-contexts - PR #3033: Add support for multidimensional CFFI arrays - PR #3122: Add inliner to object mode pipeline - PR #3127: Support for reductions on arrays. - PR #3145: Support for np.fill_diagonal - PR #3151: Keep a queue of references to last N deserialized functions. Fixes #3026 - PR #3154: Support use of list() if typeable. - PR #3166: Objmode with-block - PR #3179: Updates for llvmlite 0.25 - PR #3181: Support function extension in alias analysis - PR #3189: Support literal constants in typing of object methods - PR #3190: Support passing closures as literal values in typing - PR #3199: Support inferring stencil index as constant in simple unary expressions - PR #3202: Threading layer backend refactor/rewrite/reinvention! - PR #3209: Support for np.tri, np.tril and np.triu - PR #3211: Handle unpacking in building tuple (BUILD_TUPLE_UNPACK opcode) - PR #3212: Support for np.vander - PR #3227: Add NumPy 1.15 support - PR #3272: Add MemInfo_data to runtime._nrt_python.c_helpers - PR #3273: Refactor. Removing thread-local-storage based context nesting. - PR #3278: compiler threadsafety lockdown - PR #3291: Add CPU count and CFS restrictions info to numba -s. CUDA Enhancements: - PR #3152: Use cuda driver api to get best blocksize for best occupancy - PR #3165: Add FMA intrinsic support - PR #3172: Use float64 add Atomics, Where Available - PR #3186: Support Records in CUDA Const Memory - PR #3191: CUDA: fix log size - PR #3198: Fix GPU datetime timedelta types usage - PR #3221: Support datetime/timedelta scalar argument to a CUDA kernel. - PR #3259: Add DeviceNDArray.view method to reinterpret data as a different type.
- PR #3310: Fix IPC handling of sliced cuda array. ROCm Enhancements: - PR #3023: Support for AMDGCN/ROCm. - PR #3108: Add ROC info to numba -s output. - PR #3176: Move ROC vectorize init to npyufunc - PR #3177: Add auto_synchronize support to ROC stream - PR #3178: Update ROC target documentation. - PR #3294: Add compiler lock to ROC compilation path. - PR #3280: Add wavebits property to the HSA Agent. - PR #3281: Fix ds_permute types and add tests Continuous Integration / Testing: - PR #3091: Remove old recipes, switch to test config based on env var. - PR #3094: Add higher ULP tolerance for products in complex space. - PR #3096: Set exit on error in incremental scripts - PR #3109: Add skip to test needing jinja2 if no jinja2. - PR #3125: Skip cudasim only tests - PR #3126: add slack, drop flowdock - PR #3147: Improve error message for arg type unsupported during typing. - PR #3128: Fix recipe/build for jetson tx2/ARM - PR #3167: In build script activate env before installing. - PR #3180: Add skip to broken test. - PR #3216: Fix libcuda.so loading in some container setup - PR #3224: Switch to new Gitter notification webhook URL and encrypt it - PR #3235: Add 32bit Travis CI jobs - PR #3257: This adds scipy/ipython back into windows conda test phase. Fixes: - PR #3038: Fix random integer generation to match results from NumPy. - PR #3045: Fix #3027 - Numba reassigns sys.stdout - PR #3059: Handler for known LoweringErrors. - PR #3060: Adjust attribute error for NumPy functions. - PR #3067: Abort simulator threads on exception in thread block. - PR #3079: Implement +/-(types.boolean) Fix #2624 - PR #3080: Compute np.var and np.std correctly for complex types. - PR #3088: Fix #3066 (array.dtype.type in prange) - PR #3089: Fix invalid ParallelAccelerator hoisting issue. 
- PR #3136: Fix #3135 (lowering error) - PR #3137: Fix for issue3103 (race condition detection) - PR #3142: Fix Issue #3139 (parfors reuse of reduction variable across prange blocks) - PR #3148: Remove dead array equal @infer code - PR #3153: Fix canonicalize_array_math typing for calls with kw args - PR #3156: Fixes issue with missing pygments in testing and adds guards. - PR #3168: Py37 bytes output fix. - PR #3171: Fix #3146. Fix CFUNCTYPE void* return-type handling - PR #3193: Fix setitem/getitem resolvers - PR #3222: Fix #3214. Mishandling of POP_BLOCK in while True loop. - PR #3230: Fixes liveness analysis issue in looplifting - PR #3233: Fix return type difference for 32bit ctypes.c_void_p - PR #3234: Fix types and layout for np.where. - PR #3237: Fix DeprecationWarning about imp module - PR #3241: Fix #3225. Normalize 0nd array to scalar in typing of indexing code. - PR #3256: Fix #3251: Move imports of ABCs to collections.abc for Python >= 3.3 - PR #3292: Fix issue3279. - PR #3302: Fix error due to mismatching dtype Documentation Updates: - PR #3104: Workaround for #3098 (test_optional_unpack Heisenbug) - PR #3132: Adds an ~5 minute guide to Numba. - PR #3194: Fix docs RE: np.random generator fork/thread safety - PR #3242: Page with Numba talks and tutorial links - PR #3258: Allow users to choose the type of issue they are reporting. 
- PR #3260: Fixed broken link - PR #3266: Fix cuda pointer ownership problem with user/externally allocated pointer - PR #3269: Tweak typography with CSS - PR #3270: Update FAQ for functions passed as arguments - PR #3274: Update installation instructions - PR #3275: Note pyobject and voidptr are types in docs - PR #3288: Do not need to call parallel optimizations “experimental” anymore - PR #3318: Tweak spacing to avoid search box wrapping onto second line Contributors: - Anton Malakhov - Alex Ford - Anthony Bisulco - Ehsan Totoni (core dev) - Leonard Lausen - Matthew Petroff - Nick White - Ray Donnelly - rjenc29 - Siu Kwan Lam (core dev) - Stan Seibert (core dev) - Stuart Archibald (core dev) - Stuart Reynolds - Todd A. Anderson (core dev) Version 0.39.0 Here are the highlights for the Numba 0.39.0 release. - This is the first version that supports Python 3.7. - With help from Intel, we have fixed the issues with SVML support (related issues #2938, #2998, #3006). - List has gained support for containing reference-counted types like NumPy arrays and list. Note, list still cannot hold heterogeneous types. - We have made a significant change to the internal calling-convention, which should be transparent to most users, to allow for a future feature that will permit jumping back into python-mode from a nopython-mode function. This also fixes a limitation to print that disabled its use from nopython functions that were deep in the call-stack. - For CUDA GPU support, we added a __cuda_array_interface__ following the NumPy array interface specification to allow Numba to consume externally defined device arrays. We have opened a corresponding pull request to CuPy to test out the concept and be able to use a CuPy GPU array. - The Numba dispatcher inspect_types() method now supports the kwarg pretty which, if set to True, will produce ANSI/HTML output, showing the annotated types, when invoked from ipython/jupyter-notebook respectively.
- The NumPy functions ndarray.dot, np.percentile and np.nanpercentile, and np.unique are now supported. - Numba now supports the use of a per-project configuration file to permanently set behaviours typically set via NUMBA_* family environment variables. - Support for the ppc64le architecture has been added. Enhancements: - PR #2793: Simplify and remove javascript from html_annotate templates. - PR #2840: Support list of refcounted types - PR #2902: Support for np.unique - PR #2926: Enable fence for all architecture and add developer notes - PR #2928: Making error about untyped list more informative. - PR #2930: Add configuration file and color schemes. - PR #2932: Fix encoding to ‘UTF-8’ in check_output decode. - PR #2938: Python 3.7 compat: _Py_Finalizing becomes _Py_IsFinalizing() - PR #2939: Comprehensive SVML unit test - PR #2946: Add support for ndarray.dot method and tests. - PR #2953: percentile and nanpercentile - PR #2957: Add new 3.7 opcode support. - PR #2963: Improve alias analysis to be more comprehensive - PR #2984: Support for namedtuples in array analysis - PR #2986: Fix environment propagation - PR #2990: Improve function call matching for intrinsics - PR #3002: Second pass at error rewrites (interpreter errors). - PR #3004: Add numpy.empty to the list of pure functions. - PR #3008: Augment SVML detection with llvmlite SVML patch detection. - PR #3012: Make use of the common spelling of heterogeneous/homogeneous. - PR #3032: Fix pycc ctypes test due to mismatch in calling-convention - PR #3039: Add SVML detection to Numba environment diagnostic tool. - PR #3041: This adds @needs_blas to tests that use BLAS - PR #3056: Require llvmlite>=0.24.0 CUDA Enhancements: - PR #2860: __cuda_array_interface__ - PR #2910: More CUDA intrinsics - PR #2929: Add Flag To Prevent Unneccessary D->H Copies - PR #3037: Add CUDA IPC support on non-peer-accessible devices CI Enhancements: - PR #3021: Update appveyor config. 
- PR #3040: Add fault handler to all builds - PR #3042: Add catchsegv - PR #3077: Adds optional number of processes for -m in testing Fixes: - PR #2897: Fix line position of delete statement in numba ir - PR #2905: Fix for #2862 - PR #3009: Fix optional type returning in recursive call - PR #3019: workaround and unittest for issue #3016 - PR #3035: [TESTING] Attempt delayed removal of Env - PR #3048: [WIP] Fix cuda tests failure on buildfarm - PR #3054: Make test work on 32-bit - PR #3062: Fix cuda.In freeing devary before the kernel launch - PR #3073: Workaround #3072 - PR #3076: Avoid ignored exception due to missing globals at interpreter teardown Documentation Updates: - PR #2966: Fix syntax in env var docs. - PR #2967: Fix typo in CUDA kernel layout example. - PR #2970: Fix docstring copy paste error. Contributors: The following people contributed to this release. - Anton Malakhov - Ehsan Totoni (core dev) - Julia Tatz - Matthias Bussonnier - Nick White - Ray Donnelly - Siu Kwan Lam (core dev) - Stan Seibert (core dev) - Stuart Archibald (core dev) - Todd A. Anderson (core dev) - Rik-de-Kort - rjenc29 Version 0.38.1 This is a critical bug fix release addressing a code generation issue in SVML support. The bug does not impact users using conda packages from Anaconda or Intel Python Distribution (but it does impact conda-forge). It does not impact users of pip using wheels from PyPI. This only impacts a small number of users where: - The ICC runtime (specifically libsvml) is present in the user’s environment. - The user is using an llvmlite statically linked against a version of LLVM that has not been patched with SVML support. - The platform is 64-bit. The release fixes a code generation path that could lead to the production of incorrect results under the above situation. Fixes: - PR #3007: Augment SVML detection with llvmlite SVML patch detection. Contributors: The following people contributed to this release.
- Stuart Archibald (core dev) Version 0.38.0 Following on from the bug fix focus of the last release, this release swings back towards the addition of new features and usability improvements based on community feedback. This release is comparatively large! Three key features/changes to note are: - Numba (via llvmlite) is now backed by LLVM 6.0, general vectorization is improved as a result. A significant long standing LLVM bug that was causing corruption was also found and fixed. - Further considerable improvements in vectorization are made available as Numba now supports Intel’s short vector math library (SVML). Try it out with conda install -c numba icc_rt. - CUDA 8.0 is now the minimum supported CUDA version. Other highlights include: - Bug fixes to parallel=True have enabled more vectorization opportunities when using the ParallelAccelerator technology. - Much effort has gone into improving error reporting and the general usability of Numba. This includes highlighted error messages and performance tips documentation. Try it out with conda install colorama. - A number of new NumPy functions are supported: np.convolve, np.correlate, np.reshape, np.transpose, np.permutation, np.real, np.imag, and np.searchsorted now supports the side kwarg. Further, np.argsort now supports the kind kwarg with quicksort and mergesort available. - The Numba extension API has gained the ability to operate more easily with functions from Cython modules through the use of numba.extending.get_cython_function_address to obtain function addresses for direct use in ctypes.CFUNCTYPE. - Numba now allows the passing of jitted functions (and containers of jitted functions) as arguments to other jitted functions. - The CUDA functionality has gained support for a larger selection of bit manipulation intrinsics, also SELP, and has had a number of bugs fixed.
- Initial work to support the PPC64LE platform has been added, full support is however waiting on the LLVM 6.0.1 release as it contains critical patches not present in 6.0.0. It is hoped that any remaining issues will be fixed in the next release. - The capacity for advanced users/compiler engineers to define their own compilation pipelines. Enhancements: - PR #2660: Support bools from cffi in nopython. - PR #2741: Enhance error message for undefined variables. - PR #2744: Add diagnostic error message to test suite discovery failure. - PR #2748: Added Intel SVML optimizations as opt-out choice working by default - PR #2762: Support transpose with axes arguments. - PR #2777: Add support for np.correlate and np.convolve - PR #2779: Implement np.random.permutation - PR #2801: Passing jitted functions as args - PR #2802: Support np.real() and np.imag() - PR #2807: Expose import_cython_function - PR #2821: Add kwarg ‘side’ to np.searchsorted - PR #2822: Adds stable argsort - PR #2832: Fixups for llvmlite 0.23/llvm 6 - PR #2836: Support index method on tuples - PR #2839: Support for np.transpose and np.reshape. - PR #2843: Custom pipeline - PR #2847: Replace signed array access indices in unsiged prange loop body - PR #2859: Add support for improved error reporting. - PR #2880: This adds a github issue template. - PR #2881: Build recipe to clone Intel ICC runtime. - PR #2882: Update TravisCI to test SVML - PR #2893: Add reference to the data buffer in array.ctypes object - PR #2895: Move to CUDA 8.0 Fixes: - PR #2737: Fix #2007 (part 1). Empty array handling in np.linalg. - PR #2738: Fix install_requires to allow pip getting pre-release version - PR #2740: Fix 2208. Generate better error message. - PR #2765: Fix Bit-ness - PR #2780: PowerPC reference counting memory fences - PR #2805: Fix six imports. - PR #2813: Fix #2812: gufunc scalar output bug. 
- PR #2814: Fix the build post #2727 - PR #2831: Attempt to fix #2473 - PR #2842: Fix issue with test discovery and broken CUDA drivers. - PR #2850: Add rtsys init guard and test. - PR #2852: Skip vectorization test with targets that are not x86 - PR #2856: Prevent printing to stdout in test_extending.py - PR #2864: Correct C code to prevent compiler warnings. - PR #2889: Attempt to fix #2386. - PR #2891: Removed test skipping for inspect_cfg - PR #2898: Add guard to parallel test on unsupported platforms - PR #2907: Update change log for PPC64LE LLVM dependency. - PR #2911: Move build requirement to llvmlite>=0.23.0dev0 - PR #2912: Fix random permutation test. - PR #2914: Fix MD list syntax in issue template. Documentation Updates: - PR #2739: Explicitly state default value of error_model in docstring - PR #2803: DOC: parallel vectorize requires signatures - PR #2829: Add Python 2.7 EOL plan to docs - PR #2838: Use automatic numbering syntax in list. - PR #2877: Add performance tips documentation. - PR #2883: Fix #2872: update rng doc about thread/fork-safety - PR #2908: Add missing link and ref to docs. - PR #2909: Tiny typo correction ParallelAccelerator enhancements/fixes: - PR #2727: Changes to enable vectorization in ParallelAccelerator. - PR #2816: Array analysis for transpose with arbitrary arguments - PR #2874: Fix dead code eliminator not to remove a call with side-effect - PR #2886: Fix ParallelAccelerator arrayexpr repr CUDA enhancements: - PR #2734: More Constants From cuda.h - PR #2767: Add len(..) Support to DeviceNDArray - PR #2778: Add More Device Array API Functions to CUDA Simulator - PR #2824: Add CUDA Primitives for Population Count - PR #2835: Emit selp Instructions to Avoid Branching - PR #2867: Full support for CUDA device attributes CUDA fixes: * PR #2768: Don’t Compile Code on Every Assignment * PR #2878: Fixes a Win64 issue with the test in Pr/2865 Contributors: The following people contributed to this release. 
- Abutalib Aghayev - Alex Olivas - Anton Malakhov - Dong-hee Na - Ehsan Totoni (core dev) - John Zwinck - Josh Wilson - Kelsey Jordahl - Nick White - Olexa Bilaniuk - Rik-de-Kort - Siu Kwan Lam (core dev) - Stan Seibert (core dev) - Stuart Archibald (core dev) - Thomas Arildsen - Todd A. Anderson (core dev) Version 0.37.0¶ This release focuses on bug fixing and stability but also adds a few new features including support for Numpy 1.14. The key change for Numba core was the long awaited addition of the final tranche of thread safety improvements that allow Numba to be run concurrently on multiple threads without hitting known thread safety issues inside LLVM itself. Further, a number of fixes and enhancements went into the CUDA implementation and ParallelAccelerator gained some new features and underwent some internal refactoring. Misc enhancements: - PR #2627: Remove hacks to make llvmlite threadsafe - PR #2672: Add ascontiguousarray - PR #2678: Add Gitter badge - PR #2691: Fix #2690: add intrinsic to convert array to tuple - PR #2703: Test runner feature: failed-first and last-failed - PR #2708: Patch for issue #1907 - PR #2732: Add support for array.fill Misc Fixes: - PR #2610: Fix #2606 lowering of optional.setattr - PR #2650: Remove skip for win32 cosine test - PR #2668: Fix empty_like from readonly arrays. - PR #2682: Fixes 2210, remove _DisableJitWrapper - PR #2684: Fix #2340, generator error yielding bool - PR #2693: Add travis-ci testing of NumPy 1.14, and also check on Python 2.7 - PR #2694: Avoid type inference failure due to a typing template rejection - PR #2695: Update llvmlite version dependency. - PR #2696: Fix tuple indexing codegeneration for empty tuple - PR #2698: Fix #2697 by deferring deletion in the simplify_CFG loop. - PR #2701: Small fix to avoid tempfiles being created in the current directory - PR #2725: Fix 2481, LLVM IR parsing error due to mutated IR - PR #2726: Fix #2673: incorrect fork error msg. - PR #2728: Alternative to #2620. 
Remove dead code ByteCodeInst.get. - PR #2730: Add guard for test needing SciPy/BLAS Documentation updates: - PR #2670: Update communication channels - PR #2671: Add docs about diagnosing loop vectorizer - PR #2683: Add docs on const arg requirements and on const mem alloc - PR #2722: Add docs on numpy support in cuda - PR #2724: Update doc: warning about unsupported arguments ParallelAccelerator enhancements/fixes: Parallel support for np.arange and np.linspace, also np.mean, np.std and np.var are added. This was performed as part of a general refactor and cleanup of the core ParallelAccelerator code. - PR #2674: Core pa - PR #2704: Generate Dels after parfor sequential lowering - PR #2716: Handle matching directly supported functions CUDA enhancements: - PR #2665: CUDA DeviceNDArray: Support numpy tranpose API - PR #2681: Allow Assigning to DeviceNDArrays - PR #2702: Make DummyArray do High Dimensional Reshapes - PR #2714: Use CFFI to Reuse Code CUDA fixes: - PR #2667: Fix CUDA DeviceNDArray slicing - PR #2686: Fix #2663: incorrect offset when indexing cuda array. - PR #2687: Ensure Constructed Stream Bound - PR #2706: Workaround for unexpected warp divergence due to exception raising code - PR #2707: Fix regression: cuda test submodules not loading properly in runtests - PR #2731: Use more challenging values in slice tests. - PR #2720: A quick testsuite fix to not run the new cuda testcase in the multiprocess pool Contributors: The following people contributed to this release. - Coutinho Menezes Nilo - Daniel - Ehsan Totoni - Nick White - Paul H. Liu - Siu Kwan Lam - Stan Seibert - Stuart Archibald - Todd A. Anderson Version 0.36.2 This is a bugfix release that provides minor changes to address: - PR #2645: Avoid CPython bug with exec in older 2.7.x. - PR #2652: Add support for CUDA 9. Version 0.36.1 This release continues to add new features to the work undertaken in partnership with Intel on ParallelAccelerator technology.
Other changes of note include the compilation chain being updated to use LLVM 5.0 and the production of conda packages using conda-build 3 and the new compilers that ship with it.

NOTE: A version 0.36.0 was tagged for internal use but not released.

ParallelAccelerator:

NOTE: The ParallelAccelerator technology is under active development and should be considered experimental.

New features relating to ParallelAccelerator, from work undertaken with Intel, include the addition of the @stencil decorator for ease of implementation of stencil-like computations, support for general reductions, and slice and range fusion for parallel slice/bit-array assignments. Documentation on both the use and implementation of the above has been added. Further, a new debug environment variable NUMBA_DEBUG_ARRAY_OPT_STATS is made available to give information about which operators/calls are converted to parallel for-loops.

ParallelAccelerator features:

- PR #2457: Stencil Computations in ParallelAccelerator
- PR #2548: Slice and range fusion, parallelizing bitarray and slice assignment
- PR #2516: Support general reductions in ParallelAccelerator

ParallelAccelerator fixes:

- PR #2540: Fix bug #2537
- PR #2566: Fix issue #2564.
- PR #2599: Fix nested multi-dimensional parfor type inference issue
- PR #2604: Fixes for stencil tests and cmath sin().
- PR #2605: Fixes issue #2603.

Additional features of note:

This release of Numba (and llvmlite) is updated to use LLVM version 5.0 as the compiler back end. The main change to Numba to support this was the addition of a custom symbol tracker to avoid the calls to LLVM's ExecutionEngine that were crashing when asking for non-existent symbol addresses. Further, the conda packages for this release of Numba are built using conda build version 3 and the new compilers/recipe grammar that are present in that release.
- PR #2568: Update for LLVM 5
- PR #2607: Fixes abort when getting address to “nrt_unresolved_abort”
- PR #2615: Working towards conda build 3

Thanks to community feedback and bug reports, the following fixes were also made.

Misc fixes/enhancements:

- PR #2534: Add tuple support to np.take.
- PR #2551: Rebranding fix
- PR #2552: relative doc links
- PR #2570: Fix issue #2561, handle missing successor on loop exit
- PR #2588: Fix #2555. Disable libpython.so linking on linux
- PR #2601: Update llvmlite version dependency.
- PR #2608: Fix potential cache file collision
- PR #2612: Fix NRT test failure due to increased overhead when running in coverage
- PR #2619: Fix dubious pthread_cond_signal not in lock
- PR #2622: Fix np.nanmedian for all NaN case.
- PR #2633: Fix markdown in CONTRIBUTING.md
- PR #2635: Make the dependency on compilers for AOT optional.

CUDA support fixes:

- PR #2523: Fix invalid cuda context in memory transfer calls in another thread
- PR #2575: Use CPU to initialize xoroshiro states for GPU RNG. Fixes #2573
- PR #2581: Fix cuda gufunc mishandling of scalar arg as array and out argument

Version 0.35.0

This release includes some exciting new features as part of the work performed in partnership with Intel on ParallelAccelerator technology. There are also some additions made to Numpy support and small but significant fixes made as a result of considerable effort spent chasing bugs and implementing stability improvements.

ParallelAccelerator:

NOTE: The ParallelAccelerator technology is under active development and should be considered experimental.

New features relating to ParallelAccelerator, from work undertaken with Intel, include support for a larger range of np.random functions in parallel mode, printing Numpy arrays in no Python mode, the capacity to initialize Numpy arrays directly from list comprehensions, and the axis argument to .sum(). Documentation on the ParallelAccelerator technology implementation has also been added.
Further, a large amount of work on equivalence relations was undertaken to enable runtime checks of broadcasting behaviours in parallel mode.

ParallelAccelerator features:

- PR #2400: Array comprehension
- PR #2405: Support printing Numpy arrays
- PR #2438: Support more np.random functions in ParallelAccelerator
- PR #2482: Support for sum with axis in nopython mode.
- PR #2487: Adding developer documentation for ParallelAccelerator technology.
- PR #2492: Core PA refactor adds assertions for broadcast semantics

ParallelAccelerator fixes:

- PR #2478: Rename cfg before parfor translation (#2477)
- PR #2479: Fix broken array comprehension tests on unsupported platforms
- PR #2484: Fix array comprehension test on win64
- PR #2506: Fix for 32-bit machines.

Additional features of note:

Support for np.take, np.finfo, np.iinfo and np.MachAr in no Python mode is added. Further, three new environment variables are added, two for overriding CPU target/features and another to warn if parallel=True was set but no such transform was possible.

- PR #2490: Implement np.take and ndarray.take
- PR #2493: Display a warning if parallel=True is set but not possible.
- PR #2513: Add np.MachAr, np.finfo, np.iinfo
- PR #2515: Allow environ overriding of cpu target and cpu features.

Due to expansion of the test farm and a focus on fixing bugs, the following fixes were also made.

Misc fixes/enhancements:

- PR #2455: add contextual information to runtime errors
- PR #2470: Fixes #2458, poor performance in np.median
- PR #2471: Ensure LLVM threadsafety in {g,}ufunc building.
- PR #2494: Update doc theme
- PR #2503: Remove hacky code added in 2482 and feature enhancement
- PR #2505: Serialise env mutation tests during multithreaded testing.
- PR #2520: Fix failing cpu-target override tests

CUDA support fixes:

- PR #2504: Enable CUDA toolkit version testing
- PR #2509: Disable tests generating code unavailable in lower CC versions.
- PR #2511: Fix Windows 64 bit CUDA tests.
Version 0.34.0

This release adds a significant set of new features arising from combined work with Intel on ParallelAccelerator technology. It also adds list comprehension and closure support, support for Numpy 1.13 and a new, faster, CUDA reduction algorithm. For Linux users this release is the first to be built on Centos 6, which will be the new base platform for future releases. Finally, a number of thread-safety, type inference and other smaller bugs have been fixed and enhancements made.

ParallelAccelerator features:

NOTE: The ParallelAccelerator technology is under active development and should be considered experimental.

The ParallelAccelerator technology is accessed via a new “nopython” mode option “parallel”. The ParallelAccelerator technology attempts to identify operations which have parallel semantics (for instance adding a scalar to a vector), fuse together adjacent such operations, and then parallelize their execution across a number of CPU cores. This is essentially auto-parallelization. In addition to the auto-parallelization feature, explicit loop based parallelism is made available through the use of prange in place of range as a loop iterator. More information and examples on both auto-parallelization and prange are available in the documentation and examples directory respectively.

As part of the necessary work for ParallelAccelerator, support for closures and list comprehensions is added:

- PR #2318: Transfer ParallelAccelerator technology to Numba
- PR #2379: ParallelAccelerator Core Improvements
- PR #2367: Add support for len(range(…))
- PR #2369: List comprehension
- PR #2391: Explicit Parallel Loop Support (prange)

The ParallelAccelerator features are available on all supported platforms and Python versions with the exceptions of (with a view to supporting them in a future release):

- The combination of Windows operating systems with Python 2.7.
- Systems running 32 bit Python.
CUDA support enhancements:

- PR #2377: New GPU reduction algorithm

CUDA support fixes:

- PR #2397: Fix #2393, always set alignment of cuda static memory regions

Misc Fixes:

- PR #2373, Issue #2372: 32-bit compatibility fix for parfor related code
- PR #2376: Fix #2375 missing stdint.h for py2.7 vc9
- PR #2378: Fix deadlock in parallel gufunc when kernel acquires the GIL.
- PR #2382: Forbid unsafe casting in bitwise operation
- PR #2385: docs: fix Sphinx errors
- PR #2396: Use 64-bit RHS operand for shift
- PR #2404: Fix threadsafety logic issue in ufunc compilation cache.
- PR #2424: Ensure consistent iteration order of blocks for type inference.
- PR #2425: Guard code to prevent the use of ‘parallel’ on win32 + py27
- PR #2426: Basic test for Enum member type recovery.
- PR #2433: Fix up the parfors tests with respect to windows py2.7
- PR #2442: Skip tests that need BLAS/LAPACK if scipy is not available.
- PR #2444: Add test for invalid array setitem
- PR #2449: Make the runtime initialiser threadsafe
- PR #2452: Skip CFG test on 64bit windows

Misc Enhancements:

- PR #2366: Improvements to IR utils
- PR #2388: Update README.rst to indicate the proper version of LLVM
- PR #2394: Upgrade to llvmlite 0.19.*
- PR #2395: Update llvmlite version to 0.19
- PR #2406: Expose environment object to ufuncs
- PR #2407: Expose environment object to target-context inside lowerer
- PR #2413: Add flags to pass through to conda build for buildbot
- PR #2414: Add cross compile flags to local recipe
- PR #2415: A few cleanups for rewrites
- PR #2418: Add getitem support for Enum classes
- PR #2419: Add support for returning enums in vectorize
- PR #2421: Add copyright notice for Intel contributed files.
- PR #2422: Patch code base to work with np 1.13 release
- PR #2448: Adds in warning message when using ‘parallel’ if cache=True
- PR #2450: Add test for keyword arg on .sum-like and .cumsum-like array methods

Version 0.33.0

This release resolved several performance issues caused by atomic reference counting operations inside loop bodies. New optimization passes have been added to reduce the impact of these operations. We observe speed improvements between 2x-10x in affected programs due to the removal of unnecessary reference counting operations.

There are also several enhancements to the CUDA GPU support:

- A GPU random number generator based on the xoroshiro128+ algorithm is added. See details and examples in documentation.
- @cuda.jit CUDA kernels can now call @jit and @njit CPU functions and they will automatically be compiled as CUDA device functions.
- CUDA IPC memory API is exposed for sharing memory between processes. See usage details in documentation.

Reference counting enhancements:

- PR #2346, Issue #2345, #2248: Add extra refcount pruning after inlining
- PR #2349: Fix refct pruning not removing refct op with tail call.
- PR #2352, Issue #2350: Add refcount pruning pass for function that does not need refcount

CUDA support enhancements:

- PR #2023: Supports CUDA IPC for device array
- PR #2343, Issue #2335: Allow CPU jit decorated function to be used as cuda device function
- PR #2347: Add random number generator support for CUDA device code
- PR #2361: Update autotune table for CC: 5.3, 6.0, 6.1, 6.2

Misc fixes:

- PR #2362: Avoid test failure due to typing to int32 on 32-bit platforms
- PR #2359: Fixed nogil example that threw a TypeError when executed.
- PR #2357, Issue #2356: Fix fragile test that depends on how the script is executed.
- PR #2355: Fix cpu dispatcher referenced as attribute of another module
- PR #2354: Fixes an issue with caching when function needs NRT and refcount pruning
- PR #2342, Issue #2339: Add warnings to inspection when it is used on unserialized cached code
- PR #2329, Issue #2250: Better handling of missing op codes

Misc enhancements:

- PR #2360: Adds missing values in error message interp.
- PR #2353: Handle when get_host_cpu_features() raises RuntimeError
- PR #2351: Enable SVML for erf/erfc/gamma/lgamma/log2
- PR #2344: Expose error_model setting in jit decorator
- PR #2337: Align blocking terminate support for fork() with new TBB version
- PR #2336: Bump llvmlite version to 0.18
- PR #2330: Core changes in PR #2318

Version 0.32.0

In this release, we are upgrading to LLVM 4.0. A lot of work has been done to fix many race-condition issues inside LLVM when the compiler is used concurrently, which is likely when Numba is used with Dask.

Improvements:

- PR #2322: Suppress test error due to unknown but consistent error with tgamma
- PR #2320: Update llvmlite dependency to 0.17
- PR #2308: Add details to error message on why cuda support is disabled.
- PR #2302: Add os x to travis
- PR #2294: Disable remove_module on MCJIT due to memory leak inside LLVM
- PR #2291: Split parallel tests and recycle workers to tame memory usage
- PR #2253: Remove the pointer-stuffing hack for storing meminfos in lists

Fixes:

- PR #2331: Fix a bug in the GPU array indexing
- PR #2326: Fix #2321 docs referring to non-existing function.
- PR #2316: Fixing more race-condition problems
- PR #2315: Fix #2314. Relax strict type check to allow optional type.
- PR #2310: Fix race condition due to concurrent compilation and cache loading
- PR #2304: Fix intrinsic 1st arg not a typing.Context as stated by the docs.
- PR #2287: Fix int64 atomic min-max
- PR #2286: Fix #2285 @overload_method not linking dependent libs
- PR #2303: Missing import statements to interval-example.rst

Version 0.31.0

In this release, we added preliminary support for debugging with GDB version >= 7.0. The feature is enabled by setting the debug=True compiler option, which causes GDB compatible debug info to be generated. The CUDA backend also gained limited debugging support so that source locations are shown in memory-checking and profiling tools. For details, see Troubleshooting and tips.

Also, we added the fastmath=True compiler option to enable unsafe floating-point transformations, which allows LLVM to auto-vectorize more code.

Other important changes include upgrading to LLVM 3.9.1 and adding support for Numpy 1.12.

Improvements:

- PR #2281: Update for numpy 1.12
- PR #2278: Add CUDA atomic.{max, min, compare_and_swap}
- PR #2277: Add about section to conda recipes to identify license and other metadata in Anaconda Cloud
- PR #2271: Adopt itanium C++-style mangling for CPU and CUDA targets
- PR #2267: Add fastmath flags
- PR #2261: Support dtype.type
- PR #2249: Changes for llvm3.9
- PR #2234: Bump llvmlite requirement to 0.16 and add install_name_tool_fixer to mviewbuf for OS X
- PR #2230: Add python3.6 to TravisCI
- PR #2227: Enable caching for gufunc wrapper
- PR #2170: Add debugging support
- PR #2037: inspect_cfg() for easier visualization of the function operation

Fixes:

- PR #2274: Fix nvvm ir patch in mishandling “load”
- PR #2272: Fix breakage to cuda7.5
- PR #2269: Fix caching of copy_strides kernel in cuda.reduce
- PR #2265: Fix #2263: error when linking two modules with dynamic globals
- PR #2252: Fix path separator in test
- PR #2246: Fix overuse of memory in some system with fork
- PR #2241: Fix #2240: __module__ in dynamically created function not a str
- PR #2239: Fix fingerprint computation failure preventing fallback

Version 0.30.1

This is a bug-fix release to enable Python 3.6 support. In addition, there is now early Intel TBB support for parallel ufuncs when building from source with TBBROOT defined. The TBB feature is not enabled in our official builds.

Fixes:

- PR #2232: Fix name clashes with _Py_hashtable_xxx in Python 3.6.

Improvements:

- PR #2217: Add Intel TBB threadpool implementation for parallel ufunc.

Version 0.30.0

This release adds preliminary support for Python 3.6, but no official build is available yet. A new system reporting tool (numba --sysinfo) is added to provide system information to help core developers in replication and debugging. See below for other improvements and bug fixes.

Improvements:

- PR #2209: Support Python 3.6.
- PR #2175: Support np.trace(), np.outer() and np.kron().
- PR #2197: Support np.nanprod().
- PR #2190: Support caching for ufunc.
- PR #2186: Add system reporting tool.

Fixes:

- PR #2214, Issue #2212: Fix memory error with ndenumerate and flat iterators.
- PR #2206, Issue #2163: Fix zip() consuming extra elements in early exhaustion.
- PR #2185, Issue #2159, #2169: Fix rewrite pass affecting objmode fallback.
- PR #2204, Issue #2178: Fix annotation for liftedloop.
- PR #2203: Fix Appveyor segfault with Python 3.5.
- PR #2202, Issue #2198: Fix target context not initialized when loading from ufunc cache.
- PR #2172, Issue #2171: Fix optional type unpacking.
- PR #2189, Issue #2188: Disable freezing of big (>1MB) global arrays.
- PR #2180, Issue #2179: Fix invalid variable version in looplifting.
- PR #2156, Issue #2155: Fix divmod, floordiv segfault on CUDA.

Version 0.29.0

This release extends the support of recursive functions to include direct and indirect recursion without explicit function type annotations. See new example in examples/mergesort.py. Newly supported numpy features include array stacking functions, np.linalg.eig* functions, np.linalg.matrix_power, np.roots and array to array broadcasting in assignments.
This release depends on llvmlite 0.14.0 and supports CUDA 8, though CUDA is not required.

Improvements:

- PR #2130, #2137: Add type-inferred recursion with docs and examples.
- PR #2134: Add np.linalg.matrix_power.
- PR #2125: Add np.roots.
- PR #2129: Add np.linalg.{eigvals,eigh,eigvalsh}.
- PR #2126: Add array-to-array broadcasting.
- PR #2069: Add hstack and related functions.
- PR #2128: Allow for vectorizing a jitted function. (thanks to @dhirschfeld)
- PR #2117: Update examples and make them test-able.
- PR #2127: Refactor interpreter class and its results.

Fixes:

- PR #2149: Workaround MSVC9.0 SP1 fmod bug kb982107.
- PR #2145, Issue #2009: Fixes kwargs for jitclass __init__ method.
- PR #2150: Fix slowdown in objmode fallback.
- PR #2050, Issue #1259: Fix liveness problem with some generator loops.
- PR #2072, Issue #1995: Right shift of unsigned LHS should be logical.
- PR #2115, Issue #1466: Fix inspect_types() error due to mangled variable name.
- PR #2119, Issue #2118: Fix array type created from record-dtype.
- PR #2122, Issue #1808: Fix returning a generator due to datamodel error.

Version 0.28.0

Amongst other improvements, this version again improves the level of support for linear algebra – functions from the numpy.linalg module. Also, our random generator is now guaranteed to be thread-safe and fork-safe.

Improvements:

- PR #2019: Add the @intrinsic decorator to define low-level subroutines callable from JIT functions (this is considered a private API for now).
- PR #2059: Implement np.concatenate and np.stack.
- PR #2048: Make random generation fork-safe and thread-safe, producing independent streams of random numbers for each thread or process.
- PR #2031: Add documentation of floating-point pitfalls.
- Issue #2053: Avoid polling in parallel CPU target (fixes severe performance regression on Windows).
- Issue #2029: Make default arguments fast.
- PR #2052: Add logging to the CUDA driver.
- PR #2049: Implement the built-in divmod() function.
- PR #2036: Implement the argsort() method on arrays.
- PR #2046: Improving CUDA memory management by deferring deallocations until certain thresholds are reached, so as to avoid breaking asynchronous execution.
- PR #2040: Switch the CUDA driver implementation to use CUDA’s “primary context” API.
- PR #2017: Allow min(tuple) and max(tuple).
- PR #2039: Reduce fork() detection overhead in CUDA.
- PR #2021: Handle structured dtypes with titles.
- PR #1996: Rewrite looplifting as a transformation on Numba IR.
- PR #2014: Implement np.linalg.matrix_rank.
- PR #2012: Implement np.linalg.cond.
- PR #1985: Rewrite even trivial array expressions, which opens the door for other optimizations (for example, array ** 2 can be converted into array * array).
- PR #1950: Have typeof() always raise ValueError on failure. Previously, it would either raise or return None, depending on the input.
- PR #1994: Implement np.linalg.norm.
- PR #1987: Implement np.linalg.det and np.linalg.slogdet.
- Issue #1979: Document integer width inference and how to work around it.
- PR #1938: Numba is now compatible with LLVM 3.8.
- PR #1967: Restrict np.linalg functions to homogeneous dtypes. Users wanting to pass mixed-typed inputs have to convert explicitly, which makes the performance implications more obvious.

Fixes:

- PR #2006: array(float32) ** int should return array(float32).
- PR #2044: Allow reshaping empty arrays.
- Issue #2051: Fix refcounting issue when concatenating tuples.
- Issue #2000: Make Numpy optional for setup.py, to allow pip install to work without Numpy pre-installed.
- PR #1989: Fix assertion in Dispatcher.disable_compile().
- Issue #2028: Ignore filesystem errors when caching from multiple processes.
- Issue #2003: Allow unicode variable and function names (on Python 3).
- Issue #1998: Fix deadlock in parallel ufuncs that reacquire the GIL.
- PR #1997: Fix random crashes when AOT compiling on certain Windows platforms.
- Issue #1988: Propagate jitclass docstring.
- Issue #1933: Ensure array constants are emitted with the right alignment.

Version 0.27.0

Improvements:

- Issue #1976: improve error message when non-integral dimensions are given to a CUDA kernel.
- PR #1970: Optimize the power operator with a static exponent.
- PR #1710: Improve contextual information for compiler errors.
- PR #1961: Support printing constant strings.
- PR #1959: Support more types in the print() function.
- PR #1823: Support compute_50 in CUDA backend.
- PR #1955: Support np.linalg.pinv.
- PR #1896: Improve the SmartArray API.
- PR #1947: Support np.linalg.solve.
- Issue #1943: Improve error message when an argument fails typing.
- PR #1927: Support np.linalg.lstsq.
- PR #1934: Use system functions for hypot() where possible, instead of our own implementation.
- PR #1929: Add cffi support to @cfunc objects.
- PR #1932: Add user-controllable thread pool limits for parallel CPU target.
- PR #1928: Support self-recursion when the signature is explicit.
- PR #1890: List all lowering implementations in the developer docs.
- Issue #1884: Support np.lib.stride_tricks.as_strided().

Fixes:

- Issue #1960: Fix sliced assignment when source and destination areas are overlapping.
- PR #1963: Make CUDA print() atomic.
- PR #1956: Allow 0d array constants.
- Issue #1945: Allow using Numpy ufuncs in AOT compiled code.
- Issue #1916: Fix documentation example for @generated_jit.
- Issue #1926: Fix regression when caching functions in an IPython session.
- Issue #1923: Allow non-intp integer arguments to carray() and farray().
- Issue #1908: Accept non-ASCII unicode docstrings on Python 2.
- Issue #1874: Allow del container[key] in object mode.
- Issue #1913: Fix set insertion bug when the lookup chain contains deleted entries.
- Issue #1911: Allow function annotations on jitclass methods.

Version 0.26.0

This release adds support for the @cfunc decorator for exporting Numba-jitted functions to third-party APIs that take C callbacks.
Most of the overhead of using jitclasses inside the interpreter is eliminated. Support for decompositions in numpy.linalg is added. Finally, Numpy 1.11 is supported.

Improvements:

- PR #1889: Export BLAS and LAPACK wrappers for pycc.
- PR #1888: Faster array power.
- Issue #1867: Allow “out” keyword arg for dufuncs.
- PR #1871: carray() and farray() for creating arrays from pointers.
- PR #1855: @cfunc decorator for exporting as ctypes function.
- PR #1862: Add support for numpy.linalg.qr.
- PR #1851: jitclass support for ‘_’ and ‘__’ prefixed attributes.
- PR #1842: Optimize jitclass in Python interpreter.
- Issue #1837: Fix CUDA simulator issues with device function.
- PR #1839: Add support for decompositions from numpy.linalg.
- PR #1829: Support Python enums.
- PR #1828: Add support for numpy.random.rand() and numpy.random.randn()
- Issue #1825: Use of 0-d array in place of scalar index.
- Issue #1824: Scalar arguments to object mode gufuncs.
- Issue #1813: Let bitwise bool operators return booleans, not integers.
- Issue #1760: Optional arguments in generators.
- PR #1780: Numpy 1.11 support.

Version 0.25.0

This release adds support for set objects in nopython mode. It also adds support for many missing Numpy features and functions. It improves Numba’s compatibility and performance when using a distributed execution framework such as dask, distributed or Spark. Finally, it removes compatibility with Python 2.6, Python 3.3 and Numpy 1.6.

Improvements:

- Issue #1800: Add erf(), erfc(), gamma() and lgamma() to CUDA targets.
- PR #1793: Implement more Numpy functions: np.bincount(), np.diff(), np.digitize(), np.histogram(), np.searchsorted() as well as NaN-aware reduction functions (np.nansum(), np.nanmedian(), etc.)
- PR #1789: Optimize some reduction functions such as np.sum(), np.prod(), np.median(), etc.
- PR #1752: Make CUDA features work in dask, distributed and Spark.
- PR #1787: Support np.nditer() for fast multi-array indexing with broadcasting.
- PR #1799: Report JIT-compiled functions as regular Python functions when profiling (allowing to see the filename and line number where a function is defined).
- PR #1782: Support np.any() and np.all().
- Issue #1788: Support the iter() and next() built-in functions.
- PR #1778: Support array.astype().
- Issue #1775: Allow the user to set the target CPU model for AOT compilation.
- PR #1758: Support creating random arrays using the size parameter to the np.random APIs.
- PR #1757: Support len() on array.flat objects.
- PR #1749: Remove Numpy 1.6 compatibility.
- PR #1748: Remove Python 2.6 and 3.3 compatibility.
- PR #1735: Support the not in operator as well as operator.contains().
- PR #1724: Support homogeneous sets in nopython mode.
- Issue #875: make compilation of array constants faster.

Fixes:

- PR #1795: Fix a massive performance issue when calling Numba functions with distributed, Spark or a similar mechanism using serialization.
- Issue #1784: Make jitclasses usable with NUMBA_DISABLE_JIT=1.
- Issue #1786: Allow using linear algebra functions when profiling.
- Issue #1796: Fix np.dot() memory leak on non-contiguous inputs.
- PR #1792: Fix static negative indexing of tuples.
- Issue #1771: Use fallback cache directory when __pycache__ isn’t writable, such as when user code is installed in a system location.
- Issue #1223: Use Numpy error model in array expressions (e.g. division by zero returns inf or nan instead of raising an error).
- Issue #1640: Fix np.random.binomial() for large n values.
- Issue #1643: Improve error reporting when passing an invalid spec to @jitclass.
- PR #1756: Fix slicing with a negative step and an omitted start.

Version 0.24.0

This release introduces several major changes, including the @generated_jit decorator for flexible specializations as with Julia’s “@generated” macro, and the SmartArray array wrapper type that allows seamless transfer of array data between the CPU and the GPU.
This will be the last version to support Python 2.6, Python 3.3 and Numpy 1.6.

Improvements:

- PR #1723: Improve compatibility of JIT functions with the Python profiler.
- PR #1509: Support array.ravel() and array.flatten().
- PR #1676: Add SmartArray type to support transparent data management in multiple address spaces (host & GPU).
- PR #1689: Reduce startup overhead of importing Numba.
- PR #1705: Support registration of CFFI types as corresponding to known Numba types.
- PR #1686: Document the extension API.
- PR #1698: Improve warnings raised during type inference.
- PR #1697: Support np.dot() and friends on non-contiguous arrays.
- PR #1692: cffi.from_buffer() improvements (allow more pointer types, allow non-Numpy buffer objects).
- PR #1648: Add the @generated_jit decorator.
- PR #1651: Implementation of np.linalg.inv using LAPACK. Thanks to Matthieu Dartiailh.
- PR #1674: Support np.diag().
- PR #1673: Improve error message when looking up an attribute on an unknown global.
- Issue #1569: Implement runtime check for the LLVM locale bug.
- PR #1612: Switch to LLVM 3.7 in sync with llvmlite.
- PR #1624: Allow slice assignment of sequence to array.
- PR #1622: Support slicing tuples with a constant slice.

Fixes:

- Issue #1722: Fix returning an optional boolean (bool or None).
- Issue #1734: NRT decref bug when variable is del’ed before being defined, leading to a possible memory leak.
- PR #1732: Fix tuple getitem regression for CUDA target.
- PR #1718: Mishandling of optional to optional casting.
- PR #1714: Fix .compile() on a JIT function not respecting ._can_compile.
- Issue #1667: Fix np.angle() on arrays.
- Issue #1690: Fix slicing with an omitted stop and a negative step value.
- PR #1693: Fix gufunc bug in handling scalar formal arg with non-scalar input value.
- PR #1683: Fix parallel testing under Windows.
- Issue #1616: Use system-provided versions of C99 math where possible.
- Issue #1652: Reductions of bool arrays (e.g. sum() or mean()) should return integers or floats, not bools.
- Issue #1664: Fix regression when indexing a record array with a constant index.
- PR #1661: Disable AVX on old Linux kernels.
- Issue #1636: Allow raising an exception looked up on a module.

Version 0.23.1

This is a bug-fix release to address several regressions introduced in the 0.23.0 release, and a couple of other issues.

Fixes:

- Issue #1645: CUDA ufuncs were broken in 0.23.0.
- Issue #1638: Check tuple sizes when passing a list of tuples.
- Issue #1630: Parallel ufunc would keep eating CPU even after finishing under Windows.
- Issue #1628: Fix ctypes and cffi tests under Windows with Python 3.5.
- Issue #1627: Fix xrange() support.
- PR #1611: Rewrite variable liveness analysis.
- Issue #1610: Allow nested calls between explicitly-typed ufuncs.
- Issue #1593: Fix *args in object mode.

Version 0.23.0

This release introduces JIT classes using the new @jitclass decorator, allowing user-defined structures for nopython mode. Other improvements and bug fixes are listed below.

Improvements:

- PR #1609: Speed up some simple math functions by inlining them in their caller
- PR #1571: Implement JIT classes
- PR #1584: Improve typing of array indexing
- PR #1583: Allow printing booleans
- PR #1542: Allow negative values in np.reshape()
- PR #1560: Support vector and matrix dot product, including np.dot() and the @ operator in Python 3.5
- PR #1546: Support field lookup on record arrays and scalars (i.e. array['field'] in addition to array.field)
- PR #1440: Support the HSA wavebarrier() and activelanepermute_wavewidth() intrinsics
- PR #1540: Support np.angle()
- PR #1543: Implement CPU multithreaded gufuncs (target=”parallel”)
- PR #1551: Allow scalar arguments in np.where(), np.empty_like().
- PR #1516: Add some more examples from NumbaPro
- PR #1517: Support np.sinc()

Fixes:

- Issue #1603: Fix calling a non-cached function from a cached function
- Issue #1594: Ensure a list is homogeneous when unboxing
- Issue #1595: Replace deprecated use of get_pointer_to_function()
- Issue #1586: Allow tests to be run by different users on the same machine
- Issue #1587: Make CudaAPIError picklable
- Issue #1568: Fix using Numba from inside Visual Studio 2015
- Issue #1559: Fix serializing a jit function referring to a renamed module
- PR #1508: Let reshape() accept integer argument(s), not just a tuple
- Issue #1545: Improve error checking when unboxing list objects
- Issue #1538: Fix array broadcasting in CUDA gufuncs
- Issue #1526: Fix a reference count handling bug

Version 0.22.1

This is a bug-fix release to resolve some packaging issues and other problems found in the 0.22.0 release.

Fixes:

- PR #1515: Include MANIFEST.in in MANIFEST.in so that sdist still works from source tar files.
- PR #1518: Fix reference counting bug caused by hidden alias
- PR #1519: Fix erroneous assert when passing nopython=True to guvectorize.
- PR #1521: Fix cuda.test()

Version 0.22.0

This release features several highlights: Python 3.5 support, Numpy 1.10 support, Ahead-of-Time compilation of extension modules, additional vectorization features that were previously only available with the proprietary extension NumbaPro, and improvements in array indexing.
Improvements:
- PR #1497: Allow scalar input type instead of size-1 array to @guvectorize
- PR #1480: Add distutils support for AOT compilation
- PR #1460: Create a new API for Ahead-of-Time (AOT) compilation
- PR #1451: Allow passing Python lists to JIT-compiled functions, and reflect mutations on function return
- PR #1387: Numpy 1.10 support
- PR #1464: Support cffi.FFI.from_buffer()
- PR #1437: Propagate errors raised from Numba-compiled ufuncs; also, let "division by zero" and other math errors produce a warning instead of exiting the function early
- PR #1445: Support a subset of fancy indexing
- PR #1454: Support "out-of-line" CFFI modules
- PR #1442: Improve array indexing to support more kinds of basic slicing
- PR #1409: Support explicit CUDA memory fences
- PR #1435: Add support for vectorize() and guvectorize() with HSA
- PR #1432: Implement numpy.nonzero() and numpy.where()
- PR #1416: Add support for vectorize() and guvectorize() with CUDA, as originally provided in NumbaPro
- PR #1424: Support in-place array operators
- PR #1414: Python 3.5 support
- PR #1404: Add the parallel ufunc functionality originally provided in NumbaPro
- PR #1393: Implement sorting on arrays and lists
- PR #1415: Add functions to estimate the occupancy of a CUDA kernel
- PR #1360: The JIT cache now stores the compiled object code, yielding even larger speedups.
- PR #1402: Fixes for the ARMv7 (armv7l) architecture under Linux
- PR #1400: Add the cuda.reduce() decorator originally provided in NumbaPro

Fixes:
- PR #1483: Allow np.empty_like() and friends on non-contiguous arrays
- Issue #1471: Allow caching JIT functions defined in IPython
- PR #1457: Fix flat indexing of boolean arrays
- PR #1421: Allow calling Numpy ufuncs, without an explicit output, on non-contiguous arrays
- Issue #1411: Fix crash when unpacking a tuple containing a Numba-allocated array
- Issue #1394: Allow unifying range_state32 and range_state64
- Issue #1373: Fix code generation error on lists of bools

Version 0.21.0

This release introduces support for AMD's Heterogeneous System Architecture, which allows memory to be shared directly between the CPU and the GPU. Other major enhancements are support for lists and the introduction of an opt-in compilation cache.

Improvements:
- PR #1391: Implement print() for CUDA code
- PR #1366: Implement integer typing enhancement proposal (NBEP 1)
- PR #1380: Support the one-argument type() builtin
- PR #1375: Allow boolean evaluation of lists and tuples
- PR #1371: Support array.view() in CUDA mode
- PR #1369: Support named tuples in nopython mode
- PR #1250: Implement numpy.median().
- PR #1289: Make dispatching faster when calling a JIT-compiled function from regular Python
- Issue #1226: Improve performance of integer power
- PR #1321: Document features supported with CUDA
- PR #1345: HSA support
- PR #1343: Support lists in nopython mode
- PR #1356: Make Numba-allocated memory visible to tracemalloc
- PR #1363: Add an environment variable NUMBA_DEBUG_TYPEINFER
- PR #1051: Add an opt-in, per-function compilation cache

Fixes:
- Issue #1372: Some array expressions would fail rewriting when they involved the same variable more than once, or a unary operator
- Issue #1385: Allow CUDA local arrays to be declared anywhere in a function
- Issue #1285: Support datetime64 and timedelta64 in Numpy reduction functions
- Issue #1332: Handle the EXTENDED_ARG opcode.
- PR #1329: Handle the in operator in object mode
- Issue #1322: Fix augmented slice assignment on Python 2
- PR #1357: Fix slicing with some negative bounds or step values.

Version 0.20.0

This release updates Numba to use LLVM 3.6 and CUDA 7 for CUDA support. Following the platform deprecation in CUDA 7, Numba's CUDA feature is no longer supported on 32-bit platforms. The oldest supported version of Windows is Windows 7.
Improvements:
- Issue #1203: Support indexing ndarray.flat
- PR #1200: Migrate cgutils to llvmlite
- PR #1190: Support more array methods: .transpose(), .T, .copy(), .reshape(), .view()
- PR #1214: Simplify setup.py and avoid manual maintenance
- PR #1217: Support datetime64 and timedelta64 constants
- PR #1236: Reload environment variables when compiling
- PR #1225: Various speed improvements in generated code
- PR #1252: Support cmath module in CUDA
- PR #1238: Use 32-byte aligned allocator to optimize for AVX
- PR #1258: Support numpy.frombuffer()
- PR #1274: Use TravisCI container infrastructure for lower wait time
- PR #1279: Micro-optimize overload resolution in call dispatch
- Issue #1248: Improve error message when return type unification fails

Fixes:
- Issue #1131: Handling of negative zeros in np.conjugate() and np.arccos()
- Issue #1188: Fix slow array return
- Issue #1164: Avoid warnings from CUDA context at shutdown
- Issue #1229: Respect the writeable flag in arrays
- Issue #1244: Fix bug in refcount pruning pass
- Issue #1251: Fix partial left-indexing of Fortran contiguous array
- Issue #1264: Fix compilation error in array expression
- Issue #1254: Fix error when yielding array objects
- Issue #1276: Fix nested generator use

Version 0.19.2

This release fixes the source distribution on pypi. The only change is in the setup.py file. We do not plan to provide a conda package as this release is essentially the same as 0.19.1 for conda users.

Version 0.19.1

- Issue #1196:
  - fix double-free segfault due to redundant variable deletion in the Numba IR (#1195)
  - fix use-after-delete in array expression rewrite pass

Version 0.19.0

This version introduces memory management in the Numba runtime, allowing new arrays to be allocated inside Numba-compiled functions. There is also a rework of the ufunc infrastructure, and an optimization pass to collapse cascading array operations into a single efficient loop.
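The "collapse cascading array operations into a single efficient loop" pass mentioned above can be illustrated with a plain-Python sketch (this is a conceptual illustration, not Numba's actual rewrite pass): instead of materializing an intermediate array for each operation, the fused form makes one pass over the data.

```python
def unfused(a, b, c):
    # Naive evaluation of a*b + c: two passes and one temporary list.
    tmp = [x * y for x, y in zip(a, b)]     # intermediate array for a*b
    return [t + z for t, z in zip(tmp, c)]  # second pass adds c

def fused(a, b, c):
    # What the array-expression rewrite achieves: a single loop,
    # no intermediate storage.
    return [x * y + z for x, y, z in zip(a, b, c)]

print(unfused([1, 2], [3, 4], [5, 6]))  # [8, 14]
print(fused([1, 2], [3, 4], [5, 6]))    # [8, 14]
```

Both forms compute the same result; the fused form simply avoids allocating and traversing the temporary.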
Warning: Support for Windows XP and Vista with all compiler targets, and support for 32-bit platforms (Win/Mac/Linux) with the CUDA compiler target, are deprecated. In the next release of Numba, the oldest version of Windows supported will be Windows 7. CPU compilation will remain supported on 32-bit Linux and Windows platforms.

Known issues:
- There are some performance regressions in very short running nopython functions due to the additional overhead incurred by memory management. We will work to reduce this overhead in future releases.

Features:
- Issue #1181: Add a Frequently Asked Questions section to the documentation.
- Issue #1162: Support the cumsum() and cumprod() methods on Numpy arrays.
- Issue #1152: Support the *args argument-passing style.
- Issue #1147: Allow passing character sequences as arguments to JIT-compiled functions.
- Issue #1110: Shortcut deforestation and loop fusion for array expressions.
- Issue #1136: Support various Numpy array constructors, for example numpy.zeros() and numpy.zeros_like().
- Issue #1127: Add a CUDA simulator running on the CPU, enabled with the NUMBA_ENABLE_CUDASIM environment variable.
- Issue #1086: Allow calling standard Numpy ufuncs without an explicit output array from nopython functions.
- Issue #1113: Support keyword arguments when calling numpy.empty() and related functions.
- Issue #1108: Support the ctypes.data attribute of Numpy arrays.
- Issue #1077: Memory management for array allocations in nopython mode.
- Issue #1105: Support calling a ctypes function that takes ctypes.py_object parameters.
- Issue #1084: Environment variable NUMBA_DISABLE_JIT disables compilation of @jit functions, instead calling into the Python interpreter when called. This allows easier debugging of multiple jitted functions.
- Issue #927: Allow gufuncs with no output array.
- Issue #1097: Support comparisons between tuples.
- Issue #1075: Numba-generated ufuncs can now be called from nopython functions.
- Issue #1062: @vectorize now allows omitting the signatures, and will compile the required specializations on the fly (like @jit does).
- Issue #1027: Support numpy.round().
- Issue #1085: Allow returning a character sequence (as fetched from a structured array) from a JIT-compiled function.

Fixes:
- Issue #1170: Ensure ndindex(), ndenumerate() and ndarray.flat work properly inside generators.
- Issue #1151: Disallow unpacking of tuples with the wrong size.
- Issue #1141: Specify install dependencies in setup.py.
- Issue #1106: Loop-lifting would fail when the lifted loop does not produce any output values for the function tail.
- Issue #1103: Fix mishandling of some inputs when a JIT-compiled function is called with multiple array layouts.
- Issue #1089: Fix range() with large unsigned integers.
- Issue #1088: Install entry-point scripts (numba, pycc) from the conda build recipe.
- Issue #1081: Constant structured scalars now work properly.
- Issue #1080: Fix automatic promotion of booleans to integers.

Version 0.18.2

Bug fixes:
- Issue #1073: Fixes missing template file for HTML annotation
- Issue #1074: Fixes CUDA support on Windows machine due to NVVM API mismatch

Version 0.18.1

Version 0.18.0 is not officially released. This version removes the old deprecated and undocumented argtypes and restype arguments to the @jit decorator. Function signatures should always be passed as the first argument to @jit.

Features:
- Issue #960: Add inspect_llvm() and inspect_asm() methods to JIT-compiled functions: they output the LLVM IR and the native assembler source of the compiled function, respectively.
- Issue #990: Allow passing tuples as arguments to JIT-compiled functions in nopython mode.
- Issue #774: Support two-argument round() in nopython mode.
- Issue #987: Support missing functions from the math module in nopython mode: frexp(), ldexp(), gamma(), lgamma(), erf(), erfc().
- Issue #995: Improve code generation for round() on Python 3.
- Issue #981: Support functions from the random and numpy.random modules in nopython mode.
- Issue #979: Add cuda.atomic.max().
- Issue #1006: Improve exception raising and reporting. It is now allowed to raise an exception with an error message in nopython mode.
- Issue #821: Allow ctypes- and cffi-defined functions as arguments to nopython functions.
- Issue #901: Allow multiple explicit signatures with @jit. The signatures must be passed in a list, as with @vectorize.
- Issue #884: Better error message when a JIT-compiled function is called with the wrong types.
- Issue #1010: Simpler and faster CUDA argument marshalling thanks to a refactoring of the data model.
- Issue #1018: Support arrays of scalars inside Numpy structured types.
- Issue #808: Reduce Numba import time by half.
- Issue #1021: Support the buffer protocol in nopython mode. Buffer-providing objects, such as bytearray, array.array or memoryview, support array-like operations such as indexing and iterating. Furthermore, some standard attributes on the memoryview object are supported.
- Issue #1030: Support nested arrays in Numpy structured arrays.
- Issue #1033: Implement the inspect_types(), inspect_llvm() and inspect_asm() methods for CUDA kernels.
- Issue #1029: Support Numpy structured arrays with CUDA as well.
- Issue #1034: Support for generators in nopython and object mode.
- Issue #1044: Support default argument values when calling Numba-compiled functions.
- Issue #1048: Allow calling Numpy scalar constructors from CUDA functions.
- Issue #1047: Allow indexing a multi-dimensional array with a single integer, to take a view.
- Issue #1050: Support len() on tuples.
- Issue #1011: Revive HTML annotation.

Fixes:
- Issue #977: Assignment optimization was too aggressive.
- Issue #561: One-argument round() now returns an int on Python 3.
- Issue #1001: Fix an unlikely bug where two closures with the same name and id() would compile to the same LLVM function name, despite different closure values.
- Issue #1006: Fix reference leak when a JIT-compiled function is disposed of.
- Issue #1017: Update instructions for CUDA in the README.
- Issue #1008: Generate shorter LLVM type names to avoid segfaults with CUDA.
- Issue #1005: Properly clean up references when raising an exception from object mode.
- Issue #1041: Fix incompatibility between Numba and the third-party library "future".
- Issue #1053: Fix the size attribute of CUDA shared arrays.

Version 0.17.0

The major focus in this release has been a rewrite of the documentation. The new documentation is better structured and has more detailed coverage of Numba features and APIs. It can be found online at

Features:
- Issue #895: LLVM can now inline nested function calls in nopython mode.
- Issue #863: CUDA kernels can now infer the types of their arguments ("autojit"-like).
- Issue #833: Support numpy.{min,max,argmin,argmax,sum,mean,var,std} in nopython mode.
- Issue #905: Add a nogil argument to the @jit decorator, to release the GIL in nopython mode.
- Issue #829: Add an identity argument to @vectorize and @guvectorize, to set the identity value of the ufunc.
- Issue #843: Allow indexing 0-d arrays with the empty tuple.
- Issue #933: Allow named arguments, not only positional arguments, when calling a Numba-compiled function.
- Issue #902: Support numpy.ndenumerate() in nopython mode.
- Issue #950: AVX is now enabled by default except on Sandy Bridge and Ivy Bridge CPUs, where it can produce slower code than SSE.
- Issue #956: Support constant arrays of structured type.
- Issue #959: Indexing arrays with floating-point numbers isn't allowed anymore.
- Issue #955: Add support for 3D CUDA grids and thread blocks.
- Issue #902: Support numpy.ndindex() in nopython mode.
- Issue #951: Numpy number types (numpy.int8, etc.) can be used as constructors for type conversion in nopython mode.

Fixes:
- Issue #889: Fix NUMBA_DUMP_ASSEMBLY for the CUDA backend.
- Issue #903: Fix calling of stdcall functions with ctypes under Windows.
- Issue #908: Allow lazy-compiling from several threads at once.
- Issue #868: Wrong error message when multiplying a scalar by a non-scalar.
- Issue #917: Allow vectorizing with datetime64 and timedelta64 in the signature (only with unit-less values, though, because of a Numpy limitation).
- Issue #431: Allow overloading of CUDA device functions.
- Issue #917: Print out errors that occurred in object mode ufuncs.
- Issue #923: Numba-compiled ufuncs now inherit the name and doc of the original Python function.
- Issue #928: Fix boolean return value in nested calls.
- Issue #915: @jit called with an explicit signature with a mismatching type of arguments now raises an error.
- Issue #784: Fix the truth value of NaNs.
- Issue #953: Fix using shared memory in more than one function (kernel or device).
- Issue #970: Fix an uncommon double to uint64 conversion bug on CentOS5 32-bit (C compiler issue).

Version 0.16.0

This release contains a major refactor to switch from llvmpy to llvmlite as our code generation backend. The switch is necessary to reconcile different compiler requirements for LLVM 3.5 (needs C++11) and Python extensions (need specific compiler versions on Windows). As a bonus, we have found that the use of llvmlite speeds up compilation by a factor of 2!

Other major changes:
- Faster dispatch for numpy structured arrays
- Optimized array.flat()
- Improved CPU feature selection
- Fix constant tuple regression in macro expansion code

Known issues:
- AVX code generation is still disabled by default due to performance regressions when operating on misaligned NumPy arrays. We hope to have a workaround in the future.
- In extremely rare circumstances, a known issue with LLVM 3.5 code generation can cause an ELF relocation error on 64-bit Linux systems.

Version 0.15.1

(This was a bug-fix release that superseded version 0.15 before it was announced.)
Fixes:
- Workaround for missing __ftol2 on Windows XP.
- Do not lift loops for compilation that contain break statements.
- Fix a bug in loop-lifting when multiple values need to be returned to the enclosing scope.
- Handle the loop-lifting case where an accumulator needs to be updated when the loop count is zero.

Version 0.15

Features:
- Support for the Python cmath module. (NumPy complex functions were already supported.)
- Support for .real, .imag, and .conjugate() on non-complex numbers.
- Add support for math.isfinite() and math.copysign().
- Compatibility mode: If enabled (off by default), a failure to compile in object mode will fall back to using the pure Python implementation of the function.
- Experimental support for serializing JIT functions with cloudpickle.
- Loop-jitting in object mode now works with loops that modify scalars that are accessed after the loop, such as accumulators.
- @vectorize functions can be compiled in object mode.
- Numba can now be built using the Visual C++ Compiler for Python 2.7 on Windows platforms.
- CUDA JIT functions can be returned by factory functions with variables in the closure frozen as constants.
- Support for "optional" types in nopython mode, which allow None to be a valid value.

Fixes:
- If nopython mode compilation fails for any reason, automatically fall back to object mode (unless nopython=True is passed to @jit) rather than raise an exception.
- Allow function objects to be returned from a function compiled in object mode.
- Fix a linking problem that caused slower platform math functions (such as exp()) to be used on Windows, leading to performance regressions against NumPy.
- min() and max() no longer accept scalar arguments in nopython mode.
- Fix handling of ambiguous type promotion among several compiled versions of a JIT function. The dispatcher will now compile a new version to resolve the problem. (issue #776)
- Fix float32 to uint64 casting bug on 32-bit Linux.
- Fix type inference to allow forced casting of return types.
- Allow the shape of a 1D cuda.shared.array and cuda.local.array to be a one-element tuple.
- More correct handling of signed zeros.
- Add custom implementation of atan2() on Windows to handle special cases properly.
- Eliminated race condition in the handling of the pagelocked staging area used when transferring CUDA arrays.
- Fix non-deterministic type unification leading to varying performance. (issue #797)

Version 0.14

Features:
- Support for nearly all the Numpy math functions (including comparison, logical, bitwise and some previously missing float functions) in nopython mode.
- The Numpy datetime64 and timedelta64 dtypes are supported in nopython mode with Numpy 1.7 and later.
- Support for Numpy math functions on complex numbers in nopython mode.
- ndarray.sum() is supported in nopython mode.
- Better error messages when unsupported types are used in Numpy math functions.
- Set NUMBA_WARNINGS=1 in the environment to see which functions are compiled in object mode vs. nopython mode.
- Add support for the two-argument pow() builtin function in nopython mode.
- New developer documentation describing how Numba works, and how to add new types.
- Support for Numpy record arrays on the GPU. (Note: Improper alignment of dtype fields will cause an exception to be raised.)
- Slices on GPU device arrays.
- GPU objects can be used as Python context managers to select the active device in a block.
- GPU device arrays can be bound to a CUDA stream. All subsequent operations (such as memory copies) will be queued on that stream instead of the default. This can prevent unnecessary synchronization with other streams.

Fixes:
- Generation of AVX instructions has been disabled to avoid performance bugs when calling external math functions that may use SSE instructions, especially on OS X.
- JIT functions can be removed by the garbage collector when they are no longer accessible.
- Various other reference counting fixes to prevent memory leaks.
- Fixed handling of exception when input argument is out of range.
- Prevent autojit functions from making unsafe numeric conversions when called with different numeric types.
- Fix a compilation error when an unhashable global value is accessed.
- Gracefully handle failure to enable faulthandler in the IPython Notebook.
- Fix a bug that caused loop lifting to fail if the loop was inside an else block.
- Fixed a problem with selecting CUDA devices in multithreaded programs on Linux.
- The pow() function (and ** operation) applied to two integers now returns an integer rather than a float.
- Numpy arrays using the object dtype no longer cause an exception in the autojit.
- Attempts to write to a global array will cause compilation to fall back to object mode, rather than attempt and fail at nopython mode.
- range() works with all negative arguments (ex: range(-10, -12, -1))

Version 0.13.4

Features:
- Setting and deleting attributes in object mode
- Added documentation of supported and currently unsupported numpy ufuncs
- Assignment to 1-D numpy array slices
- Closure variables and functions can be used in object mode
- All numeric global values in modules can be used as constants in JIT compiled code
- Support for the start argument in enumerate()
- Inplace arithmetic operations (+=, -=, etc.)
- Direct iteration over a 1D numpy array (e.g. "for x in array: ...") in nopython mode

Fixes:
- Support for NVIDIA compute capability 5.0 devices (such as the GTX 750)
- Vectorize no longer crashes/gives an error when bool_ is used as return type
- Return the correct dictionary when globals() is used in JIT functions
- Fix crash bug when creating dictionary literals in object mode
- Report more informative error message on import if llvmpy is too old
- Temporarily disable pycc --header, which generates incorrect function signatures.
Version 0.13.3

Features:
- Support for enumerate() and zip() in nopython mode
- Increased LLVM optimization of JIT functions to -O1, enabling automatic vectorization of compiled code in some cases
- Iteration over tuples and unpacking of tuples in nopython mode
- Support for dict and set (Python >= 2.7) literals in object mode

Fixes:
- JIT functions have the same __name__ and __doc__ as the original function.
- Numerous improvements to better match the data types and behavior of Python math functions in JIT compiled code on different platforms.
- Importing Numba will no longer throw an exception if the CUDA driver is present, but cannot be initialized.
- guvectorize now properly supports functions with scalar arguments.
- CUDA driver is lazily initialized

Version 0.13.2

Features:
- @vectorize ufunc now can generate SIMD fast path for unit strided array
- Added cuda.gridsize
- Added preliminary exception handling (raise exception class)

Fixes:
- UNARY_POSITIVE
- Handling of closures and dynamically generated functions
- Global None value

Version 0.13.1

Features:
- Initial support for CUDA array slicing

Fixes:
- Indirectly fixes numbapro when the system has an incompatible CUDA driver
- Fix numba.cuda.detect
- Export numba.intp and numba.intc

Version 0.13

Features:
- Open-sourcing NumbaPro CUDA Python support in numba.cuda
- Add support for ufunc array broadcasting
- Add support for mixed input types for ufuncs
- Add support for returning tuple from jitted function

Fixes:
- Fix store slice bytecode handling for Python 2
- Fix inplace subtract
- Fix pycc so that correct header is emitted
- Allow vectorize to work on functions with jit decorator

Version 0.12.1

This version fixed many regressions reported by users for the 0.12 release. This release contains a new loop-lifting mechanism that specializes certain loop patterns for nopython mode compilation. This avoids direct support for heap-allocating and other very dynamic operations.
Improvements:
- Add loop-lifting: JIT-ing loops in nopython mode for object mode code. This allows functions to allocate NumPy arrays and use Python objects, while the tight loops in the function can still be compiled in nopython mode. Any arrays that the tight loop uses should be created before the loop is entered.

Fixes:
- Add support for the majority of "math" module functions
- Fix for...else handling
- Add support for builtin round()
- Fix ternary if...else support
- Revive "numba" script
- Fix problems with some boolean expressions
- Add support for more NumPy ufuncs

Version 0.12

Version 0.12 contains a big refactor of the compiler. The main objective for this refactor was to simplify the code base to create a better foundation for further work. A secondary objective was to improve the worst case performance to ensure that compiled functions in object mode never run slower than pure Python code (this was a problem in several cases with the old code base). This refactor is still a work in progress and further testing is needed.

Main improvements:
- Major refactor of compiler for performance and maintenance reasons
- Better fallback to object mode when native mode fails
- Improved worst case performance in object mode

The public interface of numba has been slightly changed. The idea is to make it cleaner and more rational:
- The jit decorator has been modified so that it can be called without a signature. When called without a signature, it behaves as the old autojit. Autojit has been deprecated in favour of this approach.
- Jitted functions can now be overloaded.
- Added an "njit" decorator that behaves like the "jit" decorator with nopython=True.
- The numba.vectorize namespace is gone. The vectorize decorator will be in the main numba namespace.
- Added a guvectorize decorator in the main numba namespace. It is similar to numba.vectorize, but takes a dimension signature. It generates gufuncs. This is a replacement for the GUVectorize gufunc factory, which has been deprecated.
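A decorator that can be called both with and without a signature, as described for the reworked jit decorator, follows a standard Python pattern. A minimal sketch (not Numba's implementation — the "compile" step here just records its inputs):

```python
def jit(signature_or_function=None, **options):
    """Usable as bare @jit, as @jit("f8(f8)"), or as @jit(nopython=True)."""
    def wrap(func, signature=None):
        # Stand-in for real compilation: attach what we were given.
        func.signature = signature
        func.options = options
        return func

    if callable(signature_or_function):
        # Bare @jit: the decorated function itself is the first argument.
        return wrap(signature_or_function)
    # @jit(signature) or @jit(**options): return the actual decorator.
    return lambda func: wrap(func, signature_or_function)

@jit
def f(x):
    return x + 1

@jit("f8(f8)", nopython=True)
def g(x):
    return x * 2

print(f.signature, g.signature)  # None f8(f8)
```

The trick is the callable() check: when the decorator is used bare, Python passes the function itself as the first positional argument; otherwise that argument is a signature (or absent) and a real decorator must be returned.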
Main regressions (will be fixed in a future release):
- Creating new NumPy arrays is not supported in nopython mode
- Returning NumPy arrays is not supported in nopython mode
- NumPy array slicing is not supported in nopython mode
- Lists and tuples are not supported in nopython mode
- string, datetime, cdecimal, and struct types are not implemented yet
- Extension types (classes) are not supported in nopython mode
- Closures are not supported
- The raise keyword is not supported
- Recursion is not supported in nopython mode

Version 0.10

- Annotation tool (./bin/numba --annotate --fancy) (thanks to Jay Bourque)
- Open sourced prange
- Support for raise statement
- Pluggable array representation
- Support for enumerate and zip (thanks to Eugene Toder)
- Better string formatting support (thanks to Eugene Toder)
- Builtins min(), max() and bool() (thanks to Eugene Toder)
- Fix some code reloading issues (thanks to Björn Linse)
- Recognize NumPy scalar objects (thanks to Björn Linse)

Version 0.8

- Support for autojit classes
  - Inheritance not yet supported
- Python 3 support for pycc
- Allow retrieval of ctypes function wrapper
  - And hence support retrieval of a pointer to the function
- Fixed a memory leak of array slicing views

Version 0.7.2

- Official Python 3 support (Python 3.2 and 3.3)
- Support for intrinsics and instructions
- Various bug fixes (see)

Version 0.7

- Open sourced single-threaded ufunc vectorizer
- Open sourced NumPy array expression compilation
- Open sourced fast NumPy array slicing
- Experimental Python 3 support
- Support for typed containers: typed lists and tuples
- Support for iteration over objects
- Support object comparisons
- Preliminary CFFI support
  - Jit calls to CFFI functions (passed into autojit functions)
  - TODO: Recognize ffi_lib.my_func attributes
- Improved support for ctypes
- Allow declaring extension attribute types as through class attributes
- Support for type casting in Python
  - Get the same semantics with or without numba compilation
- Support for recursion
  - For jit methods and extension classes
- Allow jit functions as C callbacks
- Friendlier error reporting
- Internal improvements
- A variety of bug fixes

Version 0.6

- Python 2.6 support
- Programmable typing: allow users to add type inference for external code
- Better NumPy type inference: outer, inner, dot, vdot, tensordot, nonzero, where, binary ufuncs + methods (reduce, accumulate, reduceat, outer)
- Type based alias analysis, with support for strict aliasing
- Much faster autojit dispatch when calling from Python
- Faster numerical loops through data and stride pre-loading
- Integral overflow and underflow checking for conversions from objects
- Make Meta dependency optional

Version 0.5

- SSA-based type inference
  - Allows variable reuse
  - Allow referring to variables before lexical definition
- Support multiple comparisons
- Support for template types
- List comprehensions
- Support for pointers
- Many bug fixes
- Added user documentation

Version 0.3

- Changed default compilation approach to ast
- Added support for cross-module linking
- Added support for closures (can jit inner functions and return them) (see examples/closure.py)
- Added support for dtype structures (can access elements of structure with attribute access) (see examples/structures.py)
- Added support for extension types (numba classes) (see examples/numbaclasses.py)
- Added support for general Python code (use nopython to raise an error if the Python C-API is used, to avoid unexpected slowness from unimplemented features defaulting to generic Python)
- Fixed many bugs
- Added support to detect math operations.
- Added with python and with nopython contexts
- Added more examples

Many features need to be documented still. Look at examples and tests for more information.
Version 0.2

- Added an ast approach to compilation
- Removed d, f, i, b from numba namespace (use f8, f4, i4, b1)
- Changed function to autojit2
- Added autojit function to decorate calls to the function and use types of the variable to create compiled versions.
- Changed keyword arguments to jit and autojit functions to restype and argtypes, to be consistent with the ctypes module.
- Added pycc, a Python-to-shared-library compiler
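The restype/argtypes naming adopted in Version 0.2 mirrors the convention of the standard ctypes module, where a foreign function's return and parameter types are declared the same way. A small illustration (libm's cos is chosen purely as an example; it assumes libm can be located on the host system):

```python
import ctypes
import ctypes.util

# Declare the C-level signature the ctypes way: restype for the return
# type, argtypes for the parameter types.
libm = ctypes.CDLL(ctypes.util.find_library("m"))
libm.cos.restype = ctypes.c_double
libm.cos.argtypes = [ctypes.c_double]

print(libm.cos(0.0))  # 1.0
```

Without the restype declaration, ctypes would assume the function returns an int and silently misinterpret the double result, which is exactly the class of mistake an explicit signature avoids.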
https://numba.readthedocs.io/en/0.52.0/release-notes.html
I've been looking through the JBoss code for a bit and could not find the answer. I've looked at the object APIs, and could not figure it out. I'd like to do it in a non-JBoss-specific manner, if possible. Any suggestions, please?

Thanks, Greg.

There is no portable way to do this; MCFs (managed connection factories) aren't necessarily controlled by JMX. There is a "standard name" in the JSR-77 namespace that looks like a JMX object name, but this isn't necessarily a JMX MBean, and in JBoss it isn't the same MBean as the real deployment. It will probably take at least until J2EE 1.5 before JMX is standardized. I doubt it will be specified down to the level you want.

Regards, Adrian
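The portable part of the picture is the JMX query API itself: given an MBeanServer and an ObjectName pattern, you can look up whatever MBeans are actually registered. The sketch below runs against the JVM's own platform MBean server rather than JBoss, and the JSR-77 style pattern shown in the comment is hypothetical — as noted above, there is no guarantee the server registers such a name as a real MBean:

```java
import java.lang.management.ManagementFactory;
import java.util.Set;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class QueryMBeans {
    // Query the in-process platform MBean server by ObjectName pattern.
    static Set<ObjectName> query(String pattern) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        return server.queryNames(new ObjectName(pattern), null);
    }

    public static void main(String[] args) throws Exception {
        // A JSR-77 style name such as
        // "jboss.management.local:j2eeType=JCAManagedConnectionFactory,*"
        // would be queried the same way, IF the server registered it as a
        // real MBean -- which, per the answer above, JBoss does not guarantee.
        for (ObjectName name : query("java.lang:type=*")) {
            System.out.println(name);
        }
    }
}
```

Querying by pattern keeps the client code server-agnostic; what is not portable is any assumption about which names a given application server chooses to register.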
https://developer.jboss.org/thread/71808
import "go.chromium.org/luci/appengine/tq"

Package tq implements a simple routing layer for task queue tasks.

Retry is an error tag used to indicate that the handler wants the task to be redelivered later. See the Handler doc for more details.

RequestHeaders returns the special task-queue HTTP request headers for the current task handler. Returns an error if called from outside of a task handler.

type Dispatcher struct {
    BaseURL string // URL prefix for all URLs, "/internal/tasks/" by default
    // contains filtered or unexported fields
}

Dispatcher submits and handles task queue tasks.

AddTask submits given tasks to an appropriate task queue. It means, at some later time in some other GAE process, callbacks registered as handlers for corresponding proto types will be called. Unlike taskqueue.Add, ErrTaskAlreadyAdded is not considered an error.

DeleteTask deletes the specified tasks from their queues. Unlike taskqueue.Delete, attempts to delete an unknown or tombstoned task are not considered errors.

func (d *Dispatcher) GetQueues() []string

GetQueues returns the names of task queues known to the dispatcher.

func (d *Dispatcher) InstallRoutes(r *router.Router, mw router.MiddlewareChain)

InstallRoutes installs appropriate HTTP routes in the router. Must be called only after all task handlers are registered!

func (d *Dispatcher) Internals() interface{}

Internals is used by the tqtesting package and must not be used directly. For that reason it returns an opaque interface type, to curb the curiosity. We do this to avoid linking the testing implementation into production binaries. We make testing live in a different package and use this secret back door API to talk to Dispatcher.

func (d *Dispatcher) RegisterTask(prototype proto.Message, cb Handler, queue string, opts *taskqueue.RetryOptions)

RegisterTask tells the dispatcher that tasks of a given proto type should be handled by the given handler and routed through the given task queue. 'prototype' should be a pointer to some concrete proto message.
It will be used only for its type signature. Intended to be called during process startup. Panics if such a message has already been registered.

Handler is called to handle one enqueued task. The passed context is produced by a middleware chain installed with InstallHandlers. In addition it carries the task queue request headers, accessible through the RequestHeaders(ctx) function. They are passed implicitly via the context to avoid complicating the Handler signature for a feature that most callers aren't going to use.

type Task struct {
    // Payload is task's payload as well as indicator of its type.
    //
    // Tasks are routed based on type of the payload message, see RegisterTask.
    Payload proto.Message

    // NamePrefix, if not empty, is a string that will be prefixed to the task's
    // name. Characters in NamePrefix must be appropriate task queue name
    // characters. NamePrefix can be useful because the Task Queue system allows
    // users to search for tasks by prefix.
    //
    // Lexicographically close names can cause hot spots in the Task Queues
    // backend. If NamePrefix is specified, users should try and ensure that
    // it is friendly to sharding (e.g., begins with a hash string).
    //
    // Setting NamePrefix and/or DeduplicationKey will result in a named task
    // being generated. This task can be cancelled using DeleteTask.
    NamePrefix string

    // DeduplicationKey is optional unique key of the task.
    //
    // If a task of a given proto type with a given key has already been enqueued
    // recently, this task will be silently ignored.
    //
    // Such tasks can only be used outside of transactions.
    //
    // Setting NamePrefix and/or DeduplicationKey will result in a named task
    // being generated. This task can be cancelled using DeleteTask.
    DeduplicationKey string

    // Title is optional string that identifies the task in HTTP logs.
    //
    // It will show up as a suffix in task handler URL. It exists exclusively to
    // simplify reading HTTP logs. It serves no other purpose! In particular,
    // it is NOT a task name.
    //
    // Handlers won't ever see it. Pass all information through the task body.
    Title string

    // Delay specifies the duration the task queue service must wait before
    // executing the task.
    //
    // Either Delay or ETA may be set, but not both.
    Delay time.Duration

    // ETA specifies the earliest time a task may be executed.
    //
    // Either Delay or ETA may be set, but not both.
    ETA time.Time

    // Retry options for this task.
    //
    // If given, overrides default options set when this task was registered.
    RetryOptions *taskqueue.RetryOptions
}

Task contains the task body and additional parameters that influence how it is routed.

Name generates and returns the task's name. If the task is not a named task (doesn't have NamePrefix or DeduplicationKey set), this will return an empty string.

Package tq imports 21 packages (graph) and is imported by 19 packages. Updated 2020-07-02.
https://godoc.org/go.chromium.org/luci/appengine/tq
In this section we will discuss the StringWriter in Java. java.io.StringWriter writes a String to an output stream: this character stream collects the characters it is given in a string buffer, from which a string can then be constructed. StringWriter provides two constructors to create objects.

Example

An example is given here which demonstrates how to use StringWriter in a Java program. In this example I have created a Java class named JavaStringWriterExample.java which creates an object of StringWriter and uses it to write a string to the stream.

Source Code

JavaStringWriterExample.java

import java.io.StringWriter;
import java.io.IOException;

public class JavaStringWriterExample {
    public static void main(String args[]) {
        String str = "Java StringWriter Example";
        try {
            StringWriter sw = new StringWriter();
            sw.write(str);
            StringBuffer sb = sw.getBuffer();
            System.out.println();
            System.out.println("StringBuffer = " + sb);
            System.out.println("String written by StringWriter = " + sw);
            System.out.println();
            sw.close();
        } catch (IOException e) {
            System.out.println(e);
        }
    }
}

Output

When you execute the above example you will get output like the following:

StringBuffer = Java StringWriter Example
String written by StringWriter = Java StringWriter Example
http://www.roseindia.net/java/example/java/io/stringwriter.shtml
Find last Sunday of the month in Python

Doing this with cron alone is quite annoying, so I chose Python instead.

import datetime
import calendar

def checkDay():
    today = datetime.date.today()
    # With calendar's default week (Monday first), week[-1] is the Sunday;
    # a 0 entry means that day belongs to a neighbouring month.
    last_sunday = max(week[-1] for week in calendar.monthcalendar(today.year, today.month))
    if today.day == last_sunday:
        print('Today is the last Sunday, starting sync now.')
    else:
        print('Today is not the last Sunday, stopping sync now.', today.day, last_sunday)
        raise SystemExit

checkDay()
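For reuse outside a one-off script, the same computation can be wrapped in a function. The function names below and the second, date-arithmetic variant are my additions, not part of the original post; the second approach avoids building the month matrix entirely.

```python
import calendar
import datetime

def last_sunday(year, month):
    """Day number of the last Sunday of the given month."""
    # week[-1] is Sunday with calendar's default Monday-first weeks;
    # 0 entries are padding days from adjacent months.
    return max(week[-1] for week in calendar.monthcalendar(year, month))

def last_sunday_date(year, month):
    """Same result as a datetime.date, stepping back from the month's end."""
    last_day = calendar.monthrange(year, month)[1]
    end = datetime.date(year, month, last_day)
    # weekday(): Monday == 0 ... Sunday == 6
    return end - datetime.timedelta(days=(end.weekday() - 6) % 7)

print(last_sunday(2021, 7))        # → 25
print(last_sunday_date(2021, 2))   # → 2021-02-28
```

Either function makes the cron-style check above a one-line comparison against `datetime.date.today()`.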
https://ovvo.cc/find-last-sunday-of-the-month-in-python/
Issued: 2018-04-12
Updated: 2018-04-12

RHBA-2018:1106 - Bug Fix Advisory

Synopsis

OpenShift Container Platform 3.6 and 3.5 bug fix update

Type/Severity

Bug Fix Advisory

Topic

Red Hat OpenShift Container Platform releases 3.6.173.0.112 and 3.5.5.31.66 are now available. See the following advisory for the container images for this release:

Space precludes documenting all of the bug fixes in this advisory. See the following Release Notes documentation, which will be updated shortly for these releases, for details about these changes:

All OpenShift Container Platform 3.6 and 3.5 users are advised to upgrade to these updated packages and images.

Solution

Before applying this update, make sure all previously released errata relevant to your system have been applied.

For OpenShift Container Platform 3.6, see the following documentation, which will be updated shortly for release 3.6.173.0.112, for important instructions on how to upgrade your cluster and fully apply this asynchronous errata update:

For OpenShift Container Platform 3.5, see the following documentation, which will be updated shortly for release 3.5.5.31.66:

- Red Hat OpenShift Container Platform 3.6 x86_64
- Red Hat OpenShift Container Platform 3.5 x86_64

Fixes

- BZ - 1461374 - Failed to attach the cinder volume after upgrade
- BZ - 1468199 - The PVC AccessModes of mediawiki apb is invalid
- BZ - 1471730 - Mediawiki123 can't access after binding to postgresql
- BZ - 1499762 - .all index is missing on kibana UI
- BZ - 1505959 - Fluentd not connected to Elasticsearch
- BZ - 1506286 - [3.6] missing messages + [error]: record cannot use elasticsearch index name type project_full: record is missing kubernetes.namespace_id field:
- BZ - 1507123 - etcd migrate v2 -> v3 playbook fails - Cannot link, file exists at destination
- BZ - 1509289 - [3.5] eviction manager sometimes evicts all pods
- BZ - 1511294 - Containerized uninstallation playbook does not clean up LB configuration
- BZ - 1513284 - [3.5.1]project indices can not be found on
kibna UI
- BZ - 1514110 - [3.5] Aggregated Logging replacing all log levels with '3' and '6' after upgrade to 3.5 from 3.4
- BZ - 1517994 - Installing CNS with osm_default_node_selector fails
- BZ - 1518020 - etcd v2 to v3 migration updates named_certificates when it shouldn't be
- BZ - 1525415 - A lot of error messages in fluentd pod logs for deleted namespaces
- BZ - 1526485 - [3.5] Upgrade failed for AnsibleUndefinedVariable: 'l_bind_docker_reg_auth' is undefined
- BZ - 1527389 - [3.6] Nodes becomes NotReady, when status update connection is dropped/severed to master, causing node to wait on 15min default net/http timeout before trying again.
- BZ - 1527973 - Backport to 3.6 fix for failed to provision volume for claim test/pv0001-claim with StorageClass cinder (re-authentication fails)
- BZ - 1528370 - Need SWEET32 fix backported to 3.6
- BZ - 1528383 - update openshift_to_ovs_version for 3.6 branch
- BZ - 1530367 - atomic-openshift-master-controllers crashing due to apparent StatefulSet isRunningAndReady issue
- BZ - 1533181 - [3.6] Failed docker builds leave temporary containers on node
- BZ - 1533938 - Jenkins API authentication (with a serviceaccount) fails until the first web access (then it works)
- BZ - 1534775 - oadm diagnostics NetworkCheck fails to schedule pods if there is a default node selector in the master-config
- BZ - 1536189 - deploymentconfig controller creates event with large oldReplicas
- BZ - 1537120 - Invalid request Client state could not be verified
- BZ - 1537344 - docker_image_availability should honor custom registry-console locations
- BZ - 1538431 - Links will be broken and redirect request to homepage while pod's replicas more than 11 in any application
- BZ - 1538778 - [3.6] Installer failed to configure AWS cloudprovider for EC2 C5 instance
- BZ - 1538895 - [3.6] Invalid entries in namedCertificates when using openshift_master_named_certificates
- BZ - 1539091 - [3.6] Setting multiple values for OPTIONS with containerized
installation fails to start node
- BZ - 1539150 - [3.6] docker_image_availability check does not respect override variables for containerized components
- BZ - 1539855 - Client side rate limiting makes migrations excessively long for large clusters
- BZ - 1539889 - [starter-ca-central-1] errors during migration of storage on 3.8
- BZ - 1542166 - eviction-soft never triggered because grace-period counter is reset
- BZ - 1543360 - [3.6] Updating trigger with oc set trigger --from-image points to unextected namespace when using "-n"
- BZ - 1543402 - A-MQ Components not visible in Java Console when using IE11 browser
- BZ - 1543435 - A-MQ Components not visible in Java Console when using IE11 browser
- BZ - 1543748 - [3.5] Install fails when Host name has capital letter assigned to it
- BZ - 1543749 - [3.6] Install fails when Host name has capital letter assigned to it
- BZ - 1544200 - Quick installation failed by "Requested profile 'atomic-guest' doesn't exist."
- BZ - 1544395 - [GSS] Migrating etcd Data: v2 to v3 playbook issues (if missing certificates)
- BZ - 1544399 - [GSS] Migrating etcd Data: v2 to v3 playbook issues (openshift.master.cluster_method missing)
- BZ - 1544737 - first containerized etcd not upgraded to latest image when migrating etcd v2-> v3
- BZ - 1545089 - Cannot add Roles for the user after adding few roles for the user in dashboard
- BZ - 1545907 - SDN traffic being forced through the proxy
- BZ - 1547347 - Kibana page displays "OPENSHIFT ORIGIN" in OCP
- BZ - 1547688 - [3.6] exception output while using view archive link in pod log for default project
- BZ - 1550470 - [3.6] Master api hang when 1 of master/etcd down
- BZ - 1554707 - Fail to migrate etcd v2 to v3 due to undefined variable openshift_ca_host
- BZ - 1556838 - [3.5] Mounting file in a subpath fails if file was created in initContainer
- BZ - 1556897 - Duplicate elasticsearch entries increase as namespaces increase (constant message rate)
- BZ - 1556936 - After etcd v2 to v3
migration, masters are restarted before persisting config changes to use storage-backend etcd3
- BZ - 1564182 - [3.6] Upgrade playbook fails with TemplateRuntimeError: no test named 'equalto'
- BZ - 1565077 - Unknown filter plugin elasticsearch_genid in fluentd pod

CVEs

(none)

References

(none)

Red Hat OpenShift Container Platform 3.6
Red Hat OpenShift Container Platform 3.5

The Red Hat security contact is secalert@redhat.com. More contact details at.
https://access.redhat.com/errata/RHBA-2018:1106
I recently started reading a very interesting article series explaining how databases work internally. As developers we often work with technologies without fully understanding and appreciating their internals. Since this article series is not only about explaining databases theoretically but also about implementing a toy SQLite clone, I decided to follow along and write my own implementation in Swift. Because why not?

Overview

The first article in the series gives a high-level overview of the architecture of a database.

Front-end

A database consists of a front-end and a back-end. The front-end is made of the following components:

Tokenizer (input: SQL query, output: individual tokens)
Parser (input: tokens, output: parse tree or abstract syntax tree)
Code Generator (input: tree representation, output: VM byte code)

Back-end

The back-end consists of the following components:

Virtual Machine (input: byte code, output: B-tree instructions)
B-Tree (input: B-tree instructions, output: pager commands)
Pager (input: pager commands, output: pages)
OS Interface

B-trees are used to store database tables and indexes. Each node in the B-tree is one page in length. The B-trees are responsible for retrieving pages from disk and writing them back by issuing commands to the pager. Apart from disk I/O, the pager also caches recently accessed pages. The OS interface is simply the underlying OS and the facilities it provides for tasks such as file I/O.

REPL

In this first part we'll get started with writing a very simple REPL for our database. When starting sqlite from the command line you get a prompt where you can enter commands. The REPL reads a line and takes an action depending on the command that was given. We start with a simple REPL that only knows the .exit command.
import Foundation

func printPrompt() {
    print("db >")
}

func readInput() -> String {
    if let line = readLine() {
        return line
    } else {
        return ""
    }
}

enum EXIT: Int32 {
    case EXIT_SUCCESS = 0
    case EXIT_FAILURE
}

while(true){
    printPrompt()
    let input = readInput();
    if input == ".exit" {
        exit(EXIT.EXIT_SUCCESS.rawValue);
    } else {
        print("Unrecognized command \(input). \n")
    }
}

The while loop at the bottom is an infinite loop that prints the prompt "db >" and waits for user input to process. The printPrompt() and readInput() functions and the EXIT enum type are fairly self-explanatory. If the received input is .exit the program terminates with a success exit code; otherwise it tells the user that the given input is unrecognized, as we're not yet able to recognize any commands other than .exit. That's it for the first part of my copycat How databases work series. Make sure to check out the original article series for an implementation in C and more in-depth explanations.
https://celsiusnotes.com/how-databases-work-part-1/
We're going to pick up where we left off at the end of the exploration and define a linear model with two independent variables determining the dependent variable, Interest Rate. Our investigation is now defined as:

Investigate FICO Score and Loan Amount as predictors of Interest Rate for the Lending Club sample of 2,500 loans.

We use Multivariate Linear Regression to model Interest Rate variance with FICO Score and Loan Amount using:

$$InterestRate = a_0 + a_1 * FICOScore + a_2 * LoanAmount$$

We're going to use modeling software to generate the model coefficients $a_0$, $a_1$ and $a_2$ and then some error estimates that we'll only touch upon lightly at this point.

%pylab inline
import pylab as pl
import numpy as np
#from sklearn import datasets, linear_model
import pandas as pd
import statsmodels.api as sm

# import the cleaned up dataset
df = pd.read_csv('../datasets/loanf.csv')

intrate = df['Interest.Rate']
loanamt = df['Loan.Amount']
fico = df['FICO.Score']

# reshape the data from a pandas Series to columns
# the dependent variable
y = np.matrix(intrate).transpose()
# the independent variables shaped as columns
x1 = np.matrix(fico).transpose()
x2 = np.matrix(loanamt).transpose()

# put the two columns together to create an input matrix
# if we had n independent variables we would have n columns here
x = np.column_stack([x1,x2])

# create a linear model and fit it to the data
X = sm.add_constant(x)
model = sm.OLS(y,X)
f = model.fit()

print 'Coefficients: ', f.params[0:2]
print 'Intercept: ', f.params[2]
print 'P-Values: ', f.pvalues
print 'R-Squared: ', f.rsquared

Populating the interactive namespace from numpy and matplotlib
Coefficients:  [ 72.88279832 -0.08844242]
Intercept:  0.000210747768548
P-Values:  [ 0.00000000e+000 0.00000000e+000 5.96972978e-203]
R-Squared:  0.656632624649

So we have a lot of numbers here and we're going to understand some of them. One caveat first: sm.add_constant prepends the constant column, so the fitted parameters come back in the order $[a_0, a_1, a_2]$ and the labels printed above are shifted. The first value, 72.88, is actually the intercept $a_0$; the next, -0.088, is $a_1$, the FICO Score coefficient; and the last, 0.00021, is $a_2$, the Loan Amount coefficient.
How good are these numbers, how reliable? We need to have some idea. After all, we are estimating. We're going to learn a very simple, pragmatic way to use a couple of these. Let's look at the second two rows of numbers. We are going to talk loosely here so as to give some flavor of why these are important; this is by no means a formal explanation.

P-Values are probabilities. Informally, each number represents a probability that the respective coefficient we have is a really bad one. To be fairly confident we want this probability to be close to zero. The convention is that it needs to be 0.05 or less. For now, suffice it to say that if this holds for each of our coefficients then we have good confidence in the model. If one or other of the p-values is equal to or greater than 0.05 then we have less confidence in that particular dimension being useful in modeling and predicting.

$R$-$squared$ or $R^2$ is a measure of how much of the variance in the data is captured by the model. What does this mean? For now let's understand this as a measure of how well the model captures the spread of the observed values, not just the average trend. R is a coefficient of correlation between the independent variables and the dependent variable - i.e. how much the Y depends on the separate X's. R lies between -1 and 1, so $R^2$ lies between 0 and 1. A high $R^2$ would be close to 1.0, a low one close to 0. The value we have, 0.65, is a reasonably good one. It suggests an R with absolute value in the neighborhood of 0.8. The details of these error estimates deserve a separate discussion which we defer until another time.

In summary, we have a linear multivariate regression model for Interest Rate based on FICO score and Loan Amount which is well described by the parameters above.
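The fit itself can be reproduced without statsmodels: ordinary least squares is just a linear solve, and $R^2$ falls out of the residuals. The sketch below uses small made-up numbers chosen near the fitted coefficients, not the Lending Club sample, so the recovery is exact.

```python
import numpy as np

# Illustrative synthetic data: rate = a0 + a1*fico + a2*amount, with no noise,
# so the least-squares fit recovers the coefficients exactly. These numbers
# are invented for the sketch; they are not the loan dataset.
fico = np.array([640.0, 680.0, 700.0, 720.0, 750.0, 800.0])
amount = np.array([5000.0, 10000.0, 6000.0, 12000.0, 8000.0, 15000.0])
rate = 72.0 - 0.088 * fico + 0.0002 * amount

# Design matrix with the constant column first, as sm.add_constant builds it.
X = np.column_stack([np.ones_like(fico), fico, amount])
params, *_ = np.linalg.lstsq(X, rate, rcond=None)
a0, a1, a2 = params

# R^2 = 1 - SS_res / SS_tot
pred = X @ params
ss_res = np.sum((rate - pred) ** 2)
ss_tot = np.sum((rate - rate.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot

print(round(a1, 4), round(a2, 4), round(r_squared, 4))  # → -0.088 0.0002 1.0
```

On the real, noisy loan data the same computation gives the 0.65 value above instead of 1.0; the noise-free data here is only to make the algebra visible.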
http://nbviewer.jupyter.org/github/nborwankar/LearnDataScience/blob/master/notebooks/A3.%20Linear%20Regression%20-%20Analysis.ipynb
dev_test.dart

dev_test brings (back) the solo and skip features on top of the test package.

It is a layer on top of the test package that adds (back) the solo and skip features so that you can run/debug a filtered set of tests from the IDE without having to use pub run test -n xxx. It remains, however, compatible with the existing test package, and tests can be run using pub run test. Make sure you have both dev_test and test as dependencies in your pubspec.yaml.

solo_test, solo_group, skip_test and skip_group are marked as deprecated so that you don't commit code that might skip many needed tests (check the Dart analysis result). Running tests will also report whether any tests were skipped.

Usage

Your pubspec.yaml should contain the following dev_dependencies:

dev_dependencies:
  test: any
  dev_test: any

In your xxxx_test.dart files replace

import 'package:test/test.dart';

with

import 'package:dev_test/test.dart';

testDescriptions adds information about the current running test (a list of Strings naming the current group and test).

devTestRun will optionally be needed if you have a mix of test and dev_test, to make sure the declared tests or groups belong to the correct group they are declared in.

Testing

Testing with dartdevc:

pub serve test --web-compiler=dartdevc --port=8079
pub run test -p chrome --pub-serve=8079
https://pub.dev/documentation/dev_test/latest/
A web crawler is a hard-working bot that gathers information or indexes pages on the Internet. It starts at some seed URLs, finds every hyperlink on each page, and then visits those hyperlinks recursively.

1. Choose an Ideal Programming Language

Here is a ranking of popular languages for developing web crawlers (based on the number of relevant repositories hosted on GitHub in February 2013):

Python or Ruby is probably a wise choice: the main speed limit of a web crawler is network latency, not CPU, so choosing Python or Ruby as the language to develop a web crawler will make life easier. Python provides some very useful standard libraries, such as urllib, httplib and regular expressions (re); those libraries can handle lots of the work. Python also has plenty of valuable third-party libraries worth a try:

scrapy, a web scraping framework.
urllib3, a Python HTTP library with thread-safe connection pooling and file post support.
greenlet, a lightweight concurrent programming framework.
twisted, an event-driven networking engine.

2. Reading Some Simple Open-source Projects

You need to figure out how exactly a crawler works. Here is a very simple crawler written in Python, in 10 lines of code.

import re, urllib

crawled_urls = set()

def crawl(url):
    for new_url in re.findall('''href=["'](.[^"']+)["']''', urllib.urlopen(url).read()):
        if new_url not in crawled_urls:
            print new_url
            crawled_urls.add(new_url)

if __name__ == "__main__":
    url = ''
    crawl(url)

A crawler usually needs to keep track of which URLs still need to be crawled, and which URLs have already been crawled (to avoid infinite loops).

Other simple projects:

Python Crawler, a very simple crawler using Berkeley DB to store results.
pholcidae, a tiny Python module that allows you to write your own crawl spider fast and easily.

3. Choosing the Right Data Structure

Choosing a proper data structure will make your crawler efficient.
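The usual choice, discussed in the next sections, is a thread-safe queue for the frontier of URLs to visit plus a hash set for O(1) "have we seen this?" lookups. A minimal sketch with only the standard library is below; fetch_links is a caller-supplied stand-in for the real HTTP fetch and parse step, so the sketch runs without any network access.

```python
import queue
import threading

def crawl(seeds, fetch_links, num_workers=4):
    """Breadth-first crawl: fetch_links(url) returns the URLs linked from url."""
    frontier = queue.Queue()          # URLs still to be crawled
    seen = set(seeds)                 # URLs already enqueued (O(1) membership)
    seen_lock = threading.Lock()      # protects `seen` across workers
    for url in seeds:
        frontier.put(url)

    def worker():
        while True:
            try:
                url = frontier.get(timeout=0.25)
            except queue.Empty:
                return                # frontier drained, worker exits
            for link in fetch_links(url):
                with seen_lock:
                    if link in seen:
                        continue
                    seen.add(link)
                frontier.put(link)

    threads = [threading.Thread(target=worker) for _ in range(num_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return seen

# Exercise the sketch on a tiny in-memory "web".
graph = {"a": ["b", "c"], "b": ["c", "d"], "c": [], "d": ["a"]}
print(sorted(crawl(["a"], lambda u: graph.get(u, []))))   # → ['a', 'b', 'c', 'd']
```

A real crawler would replace the timeout-based shutdown with explicit work tracking and swap fetch_links for an HTTP fetch, but the queue-plus-set shape stays the same.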
A queue or stack is a good choice for storing the URLs that need to be crawled; a hash table or red-black tree is appropriate for tracking the already-crawled URLs, since both provide fast lookup.

Search time complexity: hash table O(1), red-black tree O(log n).

But what if your crawler needs to deal with tons of URLs and your memory is not enough? Try storing a checksum of each URL string instead of the full string; if that is still not enough, you may need a cache algorithm (such as LRU) to dump some URLs to disk.

4. Multithreading and Asynchronous I/O

If you are crawling sites on different servers, using multithreading or an asynchronous mechanism will save you lots of time. Remember to keep your crawler thread-safe: you need a thread-safe queue to share the results and a thread controller to handle the threads. Asynchronous I/O is an event-based mechanism: your crawler enters an event loop, and when an event triggers (some resource becomes available), the crawler wakes up to deal with it (usually by executing a callback function). Asynchronous I/O can improve both the throughput and the latency of your crawler.

Related resources: How to write a multi-threaded webcrawler, Andreas Hess

5. HTTP Persistent Connections

Every time you send an HTTP request you need to open a TCP socket connection, and when the request finishes, the socket is closed. When you crawl lots of pages on the same server, you open and close sockets again and again, and the overhead becomes a real problem.

Connection: Keep-Alive

Use this header in your HTTP request to tell the server that your client supports keep-alive. Your code should also be modified accordingly.

6. Efficient Regular Expressions

You should really figure out how regexes work; a good regex makes a real difference in performance. When your web crawler parses the information in an HTTP response, the same regex will execute frequently. Compiling a regex takes a little more time up front, but it will run faster when you use it.
Note that if you are using Python (or .NET), the runtime will automatically compile and cache regexes, but it may still be worthwhile to do it manually: you can give the compiled regex a proper name, which makes your code more readable. If you want parsing to be even faster, you probably need to write a parser yourself.

Related resources:

Mastering Regular Expressions, Third Edition by Jeffrey Friedl.
Performance of Greedy vs. Lazy Regex Quantifiers, Steven Levithan
Optimizing regular expressions in Java, Cristian Mocanu
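Section 6's advice, compile once and give the pattern a clear name, looks like this in practice; urljoin also repairs the relative-link weakness of the ten-line crawler above. The pattern is a lightly simplified version of the one in that example, and the URLs are placeholders.

```python
import re
from urllib.parse import urljoin

# Compiled once, named clearly; reused for every fetched page.
HREF_RE = re.compile(r'''href=["']([^"']+)["']''')

def extract_links(base_url, html):
    """Return absolute URLs for every href found in html."""
    return [urljoin(base_url, href) for href in HREF_RE.findall(html)]

page = '<a href="/docs">docs</a> <a href=\'https://other.example/x\'>x</a>'
print(extract_links("https://site.example/index.html", page))
# → ['https://site.example/docs', 'https://other.example/x']
```

For real HTML an actual parser (as the article suggests) is more robust than a regex, but for a quick crawler this compiled pattern is the usual shortcut.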
http://rmmod.com/effective-web-crawler/
A port of log4js to node.js

The full documentation is available here. There have been a few changes between log4js 1.x and 2.x (and 0.x too). You should probably read this migration guide if things aren't working.

Out of the box it supports the following features:

Optional appenders are available:

Having problems? Jump on the slack channel, or create an issue. If you want to help out with the development, the slack channel is a good place to go as well.

npm install log4js

Minimalist version:

var log4js = require("log4js");
var logger = log4js.getLogger();
logger.level = "debug";
logger.debug("Some debug messages");

By default, log4js will not output any logs (so that it can safely be used in libraries). The level for the default category is set to OFF. To enable logs, set the level (as in the example). This will then output to stdout with the coloured layout (thanks to masylum), so for the above you would see:

[2010-01-17 11:43:37.987] [DEBUG] [default] - Some debug messages

See example.js for a full example, but here's a snippet (also in examples/fromreadme.js):

const log4js = require("log4js");
log4js.configure({
  appenders: { cheese: { type: "file", filename: "cheese.log" } },
  categories: { default: { appenders: ["cheese"], level: "error" } }
});

const logger = log4js.getLogger("cheese");
logger.trace("Entering cheese testing");
logger.debug("Got cheese.");
logger.info("Cheese is Comté.");
logger.warn("Cheese is quite smelly.");
logger.error("Cheese is too ripe!");
logger.fatal("Cheese was breeding ground for listeria.");

Output (in cheese.log):

[2010-01-17 11:43:37.987] [ERROR] cheese - Cheese is too ripe!
[2010-01-17 11:43:37.990] [FATAL] cheese - Cheese was breeding ground for listeria.
If you're writing a library and would like to include support for log4js, without introducing a dependency headache for your users, take a look at log4js-api. There's also an example application.

TypeScript:

import { configure, getLogger } from "log4js";
configure("./filename");
const logger = getLogger();
logger.level = "debug";
logger.debug("Some debug messages");

configure({
  appenders: { cheese: { type: "file", filename: "cheese.log" } },
  categories: { default: { appenders: ["cheese"], level: "error" } }
});

We're always looking for people to help out. Jump on slack and discuss what you want to do. Also, take a look at the rules before submitting a pull request.

The original log4js was distributed under the Apache 2.0 License, and so is this. I've tried to keep the original copyright and author credits in place, except in sections that I have rewritten extensively.
https://xscode.com/log4js-node/log4js-node
What does it take to support several programming languages within one environment? .NET, which has taken language interoperability to new heights, shows that it's possible, but only with the right design, the right infrastructure, and appropriate effort from both compiler writers and programmers. In this article, I'd like to go deeper than what I've seen published on the topic, to elucidate what it takes to provide true language openness. The experience that my colleagues have accumulated over the last three years of working to port Eiffel to .NET, as well as the countless discussions we've had with other .NET language implementers, informs this discussion.

Who Needs More Than One Language?

Let's start with the impolite question: Should one really care about multilanguage support? When this feature was announced at .NET's July 2000 debut, Microsoft's competitors sneered that it wasn't anything anyone needed. I've heard multilanguage development dismissed, or at least questioned, on the argument that most projects simply choose one language and stay with it. But that argument doesn't really address the issue. For one thing, it sounds too much like asserting, from personal observation, that people in Singapore don't like skiing. Lack of opportunity doesn't imply lack of desire or need. Before .NET, the effort required to interface modules from multiple languages was enough to make many people stick to just one; but, with an easy way to combine languages seamlessly and effortlessly, they may, as early experience with .NET suggests, start to appreciate their newfound freedom to mix and match languages.

Even more significant is the matter of libraries. Whether your project uses one language or more, it can take advantage of reusable libraries, whose components may have originated in different source languages. Here, interoperability means that you can use whatever components best suit your needs, regardless of creed or language of origin.
This ability to mix languages offers great promise for the future of programming languages, as the practical advance of new language designs has been hindered by the library issue: Though you may have conceived the best language in the world, implemented an optimal compiler and provided brilliant tools, you still might not get the users you deserve because you can't match the wealth of reusable components that other languages are able to provide, merely because they've been around longer. Building bridges to these languages helps, but it's an endless effort if you have to do it separately for each one. In recent years, this library compatibility issue may have been the major impediment to the spread of new language ideas, regardless of their intrinsic value.

Language interoperability can overturn this obstacle. Under .NET, as long as your language implementation satisfies the basic interoperability rules of the environment (as explained in the following examples), you can take advantage of components written in any other language whose implementers have adhered to the same rules. That still means some work for compiler writers, but it's work they must do once for their language, not once for each language with which they want to interface. Everyone will benefit, even the Java community: Now that there's competition again, new constructs are (surprise!) again being considered for Java; one hears noises, for example, about Sun finally introducing genericity sometime in the current millennium. Such are the virtues of openness and competition.

Language Interoperability at Work

Multilanguage communication techniques are nothing new. For some time, Eiffel has included an "external" mechanism for calling out to C and other languages, and a call-in mechanism known as Cecil (which is similar to the Java Native Methods Interface). But all this only addresses calls; .NET goes much further:

- A routine written in a language L1 may call another routine written in a different language L2.
- A module in L1 may declare a variable whose type is a class declared in L2, and then call the corresponding L2 routines on that variable.
- If both languages are object oriented, a class in L1 can inherit from a class in L2.
- Exceptions triggered by a routine written in L1 and not handled on the L1 side will be passed to the caller, which, if written in L2, will process it using L2's own exception-handling mechanism.
- During a debugging session, you may move freely and seamlessly across modules written in L1 and L2.

I don't know about you, but I've never seen anything coming even close to this level of interoperability.

Affirmative Action

Let's examine how .NET's language interoperation works. Here's the beginning of an ASP.NET page (from an example at dotnet.eiffel.com). The associated system is written mainly in Eiffel, but you wouldn't guess this from the page text; as stated by the ASP.NET PAGE LANGUAGE directive, the program code on the page itself, introduced by <SCRIPT RUNAT="SERVER">, is in C#:

<%@ Assembly

/* Start of C# code */
Registrar conference_registrar;
bool registered;
String error_message;

void Page_Init(Object Source, EventArgs E) {
    conference_registrar = new Registrar();
    conference_registrar.start();
    ... More C# code ...
}
... More HTML ...

The first C# line is the declaration of a C# variable called conference_registrar, of type REGISTRAR. On the subsequent lines, we create an instance of that class through a new expression, and assign it to conference_registrar; and we call the procedure start on the resulting object. Presumably, REGISTRAR is just some C# class in this system. Presume not. Class REGISTRAR is an Eiffel class.
The only C# code in this example application is on the ASP.NET page, and consists of only a few more lines than shown above; its task is merely to read the text entered into the various fields of the page by a Web site visitor and to pass it on, through the conference_registrar object, to the rest of the system: the part written in Eiffel that does the actual processing.

Nothing in the above example (or the rest of the ASP.NET page) mentions Eiffel. REGISTRAR is not declared as an Eiffel class, or a class in any specific language: It's simply used as a class. The expression new REGISTRAR() that creates an instance of the class might look to the unsuspecting C# programmer like a C# creation, but in fact it calls the default creation procedure (constructor) of the Eiffel class. Not that this makes any difference at the level of the Common Language Runtime: At execution time, we don't have C# objects, Eiffel objects or Visual Basic objects; we have .NET citizens with no distinction of race, religion or language origin.

In the previous code sample, if we don't tell the runtime that REGISTRAR is an Eiffel class, how is it going to find that class? Simple: namespaces. Here's the beginning of the Eiffel class text of REGISTRAR:

indexing
    description: "[
        Registration services for a conference; include adding
        new registrants and new registrations.
    ]"
    dotnet_name: "Conference_registration.REGISTRAR"

class REGISTRAR

inherit
    WEB_SERVICE

create
    start

feature -- Initialization

    start is
            -- Set empty error message.
        do
            set_last_operation_successful (True)
            set_last_error_message ("No Error")
            set_last_registrant_identifier (-1)
        end

    ... Other features ...

The line preceded by dotnet_name says: "To the rest of the .NET world, this class shall be part of the namespace Conference_registration, where it shall be known under the name REGISTRAR."
This enables the Eiffel compiler to make the result available in the proper place for the benefit of client .NET assemblies, whether they originated in the same language or in another one. Now reconsider the beginning of the ASP.NET page shown earlier:

<%@ Assembly ... The rest as before ...

The second line says to import the namespace Conference_registration, and that does the trick. A namespace is an association between class names, a way of saying "The class name A denotes that code over there, and the class name B denotes this other code here." In that association, the class name REGISTRAR will denote the Eiffel class above, since we took care of registering it under that name in the dotnet_name entry of its indexing clause.

The basic technique will always be the same:

- When you compile one or more classes written in language L1, you specify the namespaces into which they will be compiled and the final names that they must retain in that language.
- When you write a system in a language L2 (the same as L1, or another one), you specify one or more namespaces to "import"; they will define how to understand any class name to which your system may refer.

The details may vary depending on the languages involved. On the producer side, L1, you may retain the original class names or, as in the preceding Eiffel example, explicitly specify an external class name. On the consumer side, you may have mechanisms to adapt the names of external classes and their features to the conventions of L2. Some flexibility is essential here, since what's acceptable as an identifier in one language may not be in another: Visual Basic, for example, accepts a hyphen in a feature name, as in my-feature, but most other languages don't, so you'll need some convention to accept the feature under a different name. What's important is that you can have access to all the classes and features from any other .NET language.
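The producer/consumer mechanics just described can be pictured, in language-neutral terms, as a name-to-class registry. Here is a minimal Python sketch of that idea (an analogy only, not the actual CLR mechanism; the class and namespace names mirror the article's example, while register/resolve are invented for illustration):

```python
# Toy model of a namespace: an association between class names and code.
# A producer registers a class under an external name; a consumer looks
# the class up by that name, never caring which language produced it.

namespaces = {}  # namespace name -> {external class name: class object}

def register(namespace, external_name, cls):
    """Producer side: publish cls under external_name in namespace."""
    namespaces.setdefault(namespace, {})[external_name] = cls

def resolve(namespace, external_name):
    """Consumer side: fetch whatever class was registered under that name."""
    return namespaces[namespace][external_name]

# Producer (think: the Eiffel compiler acting on a dotnet_name entry).
class Registrar:
    def start(self):
        self.last_error_message = "No Error"

register("Conference_registration", "REGISTRAR", Registrar)

# Consumer (think: the C# code on the ASP.NET page, after importing
# the Conference_registration namespace).
cls = resolve("Conference_registration", "REGISTRAR")
conference_registrar = cls()
conference_registrar.start()
```

The consumer side never mentions which language produced REGISTRAR; it only knows the registered name, which is the whole point of the namespace mapping.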
Combining Different Language Models

How does the interoperability work in practice? The first key idea is to map all software to the .NET Object Model. Once compiled, classes don't reveal their language of origin. Starting from a source language, the compiler will map your programs into a common target, as shown in "Combining Different Language Models." This by itself isn't big news, since we could use the same figure to explain how compilers map various languages to the common model of, say, the Intel architecture. What is new is that the object model, as we've seen in detail, retains high-level structures such as classes and inheritance that have direct equivalents in source programs written in modern programming languages, especially object-oriented ones. This is what allows modules from different languages to communicate at the proper level of abstraction, by exchanging objects, all of which, as .NET objects, are guaranteed to have well-understood, language-independent properties.

Object Model Discrepancies

Of course, the various source languages do not all share the .NET object model to begin with. The case of non-OO languages is the most obvious: Right from the initial announcements, .NET has included languages like APL and Fortran, which no one would accuse of being object oriented. Even if we restrict our attention to object-oriented languages, we'll find discrepancies. Each has its own object model; while the key notions (class, object, inheritance, polymorphism, dynamic binding) are common, individual languages depart from the .NET model in some significant respects:

- Eiffel and C++ allow multiple inheritance; the .NET object model (as well as Java, C# and Visual Basic .NET) permits a class to inherit from only one class, although it may inherit from several interfaces.
- Eiffel and C++ each support a form of genericity (type parameterization): You can declare an Eiffel class as LIST [G], to describe lists of objects of an arbitrary type G, without saying what G is; then you can use the class to define types LIST [INTEGER], LIST [EMPLOYEE], or even LIST [LIST [INTEGER]]. C++'s templates pursue a similar goal. This notion is unknown to the .NET object model, although planned as a future addition; currently, you have to write a LIST class that will manipulate values of the most general type, Object, and then cast them back and forth to the types you really want.
- The .NET object model permits in-class overloading: Within a class, a single feature name may denote two or more features. Several languages disallow this possibility as incompatible with the aims of quality object-oriented development.

These object model discrepancies raise a serious potential problem: How do we fit different source languages into a common mold? There are two basic approaches: Either change the source language to fit the model, or let programmers use the language as before, and provide a mapping through the compiler. No absolute criterion exists: Both approaches are found in current .NET language implementations. C++ and Eiffel for .NET provide contrasting examples.

The Radical Solution

C++ typifies the Procrustean solution: Make the language fit the model. To be more precise, on .NET, the name "C++" denotes not one language, but two: Unmanaged and Managed C++. Classes from both languages can coexist in an application: Any class marked __gc is managed; any other is unmanaged. The unmanaged language is traditional C++, far from the object model of .NET; unmanaged classes will compile into ordinary target code (such as Intel machine code), but not to the object model. As a result, they don't benefit from the Common Language Runtime and lack the seamless interoperability with other languages. Only managed classes are full .NET players.
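The cast-heavy style that the absence of genericity forces (a container of the most general type, with casts on every access) has a shape worth seeing side by side with a properly parameterized container. A Python sketch of the contrast, using the typing module as a stand-in (the class names are invented; this illustrates the pattern, not any .NET API):

```python
from typing import Generic, List, TypeVar, cast

T = TypeVar("T")

# Without genericity: the container holds the most general type, and
# every read needs a cast back to the type the client "really wants".
class ObjectList:
    def __init__(self) -> None:
        self._items: List[object] = []

    def add(self, item: object) -> None:
        self._items.append(item)

    def item(self, i: int) -> object:
        return self._items[i]

ints = ObjectList()
ints.add(3)
n = cast(int, ints.item(0)) + 1   # a cast on every access

# With genericity: the element type is a parameter; no casts needed,
# and a type checker can reject ints_typed.add("oops") statically.
class TypedList(Generic[T]):
    def __init__(self) -> None:
        self._items: List[T] = []

    def add(self, item: T) -> None:
        self._items.append(item)

    def item(self, i: int) -> T:
        return self._items[i]

ints_typed: TypedList[int] = TypedList()
ints_typed.add(3)
m = ints_typed.item(0) + 1        # checker already knows this is an int
```

Both versions compute the same result; the difference is where the type knowledge lives, in the client's casts or in the container's declaration.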
But if you then look at the specifications for managed classes, you'll realize that you're not in Kansas any more (assuming, for the sake of discussion, that Kansas uses plain C++). On the "no" side, there's no multiple inheritance except from (you guessed it) completely abstract classes, no support for templates, no C-style type casts. On the "yes" side, you'll find new .NET mechanisms such as delegates (objects representing functions) and properties (fields with associated methods). If this sounds familiar, that's because it is: Managed C++ is very close to C#, in spite of what the default Microsoft descriptions would have you believe. Predictably, the restrictions also rule out any cross-inheritance between managed and unmanaged classes.

The signal to C++ developers is hard to miss: The .NET designers don't think too highly of the C++ object model and expect you to move to the modern world as they see it. The role of Unmanaged C++ is simply to smooth the transition by allowing C++ developers to move an application to the managed side one class at a time. An existing C++ application will compile straight away as unmanaged. Then you'll try declaring specific classes as managed. The compiler will reject those that violate the rules of the managed world, for example, by using improper casts; the error messages will tell you what you must correct to turn these classes into proper citizens of the managed world. For C++, this is indeed a defensible policy, as the language's object model (defined to a large extent by the constraint of backward compatibility with C, a language more than three decades old) is obsolete by today's standards.

Respecting Other Object Models

Only time will tell how successful the .NET strategy will be at convincing C++ programmers to move over to the managed world. But even if they wholeheartedly comply, it won't mean that other languages should follow the same approach.
This is particularly true of object-oriented languages that have their own views of what OO should be, with perhaps better arguments than C++. If you've chosen a language precisely because it supports such expressive mechanisms as multiple inheritance, Design by Contract and genericity, do you have to renounce them and step down to the lowest common denominator once you decide to use .NET? Fortunately, the answer is no, at least not if "you" here means the programmer. The scheme described in "Combining Different Language Models" doesn't require that all languages adhere to the .NET object model, but rather that they map to that model. That mapping can be made the responsibility of compilers rather than programmers, enabling programming languages to retain their normal semantics, and establishing a correspondence between the specific semantics of each language and the common rules of the common object model. Tune in next issue and discover how this all works out.
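One concrete flavor of the compiler-side mapping described above: a compiler for a multiple-inheritance language targeting a single-inheritance object model can keep one parent as the real base class and turn the others into delegation. A speculative Python sketch of that transformation (the class names are invented; this is one plausible mapping strategy, not a description of any actual .NET compiler):

```python
# Source-language view (conceptually): class Amphibian inherits from
# both Car and Boat. Target-model view: only single inheritance is
# allowed, so the "compiler" keeps Car as the base class and rewrites
# the Boat parent as a hidden component plus forwarding methods.

class Car:
    def drive(self) -> str:
        return "driving"

class Boat:
    def sail(self) -> str:
        return "sailing"

class Amphibian(Car):                 # only one real parent allowed
    def __init__(self) -> None:
        self._boat_part = Boat()      # second parent becomes a component

    def sail(self) -> str:            # generated forwarding method
        return self._boat_part.sail()

a = Amphibian()
```

Clients of Amphibian still see both drive and sail, exactly as the source language promised; only the plumbing underneath changed to satisfy the target model.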
http://www.drdobbs.com/polyglot-programming/184414854
TL;DR: When a module is imported into a script, the module's code is run when the script is run. This is useful for unit testing.

Like most programming languages, Python has special variables. A peculiar special variable in Python is the __name__ variable. When you have a script that can act as both a module and a script, you'd probably need this conditional statement. One thing about Python modules is that if a module is imported into a script, all the code in that module is run when the script is run.

module.py

print("I am module.py")

def func1():
    return "The first function was called"

if __name__ == "__main__":
    print(func1())

# When this module (or script) is run, this would be the output
I am module.py
The first function was called

script.py

import module

print("I would be printed after 'I am module.py'")

# When this script is run, this would be the output
I am module.py
I would be printed after 'I am module.py'
# Note: func1 wasn't called

Now, let's assume we have a script with useful utility functions. We want to be able to test our script and also export it as a module. We would put our unit tests in the conditional if __name__ == "__main__". Try this out yourself. Investigate it. Then learn about unit testing in Python.

Thanks for reading.
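The "unit tests in the conditional" idea the post ends with can look like the sketch below (func1 comes from the post's own example; the file name utils.py and the test assertions are my illustration):

```python
# utils.py: usable as a module (import utils) and testable as a script
# (python utils.py). The tests live under the __name__ guard, so they
# never run when the module is imported by other code.

def func1():
    return "The first function was called"

if __name__ == "__main__":
    # Minimal self-tests; for anything bigger, reach for the unittest
    # module and call unittest.main() here instead.
    assert func1() == "The first function was called"
    assert isinstance(func1(), str)
    print("all tests passed")
```

Running `python utils.py` executes the assertions and prints "all tests passed"; `import utils` elsewhere only defines func1 and stays silent.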
https://practicaldev-herokuapp-com.global.ssl.fastly.net/vicradon/the-if-name-main-conditional-in-python-4b4g
Talk:United States/Bicycle Networks

USBR 25 in Ohio

As far as I can tell, USBR 25 in Ohio hasn't been given any attention by ODOT. However, the regional planning commissions and local governments in southwest Ohio have assumed for years that it'll use the Little Miami Scenic Trail and/or Great Miami River Recreation Trail, both of which conveniently fall within ACA's route 25 corridor in that part of the state. [1] – Minh Nguyễn (talk, contribs) 06:30, 24 April 2014 (UTC)

UGRR and UGR

Minh, my intent here was to express UGRR primarily and UGR in the specific expected, intended badge rendering sense of how OpenCycleMap renders. Call it "tagging for the wiki page," if you must. It works sometimes to get a call and response, as I seem to have done here. And spill into Discussion, so here it is. Worded that way ("often abbreviated UGRR or UGR") it does walk up to an ambiguity or underspecification of how I meant all that. I meant "UGRR often, and UGR as an OCM 3-character (turquoise, if I am being specific about color) badge/shield." In this project, the one we frequently see using that particular renderer. Writing into a wiki is a way of uttering consensus. If some chalk line remains dusty, let's clear it up. So, UGRR almost every single place I've seen it abbreviated, we agree. UGR as an internal placeholder reality consensus along-the-way mention as to what we do and shall see in OCM as a shield (as Andy told me OCM shield alphanumerics max at three chars). Briefly, it's asserting partly that UGRR is effectively truncated to UGR in OCM renderings AND it's asserting partly some projection of the three characters U, G and R as meaning a particular something in the rcn=* namespace. Yes, they mean "The Underground Railroad Bicycle Route" and I think the tags are in good shape, as I continue to listen. It looks like the wiki says UGRR and UGR (once), then with your changes, uses UGRR as the only abbreviation after that.
I'm perfectly OK with that, as "we have all four characters to use" (in wiki-world) rather than the restricted-to-three-characters namespace of rcn. That's all that was. stevea @ 05:30, 4 July 2015 (UTC)

- Hi Steve, even though OCM is the preeminent OSM-based cycle renderer, we should still prefer to tag real-world usage over the limitations of a particular renderer. Andy may have good reasons for limiting OCM's badges to three characters, but there certainly are renderers that can handle four. Moreover, there are plenty of bicycle routes that have longer abbreviations, such as the Great Miami River Recreational Trail (GMRRT). (That particular trail has a number, which I've tagged instead, but imagine the state of things before that trail became part of a regional route network.) So anyways, that's why I made that edit. :^) – Minh Nguyễn (talk, contribs) 22:58, 4 July 2015 (UTC)

Quasi-national details

We might consider Underground Railroad Bicycle Route (UGR) as a next "national in scope" route candidate to be tagged quasi-national, as at least one state (Ohio) leans towards promoting its UGR segment to USBR 25. Presently, UGR is denoted as several statewide network=rcn routes. As USBR 25 emerges from a UGR segment becoming an approved USBR, should we promote remaining UGR network=rcn segments (>2000 miles) into a named quasi-national network=ncn route super-relation, and include newly-numbered USBR 25 as a member of that new UGR super-relation, similar to how USBR 45 and 45A in Minnesota overlap the identical Minnesota segment of the MRT super-relation? (As Minnesota wishes to brand both MRT and 45 at the same time, these two "separate but identical" routes are an accommodation; they are actually duplicate relations containing identical members.)
No, this is not a correct approach: UGR is private (not quasi-private, like MRT), so by convention (private routes like ACA's are purposefully "suppressed" in the network hierarchy to no higher than regional) UGR should not be promoted to quasi-national. Please see the USBRS Discussion page. However, should other states where MRT (but not private UGR) members gain AASHTO approval for their states' segments (if they do), those MRT (but not UGR) state relations can be newly numbered and "trade places" from named to numbered in the named, quasi-national route. This is a method by which a named quasi-national route can be replaced by a numbered USBR route, one state at a time. Curiously, as USBR 7 is one of the earliest USBRs to achieve full completion (in all three states of Vermont, Massachusetts and Connecticut), it did not "replace" WNEG, as WNEG also exists as a "separate but identical" (to USBR 7) super-relation. This precedes another super-relation potentially including as members both WNEG and Route Verte segments in Canada, when it then makes sense to tag this new route as international (network=icn), diverging from the USBR 7 super-relation which will remain wholly within the US as a distinctly national route. So, named regional routes which may become quasi-national devoid of segments containing USBR numbered routes don't get tagged network=ncn unless and until one state in the named route "goes first" by gaining a USBR. Then, only as and if other states follow, the route converts (state at a time) from named quasi-national super-relation to numbered national USBR super-relation. This cautious process respects the state-by-state growth of the USBRS and helps prevent the map from "getting ahead of routing." OSM contains a few bicycle routes published by Adventure Cycling Association (ACA), a national (US focused) non-governmental membership organization promoting long-distance bicycle travel. 
ACA routes in OSM (correctly tagged cycle_network=US:ACA) are now largely tagged "regional" (network=rcn) as they span entire states and frequently cross state lines, so for some routes it may seem to be more correct to characterize scope as quasi-national and promote to network=ncn. But, as ACA route data are private (copyrighted by ACA, not government-published), these actually should not be entered into OSM at all, though a few have been. Unfortunately, these often represent older route data, since improved by ACA, resulting in OSM containing obsolete data. The Discussion page offers a forum to discuss whether these remain network=rcn, or if OSM gets permission to enter them, whether we promote to network=ncn as quasi-national routes, or promote to network=icn when routes cross an international border. A broader topic is whether OSM keeps ACA routes at all, as doing so violates ODBL. Updating ACA routes in OSM (as frequently as ACA updates them) costs significant ongoing editing effort. The quest is on to better clarify this: might ACA accept OSM volunteer editing efforts, provided they meet certain standards? OSM-US and ACA could discuss this topic further, but for now, "only about 2.6" (out of two dozen) ACA routes are entered into OSM. So while it isn't overwhelming, this does get discussed between ACA and OSM, with a consensus that things are presently OK: if ACA routes remain as private (proprietary) routes, OSM minimizes them in the network hierarchy (to regional) to avoid confusion with established national namespaces (the USBRS and what OSM calls quasi-national, quasi-private routes). Another route may seemingly resonate with "quasi-national bicycle" semantics in OSM: American Discovery Trail (ADT). Described as "the first coast to coast, non-motorized trail," ADT is open to hikers and to an only slightly lesser degree, bicyclists and equestrians. 
As ADT's "sponsor" ADTS now publishes them, route data seem incompatible with OSM's ODBL: ADTS' "Data Books" cost money and the order page says "not to be posted on the Internet in any form." However, if an OSM volunteer were to ride or hike ADT segments and upload a GPX track, those data may be compatible with OSM's ODBL. Investigation continues while respecting ADTS's request to not upload their published data to "the Internet in any form." (ADT is in OSM in Iowa and parts of Ohio and West Virginia). ADTS proposed legislation (the National Discovery Trails Act, or NDTA) to add ADT to the United States Department of Interior's National Park System endeavor, National Trails System (NTS, a network of scenic, historic, and recreation trails created by the National Trails System Act of 1968). If NDTA becomes law, ADT route data become public domain. In 2014 NDTA was introduced in both the US House and Senate. The bill has passed the Senate three times, and on July 13, 2015, bipartisan legislation to make this happen — NDTA or H.R. 2661 — has been introduced in Congress by Rep. Jeff Fortenberry of Nebraska with Rep. Jared Huffman of California as lead co-sponsor. A WikiProject enters and updates long-distance hiking trails: NTS trails and "Other Interstate Trails" (including ADT). Roads and highways suitable for passenger car travel are not eligible for designation as National Recreation Trails in the NTS: while NTS trails are primarily for hiking, some allow additional travel modalities such as bicycles, equestrians, snowmobiles, roller-skates/blades, or all-terrain vehicles. In OSM, the network=* tag is used for Walking Routes, Cycle routes and many other routes, so use separate relations (for hiking, bicycling, equestrians, with appropriate network=* tag) for each segment (nwn, ncn, nhn...) where a particular travel modality is allowed. Note that network=ncn implies pavement for a road bike, not a mountain bike. 
With ADT, this likely means a relation which skips a network=* tag altogether and tags route=mtb for those segments where mountain biking is permitted (no pavement). Therefore, while ADT may indeed be semantically "national scope" as an off-road/no-pavement bicycle route, no characterization for a network=* tag is necessary since this tag is only used with route=bicycle, not route=mtb. If entered, ADT will likely be a very long route=mtb, and so not strictly categorized as quasi-national, as it is not a route=bicycle. stevea @ 22:34, 14 September 2016 (UTC)

History of the USBRS, route by route

The U.S. Bicycle Route System (USBRS) began to be established as a national numbered bicycle network in the 1970s. Circa late 1970s/early 1980s, the American Association of State Highway and Transportation Officials (AASHTO) formally inaugurated the USBRS, which originally consisted of two routes:

- USBR 1 in North Carolina and Virginia and
- USBR 76 in Virginia.

The System languished between the mid-1980s and 1990s. A "National Corridor Plan" was developed during the 2000s, allowing each of the fifty states of the USA to harmoniously develop USBRs using a cohesive national numbered grid of planned route corridors and a regularized numbering protocol (east-west routes are even-numbered, north-south routes are odd-numbered, spur/belt/alternate routes are preceded by a hundreds-place digit or suffixed with "A").

May 2011 saw the first major expansion of the nascent system. Five new parent routes, two child routes, and one alternate route were created, along with modifications to the existing routes in Virginia and the establishment of USBR 1 in New England. stevea @ 23:03, 14 September 2016 (UTC)

- U.S. Bicycle Route 1 now has an additional segment through Maine and New Hampshire,
- U.S. Bicycle Route 1A is a sea-side alternate route for USBR 1 in Maine,
- U.S. Bicycle Route 8 runs from Fairbanks, Alaska, along the Alaska Highway, to the Canadian border,
- U.S. Bicycle Route 20 runs from the Saint Clair River through Michigan to Lake Michigan,
- U.S. Bicycle Route 76 was extended westward through Kentucky and Illinois,
- U.S. Bicycle Route 87 follows the Klondike Highway from the Alaska Marine Highway terminal in Skagway to the Canadian border,
- U.S. Bicycle Route 95 follows the Richardson Highway from Delta Junction, Alaska to the Alaska Marine Highway terminal in Valdez,
- U.S. Bicycle Route 97 runs from Fairbanks, through Anchorage, to Seward, Alaska,
- U.S. Bicycle Route 108 runs from its parent route in Tok, Alaska to Anchorage and
- U.S. Bicycle Route 208 follows the Haines Highway from the Alaska Marine Highway terminal in Haines to the Canadian border.

In May 2012 Michigan added:

- U.S. Bicycle Route 35 from the Canadian border at Sault Ste. Marie southerly to the Indiana state line near New Buffalo.

The Mississippi River Trail (MRT) is signed similar to the USBRS (see below), though MRT is not strictly part of USBRS. However, in May 2013 Minnesota received AASHTO approval for its MRT segments to become:

- U.S. Bicycle Route 45 from the Iowa state line north of New Albin, Iowa north through Minneapolis to Brainerd, then northeasterly to Jacobson, then westerly to Bemidji and southerly to near Lake George, and
- U.S. Bicycle Route 45A from Brainerd northerly to Cass Lake.

Minnesota uses both MRT and 45 designations, keeping the MRT brand while blending in the newer USBR 45. MRT may continue (as it did in Minnesota) to become USBR 45 in other states, though there are currently no active proposals to do so.

In May 2013 Missouri added:

- U.S. Bicycle Route 76 from Claryville westward through St. Mary, Farmington, Pilot Knob, Centerville, Eminence, Houston, Marshfield, Ash Grove and Golden City to Kansas.

In October 2013 Tennessee and Maryland (respectively) added:

- U.S. Bicycle Route 23 from the Kentucky state line at Kentucky's Mammoth Cave state bicycle route, south via Robertson County, White House, Nashville and Franklin to Ardmore, Alabama and
- U.S. Bicycle Route 50 as the Maryland segments of the Chesapeake & Ohio Bicycle Trail and the Great Allegheny Passage Trail.

In May 2014 Massachusetts, Washington (state), Illinois (36 & 37), the District of Columbia and Ohio (respectively) added:

- U.S. Bicycle Route 1 from the Museum of Science in Boston westerly along the Paul Dudley White/Charles River Path to Auburndale Park in Newton, then from West Street in Everett northeasterly along the Northern Strand Community Trail/Bike to the Sea to Lincoln Avenue in Saugus,
- U.S. Bicycle Route 10 from the Idaho state line westerly, primarily along Washington State Route 20 to Anacortes, then via ferry to Friday Harbor and Sidney, British Columbia,
- U.S. Bicycle Route 36 south from Buckingham Fountain in Chicago to Eggers Wood to connect eastward to Indiana,
- U.S. Bicycle Route 37 north from Buckingham Fountain in Chicago to the Wisconsin state line at Robert McClory Trail,
- U.S. Bicycle Route 50 along the C&O Canal path to Maryland and
- U.S. Bicycle Route 50 from Indiana near New Westville easterly through Lewisburg, Brookville, Dayton, Xenia, Cedarville, London, Indian Ridge Area, Columbus, northerly to Westerville, then easterly through New Albany, Scott Corners, Newark and Steubenville into West Virginia.

In December 2014 Florida (1 & 90/90A), Massachusetts, Virginia (1 & 76), Michigan and Maryland (respectively) added, updated or realigned:

- U.S. Bicycle Route 1 from Key West to Jacksonville, ending at the Georgia state line,
- U.S. Bicycle Route 1 with two new segments: a northerly one through Salisbury and Newburyport and a southerly one through Topsfield, Wenham, Danvers and Peabody,
- U.S. Bicycle Route 1 provides a safer and more reliable cyclist route through Fort Belvoir, Mount Vernon and Old Town Alexandria, ending at the 14th Street Bridge in Washington DC,
- U.S. Bicycle Route 10 connects the eastern and central portions of Michigan's Upper Peninsula: its eastern terminus with USBR 35 in Saint Ignace travels west to Iron Mountain near Wisconsin,
- U.S. Bicycle Route 11 from the Pennsylvania state line northwest of Hagerstown to Harpers Ferry, West Virginia,
- U.S. Bicycle Route 76 north of Lexington to near Vesuvius,
- U.S. Bicycle Route 90 from the Alabama state line to Florida's Atlantic Coast in Butler Beach, just south of Saint Augustine and
- U.S. Bicycle Route 90A as an alternate to USBR 90 around Pensacola.

In May 2015 Idaho, Minnesota and Utah (respectively) added or realigned:

- U.S. Bicycle Route 10 from Oldtown at the Washington state line eastward through Sandpoint and Clark Fork to Montana,
- U.S. Bicycle Route 10A as various belts offering alternate routes through Idaho,
- U.S. Bicycle Routes 45 and 45A with some minor realignments,
- U.S. Bicycle Route 70 from Colorado on US 491 westerly through Monticello and Blanding, crossing the Colorado River and serving Hanksville, Torrey, Escalante, Henrieville, Cannonville, Tropic Junction, Bryce Canyon Junction and Panguitch to Cedar City and
- U.S. Bicycle Route 79 from Cedar City northerly to Minorsville and Milford then westerly and northwesterly to Garrison and Nevada near Baker and US 6.

In September 2015 Vermont, Georgia, Indiana, Ohio, Kansas and Arizona (respectively) added:

- U.S. Bicycle Route 7 from Canada to Massachusetts via Burlington southward,
- U.S. Bicycle Routes 21, 321 and 521 from Atlanta to Tennessee and as two spurs,
- U.S. Bicycle Routes 35, 35A, 36 and 50 from Kentucky via Clarksville to Michigan via Hesston, in Indiana's Hamilton, Marion, Shelby and Clark counties, Illinois via Highland to Michigan via Town of Pines and Illinois via Terre Haute to Ohio via Richmond,
- U.S. Bicycle Route 50A as an alternate route around Alexandria and Westerville,
- U.S. Bicycle Route 76 from Missouri via Pittsburgh to Colorado via Scott City and
- U.S. Bicycle Route 90 from New Mexico through Tucson and Phoenix to California.

In June 2016 Connecticut, Massachusetts, Idaho, Virginia and Georgia (respectively) added or realigned:

- U.S. Bicycle Route 7 from the junction of East Coast Greenway and Western New England Greenway (Westport) north through Danbury, New Milford, Bulls Bridge, Kent, Cornwall Bridge, West Cornwall, Falls Village and west of Canaan to Massachusetts,
- U.S. Bicycle Route 7 from the Connecticut border northward through Ashley Falls, Sheffield, Great Barrington, Stockbridge, Lenox, around Pittsfield to Cheshire, Adams, North Adams, and Williamstown to Vermont,
- U.S. Bicycle Route 10 with minor route realignments around Sandpoint and Ponderay/Kootenai,
- U.S. Bicycle Route 176 as a belt off of USBR 76 to USBR 1 and
- U.S. Bicycle Route 621 as a spur off of USBR 21.

International routes additionally designated quasi-national

One network=icn route=bicycle, ISL, is also tagged cycle_network=US;CA to denote that it contains components in both countries. This tagging is experimental and may be modified or eliminated. – Stevea (talk, contribs) 21:38, 13 November 2016 (UTC)
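The numbering protocol summarized earlier in this section (even = east-west, odd = north-south, a hundreds-place digit for a spur/belt child, an "A" suffix for an alternate) is mechanical enough to sketch in code. A toy Python classifier, offered only as my own illustration of those rules, not any official tool:

```python
def classify_usbr(designation: str) -> dict:
    """Classify a USBR designation such as '76', '45A' or '208'.

    Rules per the corridor-plan summary above:
      * even parent numbers run east-west, odd run north-south;
      * an 'A' suffix marks an alternate route;
      * a hundreds-place digit marks a spur/belt child of the
        two-digit parent (e.g. 208 is a child of USBR 8).
    """
    alternate = designation.endswith("A")
    number = int(designation.rstrip("A"))
    child = number >= 100
    parent = number % 100 if child else number
    return {
        "parent": parent,
        "orientation": "east-west" if parent % 2 == 0 else "north-south",
        "child_route": child,
        "alternate": alternate,
    }
```

For instance, classify_usbr("208") reports a child of east-west parent 8, which matches the Haines Highway route listed above.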
http://wiki.openstreetmap.org/wiki/Talk:United_States/Bicycle_Networks
Colossus is the successor to the Google File System (GFS) as mentioned in the recent paper on Spanner on OSDI 2012. Colossus is also used by spanner to store its tablets. The information about Colossus is slim compared with GFS which is published in the paper on SOSP 2003. There is still some information about Colossus on the Web. Here, I list some of them. Storage Architecture and Challenges On Faculty Summit, July 29, 2010, by Andrew Fikes, Principal Engineer. The slides. Some interesting points: - Storage Software: Colossus - Next-generation cluster-level file system - Automatically sharded metadata layer - Data typically written using Reed-Solomon (1.5x) - Client-driven replication, encoding and replication - Metadata space has enabled availability analyses - Why Reed-Solomon? - Cost. Especially w/ cross cluster replication. - Field data and simulations show improved MTTF - More flexible cost vs. availability choices GFS: Evolution on Fast-forward An interview with Google’s Sean Quinlan by the Association for Computer Machinery (ACM). Some important info: - “We also ended up doing what we call a “multi-cell” approach, which basically made it possible to put multiple GFS masters on top of a pool of chunkservers.” - “We also have something we called Name Spaces, which are just a very static way of partitioning a namespace that people can use to hide all of this from the actual application.” … “a namespace file describes” - “The distributed master certainly allows you to grow file counts, in line with the number of machines you’re willing to throw at it.” … “Our distributed master system that will provide for 1-MB files is essentially a whole new design. That way, we can aim for something on the order of 100 million files per master. 
You can also have hundreds of masters."
- BigTable "as one of the major adaptations made along the way to help keep GFS viable in the face of rapid and widespread change."

Google File System II: Dawn of the Multiplying Master Nodes

Comments on GFS2 (Colossus) by Cade Metz in San Francisco. The article and some excerpts. This page is linked by a director from Google as a reference for Colossus!
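The "Reed-Solomon (1.5x)" figure in the slides can be checked with a quick calculation. As a hedged sketch (the (9,6) chunk split is an assumption; the slides only quote the 1.5x ratio), an erasure code that stores 3 parity chunks for every 6 data chunks costs 1.5x the raw data size, versus 3x for plain triple replication:

```python
def storage_overhead(total_chunks: int, data_chunks: int) -> float:
    """Raw bytes stored per byte of user data for an erasure-coded layout."""
    return total_chunks / data_chunks

# Assumed Reed-Solomon layout: 6 data chunks + 3 parity chunks = 9 total.
rs_9_6 = storage_overhead(9, 6)       # 1.5x, matching the figure in the slides
replication = storage_overhead(3, 1)  # 3-way replication costs 3.0x
print(rs_9_6, replication)            # 1.5 3.0
```

This is why the slides cite cost as the first reason for Reed-Solomon: the same durability class at roughly half the raw storage of replication.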
https://www.systutorials.com/3202/colossus-successor-to-google-file-system-gfs/
[reportlab-users] underlines and horizontal spacing

Henning von Bargen H.vonBargen at t-p.com
Fri Oct 31 04:30:18 EDT 2008

Previous message: [reportlab-users] limits of KeepTogether() ?
Next message: [reportlab-users] Add comments to PDF?
Messages sorted by: [ date ] [ thread ] [ subject ] [ author ]

Chris Foster wrote:
> I'm using intra-paragraph markup (<u>...</u>) for underlines and my
> users aren't happy with the line extending into the space between
> words. (It actually looks OK to me, but they're picky.) Is there a
> better way to do underlining or maybe get more spacing between words?

You might try the Paragraph class from wordaxe.rl.NewParagraph (which will also be used by default if you say from wordaxe.rl.Paragraph import Paragraph) from the wordaxe-0.3.0 release ( ). This implementation handles spaces explicitly, whereas the reportlab.platypus.Paragraph implementation only stores the words (not the spaces in between).

Using the wordaxe implementation, you can use

Paragraph("<u>Only</u> <u>words</u> <u>are</u> <u>underlined</u>.", style)

This is untested, but should work as expected. For more spacing between words, you probably have to do some coding, because the current implementation converts multiple spaces to a single space. Or you could use a transparent inline image with <img>. Take a look at Dinu Gherman's alternative paragraph implementation, too.

Henning
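In the spirit of Henning's suggestion, a small hypothetical helper (plain Python, not part of reportlab or wordaxe) can generate the per-word markup automatically, so the underline rule never spans the inter-word spaces:

```python
def underline_words(text: str) -> str:
    """Wrap each word in its own <u>...</u> so underlines stop at spaces."""
    return " ".join("<u>%s</u>" % word for word in text.split())

markup = underline_words("Only words are underlined.")
print(markup)  # <u>Only</u> <u>words</u> <u>are</u> <u>underlined.</u>
```

The resulting string can then be passed to a Paragraph as shown in the message above.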
http://two.pairlist.net/pipermail/reportlab-users/2008-October/007618.html
I think everyone at some point in time wants to embed a break point in their code, whether it be for debugging purposes, path tracing, or detecting edge conditions that have not yet been tested. When I hit a break point, I would prefer that the debugger break in at the call frame which needs the break point and not another function which implements the break point. I have seen two different patterns over the years.

The first pattern is to call DbgBreakPoint(). This works well enough. It is portable across different processor architectures so you don't have to worry about new platforms, but it has one major problem. When you break into the debugger, you are at the wrong call site! You end up in the middle of DbgBreakPoint() itself and not the caller. You can use the gu command, but that requires typing ;).

The second pattern is to create a #define for some inline assembly, something akin to:

#define TRAP __asm { int 3 }

This satisfies my requirement that the point of execution when we stop is in the function of interest, but this solution is platform specific. You would need enough knowledge per supported platform for this to work, and that platform must support inline assembly (which x64 does not for our compilers). You could compromise and #define TRAP to DbgBreakPoint() for those platforms, but then you have a satisfactory solution on only a subset of platforms.

Enter the __debugbreak() intrinsic that I just learned about this weekend from the OSR WinDBG mailing list (courtesy of Pavel A.). Yes, it is a Microsoft specific extension, but generating a break point is inherently platform specific already. __debugbreak() is the best of both patterns. You get platform independence, and when you break into the debugger, you are sitting at the right call frame and not inside a system routine. That rocks in my book!

PS: I don't know in which version of the compiler this intrinsic was introduced, but it is used in the Server 2003 SP1 DDK, so I know it has been implemented for a while.
PPS: I never call __debugbreak() in production code, and IMHO, neither should you. To control this in a DDK build environment, I do the following:

#if DBG
#define TRAP() __debugbreak()
#else  // DBG
#define TRAP()
#endif // DBG
http://blogs.msdn.com/b/doronh/archive/2006/05/08/592735.aspx
Hi Peter,

I'm trying to access the list of targets to be executed, not the list of ALL targets in the build file. I'd already looked at the javadocs and the source code.... I tried using getTargets(), but that returns a list of all targets in the project's build file. Obviously, if I had the basic list, I could use topoSort(...) to resolve dependencies, but I'm aware of that, and that's not the problem here.

The list of targets being executed seems to be stored as local variables within method calls, and is not stored as an instance variable in "Project.java". In the source code for Ant 1.5.3, in line 609 of org/apache/tools/ant/Main.java, a private instance variable (Vector targets) is passed to the "executeTargets" method of Project, but these targets are not exposed either in "Main.java" or "Project.java", so I can't tell if a specific target is going to be called (unless of course a BuildException occurs). Furthermore, I don't think it's wise to access "Main.java", as it's possible to execute targets in a project directly using the API without using "main" (I think so anyway, I've never tried).

Thanks anyway,
Chris

----- Original Message -----
From: "peter reilly" <peter.reilly@corvil.com>
To: "Ant Users List" <user@ant.apache.org>
Sent: Friday, May 23, 2003 6:33 PM
Subject: Re: How can I determine which targets will be executed, using the Ant API?

The target info is maintained per project in the Project object's targets hashmap; Project uses "topoSort()" to sort the targets. These are publicly visible. The following task uses this information. This does not deal with targets generated using ant or antcall.
import org.apache.tools.ant.Project;
import org.apache.tools.ant.Task;
import org.apache.tools.ant.Target;
import org.apache.tools.ant.BuildException;
import java.util.*;

public class TargetDepends extends Task {
    private String target;

    public void setTarget(String target) {
        this.target = target;
    }

    public void execute() {
        if (target == null) {
            throw new BuildException("Need to specify target");
        }
        Vector dependList = getProject().topoSort(target, getProject().getTargets());
        log("Target [" + target + "] depends on");
        for (int i = 0; i < dependList.size(); ++i) {
            Target t = (Target) dependList.get(i);
            log(" - " + t.getName());
            if (t.getName().equals(target)) {
                break;
            }
        }
    }
}

Peter

On Friday 23 May 2003 13:39, Chris Brown wrote:
> Hello,

---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscribe@ant.apache.org
For additional commands, e-mail: user-help@ant.apache.org
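For readers without an Ant project at hand, the dependency resolution that topoSort() performs can be sketched in plain Java. This is a hedged illustration with invented target names, not Ant's actual implementation: a depth-first walk emits each target after its dependencies.

```java
import java.util.*;

public class TopoDemo {
    // Hypothetical target -> dependencies map (names invented)
    static final Map<String, List<String>> DEPS = Map.of(
        "init",    List.of(),
        "compile", List.of("init"),
        "jar",     List.of("compile")
    );

    static void visit(String target, Set<String> seen, List<String> order) {
        if (!seen.add(target)) return;          // already handled
        for (String dep : DEPS.getOrDefault(target, List.of())) {
            visit(dep, seen, order);            // dependencies first
        }
        order.add(target);                      // then the target itself
    }

    public static List<String> sort(String target) {
        List<String> order = new ArrayList<>();
        visit(target, new HashSet<>(), order);
        return order;
    }

    public static void main(String[] args) {
        System.out.println(sort("jar"));        // [init, compile, jar]
    }
}
```

Ant's real topoSort additionally detects dependency cycles and reports them as a BuildException, which this sketch omits.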
http://mail-archives.apache.org/mod_mbox/ant-user/200305.mbox/%3C008301c3214b$49d8bab0$0a14a8c0@terre%3E
Microsoft Teams JavaScript client SDK

The Microsoft Teams JavaScript client SDK is part of the Microsoft Teams developer platform. It makes it easy to integrate your own services with Teams, whether you develop custom apps for your enterprise or SaaS applications for teams around the world. See The Microsoft Teams developer platform for full documentation on the platform and on the SDK.

Finding the SDK

The Teams client SDK is distributed as an npm package. The latest version can be found here:.

Installing the SDK

You can install the package using npm or yarn:

npm install --save @microsoft/teams-js
yarn add @microsoft/teams-js

Using the SDK

If you are using any dependency loader or module bundler such as RequireJS, SystemJS, browserify, or webpack, you can use import syntax to import specific modules. For example:

import * as microsoftTeams from "@microsoft/teams-js";

You can also reference the entire library in HTML pages using a script tag. There are three ways to do this:

Important: Do not copy/paste these <script src=... URLs from this page; they refer to a specific version of the SDK. To get the <script src=...></script> markup for the latest version, always go to.

<!-- Microsoft Teams JavaScript API (via CDN) -->
<script src="" crossorigin="anonymous"></script>

<!-- Microsoft Teams JavaScript API (via npm) -->
<script src="node_modules/@microsoft/teams-js@1.5.2/dist/MicrosoftTeams.min.js"></script>

<!-- Microsoft Teams JavaScript API (copied local) -->
<script src="MicrosoftTeams.min.js"></script>

The final option, using a local copy on your own servers, eliminates the dependency on an external host but requires hosting and updating a local copy of the SDK.

Tip: If you are a TypeScript developer, it is helpful to install the npm package as described above, even if you don't link to the copy of MicrosoftTeams.min.js in node_modules from your HTML, because IDEs such as Visual Studio Code will use it for IntelliSense and type checking.
Reference

The following sections contain reference pages for all the elements of the Teams client API. These pages are auto-generated from the source found in the npm module on. The source code for the SDK is located at. And remember that The Microsoft Teams developer platform has full documentation on using the platform and the SDK.
https://docs.microsoft.com/de-de/javascript/api/overview/msteams-client?view=msteams-client-js-latest&preserve-view=true
Here are the prerequisites to install the Drools plugin −

As Drools is a BRMS (Business Rule Management System) written in Java, we will cover how to add the desired plugins in this section. Considering that most Java users use Eclipse, let's see how to add the Drools 5.x.0 plugin in Eclipse.

Download the binaries from the following link −

After the download is complete, extract the files to your hard disk. Launch Eclipse and go to Help → Install New Software. Click on Add as shown in the following screenshot. Thereafter, click on Local as shown here and select "…/binaries/org.drools.updatesite". Select Drools and jBPM and click Next. Again click Next. Thereafter, accept the terms and license agreement and click Finish. Upon clicking Finish, the software installation starts −

Post successful installation, you will get the following dialog box − Click on Yes. Once Eclipse restarts, go to Windows → Preferences. You can see Drools under your preferences. Your Drools plugin installation is complete now.

Drools Runtime is required to instruct the editor to run the program with a specific version of the Drools jar. You can run your program/application with different Drools Runtimes. Click on Windows → Preferences → Drools → Installed Drools Runtimes. Then click on Add as shown in the following screenshot. Thereafter, click on Create a new Drools Runtime as shown here. Enter the path to the binaries folder where you have downloaded droolsjbpm-tools-distribution-5.3.0.Final.zip. Click on OK and provide a name for the Drools Runtime. The Drools Runtime is now created.

To create a basic Drools program, open Eclipse. Go to File → New → Project. Select Drools Project. Give a suitable name for the project. For example, DroolsTest. The next screen prompts you to select some files which you want in your first Drools project. Select the first two files. The first file is a .drl file (Drools Rule File) and the second file is a Java class for loading and executing the HelloWorld rule.
Click on Next → Finish. Once you click on Finish, a <DroolsTest> project is created in your workspace. Open the Java class, then right-click and run it as a Java application. You will see the output as shown here −

Next, we will discuss the terms frequently used in a Rule Engine.

If you see the default rule that is written in the Hello World project (Sample.drl), there are a lot of keywords used, which we will explain now.

Package − Every rule starts with a package name. The package acts as a namespace for rules. Rule names within a package must be unique. Packages in rules are similar to packages in Java.

Import statement − Whatever facts you want to apply the rule on need to be imported. For example, com.sample.DroolsTest.Message; in the above example.

Rule Definition − It consists of the rule name, the condition, and the consequence. Drools keywords are rule, when, then, and end. In the above example, the rule names are "Hello World" and "GoodBye". The when part is the condition in both the rules, and the then part is the consequence. In rule terminology, the when part is also called the LHS (left hand side) and the then part the RHS (right hand side) of the rule.

Now let us walk through the terms used in the Java file that loads the Drools engine and executes the rules.

Knowledge Base is an interface that manages a collection of rules, processes, and internal types. It is contained inside the package org.drools.KnowledgeBase. In Drools, these are commonly referred to as knowledge definitions or knowledge. Knowledge definitions are grouped into knowledge packages. Knowledge definitions can be added or removed. The main purpose of the Knowledge Base is to store and reuse them, because their creation is expensive. The Knowledge Base provides methods for creating knowledge sessions. The knowledge session is retrieved from the knowledge base. It is the main interface for interacting with the Drools Engine.
The knowledge session can be of two types −

Stateless Knowledge Session
Stateful Knowledge Session

A Stateless Knowledge Session is a stateless session that forms the simplest use case, not utilizing inference. A stateless session can be called like a function, passing it some data and then receiving some results back. Common examples of a stateless session include −

Validation − Is this person eligible for a mortgage?
Calculation − Compute a mortgage premium.
Routing and Filtering − Filter incoming messages, such as emails, into folders. Send incoming messages to a destination.

Stateful sessions are longer lived and allow iterative changes over time. Some common use cases for stateful sessions include −

Monitoring − Stock market monitoring and analysis for semi-automatic buying.
Diagnostics − Fault finding, medical diagnostics.
Logistics − Parcel tracking and delivery provisioning.

The KnowledgeBuilder interface is responsible for building a KnowledgePackage from knowledge definitions (rules, processes, types). It is contained inside the package org.drools.builder.KnowledgeBuilder. The knowledge definitions can be in various formats. If there are any problems with building, the KnowledgeBuilder reports errors through these two methods: hasErrors and getErrors. The following diagram explains the process.

In the above example, as we are taking a simple example of a stateless knowledge session, we have inserted the fact into the session, then the fireAllRules() method is called and you see the output. In the case of a stateful knowledge session, once the rules are fired, the stateful knowledge session object must call the method dispose() to release the session and avoid memory leaks.

As you saw, the .drl (rule file) has its own syntax, so let us cover some parts of the rule syntax in this chapter.

A rule can contain many conditions and patterns, such as −

The above conditions check if the Account balance is 200 or the Customer name is "Vivek".
A variable name in Drools starts with a Dollar ($) symbol. Drools can work with all the native Java types and even Enums. The special characters # or // can be used to mark single-line comments. For multi-line comments, use the following format:

/*
Another line
.........
.........
*/

Global variables are variables assigned to a session. They can be used for various reasons, as follows −

For input parameters (for example, constant values that can be customized from session to session).
For output parameters (for example, reporting: a rule could write some message to a global report variable).
Entry points for services such as logging, which can be used within rules.

Functions are a convenience feature. They can be used in conditions and consequences. Functions represent an alternative to utility/helper classes. For example,

function double calculateSquare (double value) {
   return value * value;
}

A dialect specifies the syntax used in any code expression that is in a condition or in a consequence. It includes return values, evals, inline evals, predicates, salience expressions, consequences, and so on. The default value is Java. Drools currently supports one more dialect called MVEL. The default dialect can be specified at the package level as follows −

package org.mycompany.somePackage
dialect "mvel"

MVEL is an expression language for Java-based applications. It supports field and method/getter access. It is based on Java syntax.

Salience is a very important feature of the rule syntax. Salience is used by the conflict resolution strategy to decide which rule to fire first. By default, it is the main criterion. We can use salience to define the order of firing rules. Salience has one attribute, which takes any expression that returns a number of type int (positive as well as negative numbers are valid). The higher the value, the more likely a rule will be picked up by the conflict resolution strategy to fire.
salience ($account.balance * 5)

The default salience value is 0. We should keep this in mind when assigning salience values to some rules only. There are a lot of other features/parameters in the rule syntax, but we have covered only the important ones here.

Rule consequence keywords are the keywords used in the "then" part of the rule.

Modify − The attributes of the fact can be modified in the then part of the rule.
Insert − Based on some condition, if true, one can insert a new fact into the current session of the rule engine.
Retract − If a particular condition is true in a rule and you don't want to act on that fact any further, you can retract the particular fact from the rule engine.

Note − It is considered a very bad practice to have conditional logic (if statements) within a rule consequence. Most of the time, a new rule should be created.

In this chapter, we will create a Drools project for the following problem statement −

Depending upon the city and the kind of product (combination of city and product), find out the local tax related to that city.

We will have two DRL files for our Drools project. The two DRL files will signify two cities in consideration (Pune and Nagpur) and four types of products (groceries, medicines, watches, and luxury goods). The tax on medicines in both the cities is considered as zero. For groceries, we have assumed a tax of Rs 2 in Pune and Rs 1 in Nagpur. We have used the same selling price to demonstrate different outputs. Note that all the rules are getting fired in the application.
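Before moving on to the project, here is a hedged DRL sketch (fact and rule names are invented for illustration) that ties together the syntax elements from this chapter: salience, modify, insert, and retract −

```
// Hypothetical rules - Claim and Escalation are not part of the project below
rule "Approve small claim"
   salience 10                        // fires before lower-salience rules
   when
      $c : Claim( amount < 1000, status == "NEW" )
   then
      modify( $c ) { setStatus( "APPROVED" ) }   // modify the matched fact
end

rule "Escalate large claim"
   salience 0                         // the default salience
   when
      $c : Claim( amount >= 1000, status == "NEW" )
   then
      insert( new Escalation( $c ) ); // insert a new fact into the session
      retract( $c );                  // retract the processed fact
end
```

With both rules activated, the higher salience value makes "Approve small claim" fire first under the default conflict resolution strategy.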
Here is the model to hold each itemType −

package com.sample;

import java.math.BigDecimal;

public class ItemCity {
   public enum City {
      PUNE,
      NAGPUR
   }

   public enum Type {
      GROCERIES,
      MEDICINES,
      WATCHES,
      LUXURYGOODS
   }

   private City purchaseCity;
   private BigDecimal sellPrice;
   private Type typeofItem;
   private BigDecimal localTax;

   public City getPurchaseCity() {
      return purchaseCity;
   }

   public void setPurchaseCity(City purchaseCity) {
      this.purchaseCity = purchaseCity;
   }

   public BigDecimal getSellPrice() {
      return sellPrice;
   }

   public void setSellPrice(BigDecimal sellPrice) {
      this.sellPrice = sellPrice;
   }

   public Type getTypeofItem() {
      return typeofItem;
   }

   public void setTypeofItem(Type typeofItem) {
      this.typeofItem = typeofItem;
   }

   public BigDecimal getLocalTax() {
      return localTax;
   }

   public void setLocalTax(BigDecimal localTax) {
      this.localTax = localTax;
   }
}

As suggested earlier, we have used two DRL files here: Pune.drl and Nagpur.drl.

This is the DRL file that executes rules for Pune city.

// created on: Dec 24, 2014
package droolsexample

// list any import classes here.
import com.sample.ItemCity;
import java.math.BigDecimal;

rule "Pune Medicine Item"
   when
      item : ItemCity(purchaseCity == ItemCity.City.PUNE, typeofItem == ItemCity.Type.MEDICINES)
   then
      BigDecimal tax = new BigDecimal(0.0);
      item.setLocalTax(tax.multiply(item.getSellPrice()));
end

rule "Pune Groceries Item"
   when
      item : ItemCity(purchaseCity == ItemCity.City.PUNE, typeofItem == ItemCity.Type.GROCERIES)
   then
      BigDecimal tax = new BigDecimal(2.0);
      item.setLocalTax(tax.multiply(item.getSellPrice()));
end

This is the DRL file that executes rules for Nagpur city.

// created on: Dec 26, 2014
package droolsexample

// list any import classes here.
import com.sample.ItemCity;
import java.math.BigDecimal;

rule "Nagpur Medicine Item"
   when
      item : ItemCity(purchaseCity == ItemCity.City.NAGPUR, typeofItem == ItemCity.Type.MEDICINES)
   then
      BigDecimal tax = new BigDecimal(0.0);
      item.setLocalTax(tax.multiply(item.getSellPrice()));
end

rule "Nagpur Groceries Item"
   when
      item : ItemCity(purchaseCity == ItemCity.City.NAGPUR, typeofItem == ItemCity.Type.GROCERIES)
   then
      BigDecimal tax = new BigDecimal(1.0);
      item.setLocalTax(tax.multiply(item.getSellPrice()));
end

We have written the DRL files based on city, as it gives us the extensibility to add any number of rule files later if new cities are added.
To demonstrate that all the rules get triggered from our rule files, we have used two item types (medicines and groceries); medicine is tax-free and groceries are taxed as per the city. Our test class loads the rule files, inserts the facts into the session, and produces the output.

package com.sample;

import java.math.BigDecimal;

import org.drools.KnowledgeBase;
import org.drools.KnowledgeBaseFactory;
import org.drools.builder.KnowledgeBuilder;
import org.drools.builder.KnowledgeBuilderError;
import org.drools.builder.KnowledgeBuilderErrors;
import org.drools.builder.KnowledgeBuilderFactory;
import org.drools.builder.ResourceType;
import org.drools.io.ResourceFactory;
import org.drools.runtime.StatefulKnowledgeSession;

import com.sample.ItemCity.City;
import com.sample.ItemCity.Type;

/*
 * This is a sample class to launch a rule.
 */
public class DroolsTest {
   public static final void main(String[] args) {
      try {
         // load up the knowledge base
         KnowledgeBase kbase = readKnowledgeBase();
         StatefulKnowledgeSession ksession = kbase.newStatefulKnowledgeSession();

         ItemCity item1 = new ItemCity();
         item1.setPurchaseCity(City.PUNE);
         item1.setTypeofItem(Type.MEDICINES);
         item1.setSellPrice(new BigDecimal(10));
         ksession.insert(item1);

         ItemCity item2 = new ItemCity();
         item2.setPurchaseCity(City.PUNE);
         item2.setTypeofItem(Type.GROCERIES);
         item2.setSellPrice(new BigDecimal(10));
         ksession.insert(item2);

         ItemCity item3 = new ItemCity();
         item3.setPurchaseCity(City.NAGPUR);
         item3.setTypeofItem(Type.MEDICINES);
         item3.setSellPrice(new BigDecimal(10));
         ksession.insert(item3);

         ItemCity item4 = new ItemCity();
         item4.setPurchaseCity(City.NAGPUR);
         item4.setTypeofItem(Type.GROCERIES);
         item4.setSellPrice(new BigDecimal(10));
         ksession.insert(item4);

         ksession.fireAllRules();

         System.out.println(item1.getPurchaseCity().toString() + " " + item1.getLocalTax().intValue());
         System.out.println(item2.getPurchaseCity().toString() + " " + item2.getLocalTax().intValue());
         System.out.println(item3.getPurchaseCity().toString() + " " + item3.getLocalTax().intValue());
         System.out.println(item4.getPurchaseCity().toString() + " " + item4.getLocalTax().intValue());
      } catch (Throwable t) {
         t.printStackTrace();
      }
   }

   private static KnowledgeBase readKnowledgeBase() throws Exception {
      KnowledgeBuilder kbuilder = KnowledgeBuilderFactory.newKnowledgeBuilder();
      kbuilder.add(ResourceFactory.newClassPathResource("Pune.drl"), ResourceType.DRL);
      kbuilder.add(ResourceFactory.newClassPathResource("Nagpur.drl"), ResourceType.DRL);
      KnowledgeBuilderErrors errors = kbuilder.getErrors();
      if (errors.size() > 0) {
         for (KnowledgeBuilderError error : errors) {
            System.err.println(error);
         }
         throw new IllegalArgumentException("Could not parse knowledge.");
      }
      KnowledgeBase kbase = KnowledgeBaseFactory.newKnowledgeBase();
      kbase.addKnowledgePackages(kbuilder.getKnowledgePackages());
      return kbase;
   }
}

If you run this program, its output would be as follows −

PUNE 0
PUNE 20
NAGPUR 0
NAGPUR 10

For both Pune and Nagpur, when the item is a medicine, the local tax is zero; whereas when the item is a grocery product, the tax is as per the city. More rules can be added in the DRL files for other products. This is just a sample program.

Here we will demonstrate how to call a static function from a Java file within your DRL file. First of all, create a class HelloCity.java in the same package com.sample.

package com.sample;

public class HelloCity {
   public static void writeHello(String name) {
      System.out.println("HELLO " + name + "!!!!!!");
   }
}

Thereafter, add the import statement in the DRL file to call the writeHello method from the DRL file. In the following code block, the changes in the DRL file Pune.drl are highlighted in yellow.

// created on: Dec 24, 2014
package droolsexample

// list any import classes here.
import com.sample.ItemCity;
import java.math.BigDecimal;
import com.sample.HelloCity;

rule "Pune Medicine Item"
   when
      item : ItemCity(purchaseCity == ItemCity.City.PUNE, typeofItem == ItemCity.Type.MEDICINES)
   then
      BigDecimal tax = new BigDecimal(0.0);
      item.setLocalTax(tax.multiply(item.getSellPrice()));
      HelloCity.writeHello(item.getPurchaseCity().toString());
end

rule "Pune Groceries Item"
   when
      item : ItemCity(purchaseCity == ItemCity.City.PUNE, typeofItem == ItemCity.Type.GROCERIES)
   then
      BigDecimal tax = new BigDecimal(2.0);
      item.setLocalTax(tax.multiply(item.getSellPrice()));
end

Run the program again and its output would be as follows −

HELLO PUNE!!!!!!
PUNE 0
PUNE 20
NAGPUR 0
NAGPUR 10

The difference in the output is marked in yellow, which shows the output of the static method in the Java class. The advantage of calling a Java method is that we can write any utility/helper function in Java and call it from a DRL file.

There are different ways to debug a Drools project. Here, we will write a utility class to let you know which rules are being triggered or fired. With this approach, you can check which rules are getting triggered in your Drools project. Here is our utility class −

package com.sample;

import org.drools.spi.KnowledgeHelper;

public class Utility {
   public static void help(final KnowledgeHelper drools, final String message) {
      System.out.println(message);
      System.out.println("\nrule triggered: " + drools.getRule().getName());
   }

   public static void helper(final KnowledgeHelper drools) {
      System.out.println("\nrule triggered: " + drools.getRule().getName());
   }
}

The first method, help, prints the rule triggered along with some extra information which you can pass as a String via the DRL file. The second method, helper, prints whether the particular rule was triggered or not. We have added one of the utility methods in each DRL file. We have also added the import function in the DRL file (Pune.drl). In the then part of the rule, we have added the utility function call. The modified Pune.drl is given below. Changes are highlighted in blue.

// created on: Dec 24, 2014
package droolsexample

// list any import classes here.
import com.sample.ItemCity;
import java.math.BigDecimal;
import com.sample.HelloCity;
import function com.sample.Utility.helper;

rule "Pune Medicine Item"
   when
      item : ItemCity(purchaseCity == ItemCity.City.PUNE, typeofItem == ItemCity.Type.MEDICINES)
   then
      BigDecimal tax = new BigDecimal(0.0);
      item.setLocalTax(tax.multiply(item.getSellPrice()));
      HelloCity.writeHello(item.getPurchaseCity().toString());
      helper(drools);
end

rule "Pune Groceries Item"
   when
      item : ItemCity(purchaseCity == ItemCity.City.PUNE, typeofItem == ItemCity.Type.GROCERIES)
   then
      BigDecimal tax = new BigDecimal(2.0);
      item.setLocalTax(tax.multiply(item.getSellPrice()));
      helper(drools);
end

Similarly, we have added the other utility function in the second DRL file (Nagpur.drl). Here is the modified code −

// created on: Dec 26, 2014
package droolsexample

// list any import classes here.
import com.sample.ItemCity;
import java.math.BigDecimal;
import function com.sample.Utility.help;

rule "Nagpur Medicine Item"
   when
      item : ItemCity(purchaseCity == ItemCity.City.NAGPUR, typeofItem == ItemCity.Type.MEDICINES)
   then
      BigDecimal tax = new BigDecimal(0.0);
      item.setLocalTax(tax.multiply(item.getSellPrice()));
      help(drools, "added info");
end

rule "Nagpur Groceries Item"
   when
      item : ItemCity(purchaseCity == ItemCity.City.NAGPUR, typeofItem == ItemCity.Type.GROCERIES)
   then
      BigDecimal tax = new BigDecimal(1.0);
      item.setLocalTax(tax.multiply(item.getSellPrice()));
      help(drools, "info");
end

Run the program again and it should produce the following output −

info

rule triggered: Nagpur Groceries Item
added info

rule triggered: Nagpur Medicine Item

rule triggered: Pune Groceries Item
HELLO PUNE!!!!!!

rule triggered: Pune Medicine Item
PUNE 0
PUNE 20
NAGPUR 0
NAGPUR 10

Both the utility functions are called, and the output shows whether each particular rule was called or not. In the above example, all the rules are being called, but in an enterprise application, this utility function can be really useful to debug and find out whether a particular rule was fired or not.

You can debug the rules during the execution of your Drools application. You can add breakpoints in the consequences of your rules, and whenever such a breakpoint is encountered during the execution of the rules, execution is stopped temporarily. You can then inspect the variables known at that point as you do in a Java application, and use the normal debugging options available in Eclipse.
To create a breakpoint in your DRL file, just double-click on the line where you want to create it. Remember, you can only create a breakpoint in the then part of a rule. A breakpoint can be removed by double-clicking on the breakpoint in the DRL editor.

After applying the breakpoints, you need to debug your application as a Drools application. Drools breakpoints (breakpoints in a DRL file) only work if your application is being debugged as a Drools application. Here is how you need to do the same −

Once you debug your application as a Drools application, you will see the control stop in the DRL file as shown in the following screenshot −

You can see the variables and the current values of the object at that debug point. The same controls of F6 to move to the next line and F8 to jump to the next debug point are applicable here as well. In this way, you can debug your Drools application.

Note − The debug perspective in a Drools application works only if the dialect is MVEL, until Drools 5.x.
https://www.tutorialspoint.com/drools/drools_quick_guide.htm
ReactOS is an open source alternative to the Windows operating system. Even though the first version of ReactOS dates back to 1998, there is still no 'stable' version of ReactOS. Maybe the most important reason is a lack of attention. I want to introduce how to compile and run a first very simple C# application on the current version 0.4.7.

I run ReactOS on ORACLE VM VirtualBox 5.1.26 and VMWare Workstation Player 12.5.8 and strictly recommend VirtualBox - ReactOS runs much more stable on VirtualBox. The virtual machine has been created with 2048 MByte RAM and 25 GByte HDD. The default network adapter "Intel PRO/1000 MT Desktop (8254OEM)" must be changed to "PC-net PCI II (Am79C970A)" in order to work properly.

The first installation I do is Firefox 45.0.1 or 48.0.2 (can be installed from the ReactOS Applications Manager). Immediately after installation, I open [≡]|[Options] and switch [Search]|[Search Engine] to "DuckDuckGo" (my personal affinity to search without supervision and get results not ordered by the preference of the search machine provider) as well as [Advanced]|[Update] "Automatically ..." to "Never ..." (because newer Firefox versions, e.g. 52.0.1esr, will not start on ReactOS).

Unfortunately, the installations of .NET Framework 1.1 (even if it is an officially supported installation within the ReactOS 'Applications Manager') and .NET 2.0 SDK fail. There is also an additional message during the .NET Framework 2.0 installation: The .NET 2.0 setup runs through the complete procedure without any error message or warning, but the compilation of even the most simple C# application fails with 'fatal error CVT1103', as for older versions of ReactOS.

fatal error CVT1103

This prevents installing IDEs that are based on .NET 1.1 or 2.0 (SharpDevelop 1.1 or 2.2, Visual Studio 2003 or 2005, ...).
Among the many attempts to install .NET Framework, two installations run through the complete setup procedure without any error message or warning. Although the installations are successful, the compilation of even the most simple C# application fails for both with 'fatal error CVT1103' (for source code, see section Using the Code at the bottom of the article). Nevertheless, the .NET 4.0 Runtime works and we need it for the following actions. Happily, there is an escape from this limiting situation (being unable to compile C# code): MONO. The ReactOS versions 0.4.7 and 0.4.8 report themselves as 'ReactOS [Version 5.2.3790] Service pack 2' (which is equivalent to Windows Server 2003). The latest fully functional MONO version (a good download source is download.mono-project.com/archive) that can be installed for ReactOS version 0.4.7 is MONO 3.2.3. The MONO versions newer than 3.2.3 contain a MONO runtime (mono.exe) that cannot be executed on ReactOS version 0.4.7. All these runtimes report the same error:

ERROR_BAD_EXE_FORMAT - mono.exe is not a valid Win32 application.

Consequently, I recommend installing MONO 3.2.3. Since MONO 3.2.3 installs to C:\Program Files\Mono-3.2.3 and the MONO versions newer than 3.2.3 install to C:\Program Files\Mono, MONO 3.2.3 and a newer version can be installed in parallel as well. This provides the opportunity to test newer assemblies from case to case. Newer MONO versions that install successfully but are not fully functional (the mcs.exe compiler works, but the mono.exe runtime doesn't) also exist. The MONO versions newer than 4.0.3 contain a MONO runtime (mono.exe) that cannot be executed on ReactOS version 0.4.8. The MONO versions 4.2.1 and 4.3.2 report the same error:

The procedure entry point InitializeConditionVariable could not be located in the dynamic link library KERNEL32.dll.
Consequently, I recommend installing MONO 4.0.3 on ReactOS 0.4.7/0.4.8; it installs to C:\Program Files\Mono. On ReactOS 0.4.9/0.4.10, I recommend installing MONO 4.3.2; it also installs to C:\Program Files\Mono. All versions I can recommend (MONO 4.0.3 32Bit on ReactOS 0.4.7/8 and MONO 4.3.2 32Bit on ReactOS 0.4.9/10) report themselves as:

Environment Version: 4.0.30319.1

Even newer versions than MONO 4.3.2 require at least Vista (NT 6.0). Although all mentioned setups up to MONO 4.0.3 32Bit (on ReactOS 0.4.7/8) or MONO 4.3.2 32Bit (on ReactOS 0.4.9/10) run through the complete setup procedure without any error message or warning, they do not install Gtk# correctly. This becomes obvious if a separate Gtk# installation is made. All gtk-sharp-2.12.xx.msi setups (a good download source is, that currently provides gtk-sharp versions 2.12.20 to 2.12.45) report the same error:

The procedure entry point if_nametoindex could not be located in the dynamic link library IPHLPAPI.DLL.

(Only the gtk-sharp-2.99.3.msi setup doesn't report this error, but it installs the Gtk# 3 preview.) The incomplete installation of Gtk# 2.12.9 or higher (required for MonoDevelop-2.8.6.5.msi) prevents using MonoDevelop. Since none of the professional C# IDEs (SharpDevelop, Visual Studio, MonoDevelop) run on ReactOS 0.4.7, 0.4.8 and 0.4.9, I fall back to Notepad++ (version 6.9 can be installed from the ReactOS Applications Manager). After a Plugin Manager update, the NppExec plugin can be installed. A very simple C# application shall be used to demonstrate the general operational capability.
using System;
using System.Reflection;

namespace ConsoleApp01
{
    public class ConsoleApp01
    {
        public static void Main(string[] args)
        {
            Console.WriteLine("");
            Console.WriteLine("");
            Console.WriteLine("Hello from ReactOS on " + Environment.MachineName + "!");
            Console.WriteLine("OS Version: " + Environment.OSVersion);
            Console.WriteLine("Image runtime Version: " +
                Assembly.GetExecutingAssembly().ImageRuntimeVersion.ToString());
            Console.WriteLine("Environment Version: " + Environment.Version.ToString());
            Console.WriteLine("Setup information: " + AppDomain.CurrentDomain.SetupInformation);
            Console.WriteLine("");
            Console.Write("Press any key to continue...");
            Console.ReadKey();
        }
    }
}

As a first step, I recommend checking/extending the Path variable. It can be edited via Start | Settings | Control Panel | System | Advanced | Environment Variables and should include ";C:\Program Files\Mono-3.2.3\bin;C:\Program Files\GtkSharp\2.12\bin" for MONO 3.2.3 or ";C:\Program Files\Mono\bin;C:\Program Files\GtkSharp\2.12\bin" for MONO 3.12.1 or higher.

As a second step, I recommend creating a working folder C:\Documents and Settings\<user>\My Documents\.NET Apps, a project folder ConsoleApp01 and the source code file ConsoleApp01.cs. Within Notepad++ (with the NppExec plugin installed), the [F6] key opens the command window, where new command scripts can be created and saved, as well as started. The command script I use is:

NPP_SAVE
CD $(CURRENT_DIRECTORY)
C:\Program Files\Mono\lib\mono\4.5\mcs.exe "$(FILE_NAME)"

Unhandled Exception: System.BadImageFormatException: Could not load file or assembly 'mcs, Version=4.0.3.0, Culture=neutral, PublicKeyToken=null' or one of its dependencies. File is corrupt. (Exception from HRESULT: 0x8013110E) ---> System.BadImageFormatException: File is corrupt. (Exception from HRESULT: 0x8013110E)
--- End of inner exception stack trace ---

This indicates a failed MONO installation.
In this case, MONO should be uninstalled, the MONO installation folder should be deleted (or cleaned and renamed, if deletion fails due to file system errors) and MONO should be installed again. In my case, many files of the MONO installation either have not been installed or have corrupt file information. I can recommend starting ReactOS in debug mode to repair corrupt file information (even if the files are not repaired and must be installed again). The resulting application, executed with the MONO runtime, produces this output:

Hello from ReactOS on REACTOS-047!
OS Version: Microsoft Windows NT 5.2.3790.131072 Service Pack 2
Image runtime Version: v4.0.30319
Environment Version: 4.0.30319.17020
Setup information: System.AppDomainSetup
Press any key to continue...

The resulting application, executed with the Microsoft .NET 4.0 runtime, produces this output:

Hello from ReactOS on REACTOS-047!
OS Version: Microsoft Windows NT 5.2.3790 Service Pack 2
Image runtime Version: v4.0.30319
Environment Version: 4.0.30319.1
Setup information: System.AppDomainSetup
Press any key to continue...

Although there are a lot of pitfalls and restrictions, it is possible to compile and run .NET applications on ReactOS.
https://www.codeproject.com/Tips/1222079/Introduction-to-Csharp-on-ReactOS
Document-Based Apps Using SwiftUI
SwiftUI makes it easier than ever to create document-based apps that work with the iOS document interaction system. In this tutorial, you'll learn how to create a SwiftUI document-based meme-maker app.
Version - Swift 5, iOS 14, Xcode 12
Documents are central to computing, and SwiftUI document-based apps make it easier than ever to work with the iOS document interaction system, integrating your app with all the cool features of the Files app. In this tutorial, you'll work on MemeMaker, an app that lets you create your own memes and persist them as their own meme document type. You'll learn about the following topics:
- What are Uniform Type Identifiers (UTIs)?
- Which components comprise SwiftUI document-based apps?
- How do you define your own document with a unique extension?
- How do you run SwiftUI document-based apps on iOS/iPadOS and macOS?
Without further ado, it's time to dive in.
Getting Started
Download the starter project by clicking the Download Materials button at the top or bottom of the tutorial. Build and run. This is what the app looks like:
Tap the + button in the upper-right corner to create a new document. A text editor will open with "Hello, world!" shown. Change the text to SwiftUI rocks! and close the document by tapping the back button in the upper-left corner. Switch to the Browse tab to find the document you just created. The tab looks like this:
Open the new file by tapping it. The text editor opens and you can read the text you entered. This is a good starting point for creating an editor for memes. You'll modify this app so that instead of working with raw text, it works with a meme document type. This is where UTIs come in.
Defining Exported Type Identifiers
A Uniform Type Identifier, or UTI, is, in Apple's words, a "unique identifier for a particular file type, data type, directory or bundle type." For instance, a JPEG image is a particular file type, and it's uniquely identified by the UTI string public.jpeg. Likewise, a text file written in the popular Markdown markup language is uniquely identified by the UTI net.daringfireball.markdown.
What is the value of UTIs? Because UTIs are unique identifiers, they provide an unambiguous way for your app to tell the operating system what kind of documents it's able to open and create. Since iOS doesn't ship with built-in support for a "meme" document, you'll add a new UTI to your app for meme files. This is straightforward in Xcode.
Before diving into code, you need to make some changes to the project setup. Select the MemeMaker (iOS) target in your project settings, select the Info tab and expand the Exported Type Identifiers section. This is the place to define the type and metadata of your document. Currently, this is still set up for a text document. Make the following changes:
- Change Description to A meme created with MemeMaker. You can see the description, e.g., in the information window of Finder.
- Change Identifier to com.raywenderlich.MemeMaker.meme. Other apps can use this identifier to import your documents.
- Change Conforms to to "public.data, public.content". These are UTIs, and they describe the type of data your UTI is using. In programming parlance, you can think of these as the protocols your UTI conforms to. There are many types you can use, such as public.data or public.image. You'll find a list of all available UTIs in Apple's documentation or on Wikipedia.
- Change Extension to meme. This is the .meme file extension that's added to the documents you create with MemeMaker.
Great! Now you're ready to create documents with your new extension, .meme.
Using a DocumentGroup
DocumentGroup is a scene presenting the system UI for handling documents. You can see how it looks in the screenshots above. SwiftUI makes it super easy to use the document browser. All that's needed is to follow the code found in MemeMakerApp.swift:

DocumentGroup(newDocument: MemeMakerDocument()) { file in
  ContentView(document: file.$document)
}

DocumentGroup has two initializers when handling documents: init(newDocument:editor:) and init(viewing:viewer:). The first one allows you to create new documents and edit existing documents, while the second one is only able to view files. Because you want to create and edit memes, the starter project uses the first initializer. The initializer receives the document it should show. In this case, you're initializing a new empty MemeMakerDocument, which you'll work on later. The initializer also receives a closure that builds the file editing view.
Working With a File Document
FileDocument is the base protocol for a document that an app can read and write to the device. This protocol contains two static properties: readableContentTypes and writableContentTypes. Both are UTType arrays defining the types the document can read and write, respectively. Only readableContentTypes is required, because writableContentTypes defaults to readableContentTypes.
FileDocument also requires an initializer taking a FileDocumentReadConfiguration. This configuration bundles a document's type in the form of UTType, along with a FileWrapper containing its content. Finally, any class or struct conforming to FileDocument needs to implement fileWrapper(configuration:). It's called when a document is written, and it takes a FileDocumentWriteConfiguration as a parameter, which is similar to the read configuration, but used for writing. This may sound like a lot of work, but don't worry. In this section of the tutorial, you'll look at how to use these two configurations.
Defining Exported UTTypes
Open MemeMakerDocument.swift. At the top of the file, you'll find an extension on UTType that defines the type the starter project is using. Replace this extension with the following code:

extension UTType {
  static let memeDocument = UTType(
    exportedAs: "com.raywenderlich.MemeMaker.meme")
}

In the code above, you're defining memeDocument as a new UTType so that you can use it in the next step. Still in MemeMakerDocument.swift, find readableContentTypes. As mentioned before, this defines a list of UTTypes the app can read and write. Replace the property with this new code:

static var readableContentTypes: [UTType] { [.memeDocument] }

This sets the new type you created earlier as a type that MemeMakerDocument can read. Since writableContentTypes defaults to readableContentTypes, you don't need to add it.
Creating the Data Model
Before you can continue working on MemeMakerDocument, you need to define the meme it works with. Create a new Swift file called Meme.swift in the Shared group and select both checkboxes in Targets so it'll be included in both the iOS and the macOS targets. Add the following code:

struct Meme: Codable {
  var imageData: Data?
  var topText: String
  var bottomText: String
}

MemeMaker will save a Meme to disk. It conforms to Codable, so you can convert it to Data and back using JSONEncoder and JSONDecoder. It also wraps all the information needed to represent a Meme: two strings and an image's data.
Open MemeMakerDocument.swift again and find this code at the beginning of the class:

var text: String

init(text: String = "Hello, world!") {
  self.text = text
}

MemeMakerDocument can now hold the actual Meme instead of text. So replace these lines with the following code:

// 1
var meme: Meme

// 2
init(
  imageData: Data? = nil,
  topText: String = "Top Text",
  bottomText: String = "Bottom Text"
) {
  // 3
  meme = Meme(
    imageData: imageData,
    topText: topText,
    bottomText: bottomText)
}

This is what's happening in the code above:
- This is the meme represented by an instance of MemeMakerDocument.
- You define an initializer for MemeMakerDocument. The initializer receives the data for an image and both the top and bottom text.
- Finally, you initialize a new Meme given these parameters.
At this point, you'll see errors in your code. Don't worry — there are a couple of additional changes you need to make to encode and decode a document when saving and loading a file.
Encoding and Decoding the Document
First, make a change to fileWrapper(configuration:). Replace the method body with these lines:

let data = try JSONEncoder().encode(meme)
return .init(regularFileWithContents: data)

This converts the meme to data and creates a WriteConfiguration that the system uses to write this document to disk. Next, replace the body of init(configuration:) with the following code:

guard let data = configuration.file.regularFileContents else {
  throw CocoaError(.fileReadCorruptFile)
}
meme = try JSONDecoder().decode(Meme.self, from: data)

The app calls this initializer when an existing document is opened. You try to get the data from the given ReadConfiguration and convert it to an instance of Meme. If the process fails, the initializer will throw an error which the system deals with. You've now added support for reading and writing custom meme documents to your app. However, the user still can't see any of this since you're not showing a meme editor. You'll solve that problem in the next section.
Providing a Custom Editor
Currently, the app uses a TextEditor. The template for SwiftUI document-based multi-platform apps starts with this view. It's used to present editable and scrollable text. TextEditor isn't suitable for creating and editing memes, so you'll create your own view to edit a MemeMakerDocument.
Before you start creating your new editor view, you'll remove the old one. Open ContentView.swift and replace body with an empty view:

Spacer()

This makes sure you don't get compiler errors while building up your new editor.
Creating the Image Layer
The editor will consist of two subviews. You'll create these before creating the actual editor. The first one is ImageLayer, a view that's representing the image. Create a new SwiftUI View file in Shared called ImageLayer.swift and select both checkboxes for MemeMaker (iOS) and MemeMaker (macOS) in Targets. Replace the two structs in the file with the following:

struct ImageLayer: View {
  // 1
  @Binding var imageData: Data?

  // 2
  var body: some View {
    NSUIImage.image(fromData: imageData ?? Data())
      .resizable()
      .aspectRatio(contentMode: .fit)
  }
}

// 3
struct ImageLayer_Previews: PreviewProvider {
  static let imageData = NSUIImage(named: "AppIcon")!.data

  static var previews: some View {
    ImageLayer(imageData: .constant(imageData))
      .previewLayout(.fixed(width: 100, height: 100))
  }
}

Here's what the code above is doing:
- ImageLayer has a SwiftUI binding to the meme image's data. In a later step, MemeEditor will pass the data to this view.
- Its body consists of an NSUIImage, a view you initialize with the image data. You may wonder what this view is. It's a typealias for UIImage on iOS and NSImage on macOS, together with an extension. It allows for one common type for images, which has the same methods and properties on both platforms. You can find it in the NSUIImage_iOS.swift file in the iOS group and NSUIImage_macOS.swift in the macOS group. It uses the correct type depending on whether you're running MemeMaker (iOS) or MemeMaker (macOS).
- Finally, you add a preview to support Xcode's previewing feature.
Take a look at the preview to make sure your view is showing an image. Now that you are showing the image, you can move on to showing the text!
Creating the Text Layer
TextLayer is the second subview, and it positions the top and bottom text above the image. Again, create a new SwiftUI View file in Shared and call it TextLayer.swift. Remember to check MemeMaker (iOS) and MemeMaker (macOS) as Targets. Replace the generated TextLayer struct with this:

struct TextLayer<ImageContent: View>: View {
  @Binding var meme: Meme
  let imageContent: () -> ImageContent
}

TextLayer has two properties: meme, holding the Meme that's shown; and imageContent. imageContent is a closure to create another view inside of TextLayer's body. Note that you declared the view as a generic struct where the image content view can be anything that conforms to View. Next, add the body to the view:

var body: some View {
  ZStack(alignment: .bottom) {
    ZStack(alignment: .top) {
      imageContent()
      MemeTextField(text: $meme.topText)
    }
    MemeTextField(text: $meme.bottomText)
  }
}

You use two ZStacks in body to place the top text at the top of the image and the bottom text at its bottom. To show the image, you call the closure passed to your TextLayer view. To show the text, you use MemeTextField, a normal TextField set up in your starter project to show formatted text. Finally, replace the preview with the following:

struct TextLayer_Previews: PreviewProvider {
  @State static var meme = Meme(
    imageData: nil,
    topText: "Top Text Test",
    bottomText: "Bottom Text Test"
  )

  static var previews: some View {
    TextLayer(meme: $meme) {
      Text("IMAGE")
        .frame(height: 100)
    }
  }
}

Take a look at the preview. Right now it's not looking like much of a meme. Not to worry: in the next section, you'll combine both the image and text layers to create MemeEditor.
Creating a Meme Editor
All the files you created before are independent of the platform. But MemeEditor will use different platform-specific methods to import images based on whether the app runs on iOS/iPadOS or macOS.
In a later step, you'll create another MemeEditor to show on macOS, but for now, start with the iOS and iPadOS version. Create a new SwiftUI view file, MemeEditor_iOS.swift. This time it shouldn't be in the Shared group but in iOS. Remember to check only the MemeMaker (iOS) target. Replace the view in the file with the following code:

struct MemeEditor: View {
  @Binding var meme: Meme
  @State var showingImagePicker = false
  @State private var inputImage: NSUIImage?
}

MemeEditor has a binding to the meme it presents, together with two properties. You'll use showingImagePicker to decide when to present the image picker that lets your user select an image. You will then store the image in inputImage. Next, add a new method to the struct to store the input image:

func loadImage() {
  guard let inputImage = inputImage else { return }
  meme.imageData = inputImage.data
}

Now you can add the body inside the view:

var body: some View {
  // 1
  TextLayer(meme: $meme) {
    // 2
    Button {
      showingImagePicker = true
    } label: {
      if meme.imageData != nil {
        ImageLayer(imageData: $meme.imageData)
      } else {
        Text("Add Image")
          .foregroundColor(.white)
          .padding()
          .background(Color("rw-green"))
          .cornerRadius(30)
          .padding(.vertical, 50)
      }
    }
  }
  // 3
  .sheet(isPresented: $showingImagePicker, onDismiss: loadImage) {
    UIImagePicker(image: $inputImage)
  }
}

Here's what's going on in the body:
- First, create a new TextLayer and pass it both a binding to meme and a closure to create the ImageLayer.
- In this closure, define a button that sets showingImagePicker to true when tapped. Use the ImageLayer defined above as its label, or show a button if the meme doesn't yet contain an image.
- Use sheet to show a UIImagePicker whenever showingImagePicker is set to true. UIImagePicker is a wrapper around UIImagePickerController to make it usable with SwiftUI. It allows users to select an image from their device, and it calls loadImage whenever the picker is dismissed.
Next, replace the preview in the file with the following:

struct MemeEditor_Previews: PreviewProvider {
  @State static var meme = Meme(
    imageData: nil,
    topText: "Top Text Test",
    bottomText: "Bottom Text Test"
  )

  static var previews: some View {
    MemeEditor(meme: $meme)
  }
}

Your preview should now show a test of your view. Finally, open ContentView.swift. Replace the contents of body with the following code, which is a dedicated meme editor as opposed to a text editor:

MemeEditor(meme: $document.meme)

Here you replaced TextEditor with the new MemeEditor. You pass the document's meme to MemeEditor, letting the user manipulate and work on a meme. Finally, after all this coding, MemeMaker is ready to run on an iPhone! Select the MemeMaker (iOS) scheme and build and run. Create a new document, which looks like this:
Now you can choose a funny image, add some text and improve your meme-making skills. Try to create a funny meme like this one:
Good work! :]
Using the App on macOS
A big advantage of SwiftUI is that you can use it on all Apple platforms. But although you used NSUIImage, there are still some changes you need to make before you can run MemeMaker on macOS.
Implementing a MemeEditor for macOS
Because MemeEditor uses UIImagePickerController, you can't use it on macOS. Instead, you'll create another version of MemeEditor that's used when running the app on macOS. It'll use NSOpenPanel to let the user select an image as the background of the meme. But thanks to SwiftUI, most of the views can stay the same. You can reuse both ImageLayer and TextLayer. The only difference is how the user selects an image. Create a new SwiftUI View file in the macOS group and call it MemeEditor_macOS.swift. Only check the MemeMaker (macOS) target.
Replace the contents of this file with the following code:

import SwiftUI

struct MemeEditor: View {
  @Binding var meme: Meme

  var body: some View {
    VStack {
      if meme.imageData != nil {
        TextLayer(meme: $meme) {
          ImageLayer(imageData: $meme.imageData)
        }
      }
      Button(action: selectImage) {
        Text("Add Image")
      }
      .padding()
    }
    .frame(minWidth: 500, minHeight: 500)
  }

  func selectImage() {
    NSOpenPanel.openImage { result in
      guard case let .success(image) = result else { return }
      meme.imageData = image.data
    }
  }
}

Here, you create a similar view to the one you created earlier for iOS. This time, though, you add a separate button to call selectImage. selectImage uses NSOpenPanel to let your user pick an image. If the selection succeeds, you store the new image data in the meme. Finally, add a preview to the bottom of the file:

struct MemeEditor_Previews: PreviewProvider {
  @State static var meme = Meme(
    imageData: nil,
    topText: "Top Text",
    bottomText: "Bottom Text"
  )

  static var previews: some View {
    MemeEditor(meme: $meme)
  }
}

Build and run. (You'll need macOS 11.0 or higher.) This is what the app looks like:
You can create the same meme on macOS:
Without any extra work, the Mac app already has a working menu with shortcuts. For example, you can use Command-N to create a new document and Command-S to save the document, or you can undo your last change with Command-Z. Isn't it amazing how easy it is to create an app that uses documents and runs on both iOS and macOS? :]
Where to Go From Here?
You can download the completed project files by clicking the Download Materials button at the top or bottom of this tutorial. Documents are a central part of many good apps. And now with SwiftUI, it's even easier to build document-based apps for iOS, iPadOS and macOS. If you want to dive deeper into SwiftUI document-based apps, see Apple's Build document-based apps in SwiftUI video. For more information about SwiftUI, check out the SwiftUI: Getting Started tutorial or the SwiftUI by Tutorials book.
To create a document-based UIKit app, you’ll find more information in the Document-Based Apps Tutorial: Getting Started article. We hope you enjoyed this tutorial. If you have any questions or comments, please join the forum discussion below!
https://www.raywenderlich.com/14971928-document-based-apps-using-swiftui
Automating Exchange 2000 Management with Windows Script Host
The purpose of this white paper is to demonstrate scripting techniques for Microsoft® Exchange 2000 Server management, using Windows Script Host (WSH) and Exchange 2000 components. The document presents two advanced WSH scripts based on the new Exchange 2000 COM technologies. These scripts perform automated management tasks in a Microsoft® Windows® 2000 and Exchange 2000 environment.
On This Page
- Introduction
- Scriptable Windows 2000 Components
- Exchange 2000 Components Accessible with Scripts
- Exchange 2000 Architecture Overview
- Logical View of Exchange 2000 COM Components
- Exploring the Exchange 2000 COM Logical View with Scripts
- Advanced Script Samples
- Conclusion
- Appendix
Introduction
System management is important for ensuring system availability. In the early days of networked PCs, managing a system consisted of simple tasks such as making backups and monitoring free disk space. These early systems ran only 10 or 20 users on a local LAN at 1 megabit per second. A single administrator was able to manage a system easily. As technology advances, administrators face increasingly complicated tasks. Current systems must be up and running 24 hours a day, 365 days a year. Many companies provide e-mail systems that are connected to the external world. With the growth of the Internet and e-commerce, administrators are now responsible for thousands of users distributed on different systems around the world. This evolution demands that network operating systems offer more features to address business needs. Microsoft® Exchange 2000 Server addresses some of these needs, but large infrastructures continue to ask for added functionality. Administrators today do most of their administration with automated tasks. Because an administrator is not a developer, automation methods should be easy to use.
Scripting languages such as JavaScript, Microsoft® Visual Basic®, Scripting Edition (VBScript), and Perl are quick and effective ways to write logic for network operating systems. Therefore, system structures should be accessible from the scripting environment. With the release of Microsoft® Windows® 2000 and Exchange 2000, Microsoft has provided an enormous set of new features accessible from the scripting environment. These features enable you to use, create, and extend the basic management functions included in the products. Functions are accessed through scriptable COM objects that are available from any automation language. This white paper illustrates how Exchange 2000 technologies can be helpful for management purposes. It examines existing scriptable COM technologies and provides server-centric samples developed under Windows Script Host (WSH) to help users with Exchange 2000 management tasks.
Note To automate Exchange 2000 with scripts, you should have a working understanding of the following technologies:
- Windows 2000 global architecture
- Microsoft® Active Directory® directory service
- Exchange 2000 global architecture
- VBScript or JavaScript, especially from the WSH environment
- Active Directory Service Interfaces (ADSI)
- Windows Management Instrumentation (WMI)
- Microsoft ActiveX® Data Objects (ADO)
Scriptable Windows 2000 Components
Exchange 2000 is tightly integrated with Windows 2000. Because a Windows 2000 network is the foundation for Exchange 2000, it is important to understand which components of Windows 2000 are involved with Exchange 2000. Windows 2000 Active Directory, Windows Management Instrumentation (WMI), and Internet Information Services (IIS) are some examples of Windows 2000 components used by Exchange 2000. From Windows 2000, three important COM technologies are essential for Exchange 2000 management: Active Directory Service Interfaces (ADSI), WMI, and ActiveX Data Objects (ADO).
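As a brief taste of the WSH environment used throughout this paper, the following minimal VBScript sketch shows the object-binding pattern on which all three technologies rely. It uses only the standard serverless ADSI binding point (RootDSE), so it should work in any Active Directory domain; run it with cscript.exe.

```vbscript
' Minimal WSH (VBScript) sketch: bind to a COM automation object.
' RootDSE is a serverless ADSI binding point available in any
' Active Directory domain; no names need to be adapted.
Option Explicit
Dim objRootDSE

Set objRootDSE = GetObject("LDAP://RootDSE")
WScript.Echo "Default naming context: " & _
             objRootDSE.Get("defaultNamingContext")
```

The same GetObject/CreateObject pattern is used for WMI and ADO bindings in the samples that follow.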
Active Directory Service Interfaces ADSI is a COM scriptable technology that is part of Windows 2000 Professional and Windows 2000 Server. ADSI is an application programming interface (API) into Active Directory that enables applications to access, create, and modify Active Directory objects. For example, ADSI allows an administrator to write a script to create user objects in bulk in Active Directory. Managing Exchange 2000 often requires the use of ADSI because Active Directory is the underlying directory for Exchange 2000. For more information about the ADSI.User object class, see. Windows Management Instrumentation Windows Management Instrumentation (WMI) is the Microsoft implementation of Web-based Enterprise Management (WBEM). WMI provides a uniform way of accessing information on Windows systems. WMI enables systems, applications, networks, and other managed components to be represented using the Common Information Model (CIM). Because Exchange 2000 exposes management information through WMI and relies on the base features available in Windows 2000, WMI is a great way to manage server information such as CPU usage per process, disk space, available memory, Exchange connector state, queue information, Exchange server state, Exchange services, store file sizes, and so on. WMI offers a scripting interface that provides easy access to management information. For more information about WMI and its architecture, see the MSDN Library at. ActiveX Data Objects ADO enables client applications to access and manipulate data from a database server through an OLE DB provider. Under Exchange 5.5, the Information Store is not accessible with ADO because Exchange 5.5 does not have an OLE DB provider. Exchange 2000 implements an OLE DB provider for the Exchange store, also known as the Web Storage System. ADO 2.5 should be used to access Exchange 2000 stores. For more information about ADO and its architecture, see the MSDN Library at. 
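To make the WMI description above concrete, here is a minimal VBScript sketch that retrieves free disk space, one of the server metrics mentioned, through the standard root\cimv2 namespace. It is a sketch only: "." addresses the local machine and should be replaced by a server name for remote management.

```vbscript
' Query free space on local fixed disks through WMI (root\cimv2).
' "." means the local computer; substitute a server name as needed.
Option Explicit
Dim objWMI, colDisks, objDisk

Set objWMI = GetObject("winmgmts:\\.\root\cimv2")
Set colDisks = objWMI.ExecQuery( _
    "SELECT DeviceID, FreeSpace FROM Win32_LogicalDisk " & _
    "WHERE DriveType = 3")

For Each objDisk In colDisks
    WScript.Echo objDisk.DeviceID & " free bytes: " & objDisk.FreeSpace
Next
```

The same ExecQuery pattern applies to the Exchange-specific WMI classes introduced in the next section; only the namespace and class names change.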
Exchange 2000 Components Accessible with Scripts The installation of Exchange 2000 adds features and modifies existing features in the Windows 2000 base operating system. Some of these changes are dedicated to Exchange 2000 management. Active Directory Schema The first Exchange 2000 installation in an organization creates a set of new classes and properties in Microsoft Active Directory to support Exchange 2000-specific objects. This change is realized throughout the organization. After the schema is modified, the Exchange 2000 installation process creates a specific container in the Active Directory Configuration Naming Context to hold the configuration data of the Exchange organization. For more information about Active Directory and Active Directory schema, see the MSDN Library at. Exchange WMI Providers Exchange 2000 adds three new WMI providers to enhance Exchange manageability. These providers create a new namespace in the WMI Common Information Model (CIM) repository and add five new WMI classes. Each class is related to a different part of Exchange 2000. The WMI Exchange Routing Table provider creates the ExchangeServerState and ExchangeConnectorState classes. The WMI Exchange Queue provider creates the ExchangeLink and ExchangeQueue classes. The WMI Exchange Cluster provider creates the ExchangeClusterResource class. Two more WMI providers are added with Exchange 2000 Server Service Pack 2. The WMI Exchange Message Tracking provider creates the Exchange_MessageTrackingEntry class. The WMI Exchange DS Access provider creates the Exchange_DSAccessDC class. For more information about the WMI Exchange providers, see "An Overview of Exchange 2000 WMI Providers" in the Appendix. CDO for Exchange 2000 CDO for Exchange 2000 (CDOEX) is a new version of CDO built to use specific features related to Exchange 2000, such as Internet Standards and the Exchange store (through the Exchange OLE DB provider). 
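The Exchange WMI classes listed above can be queried with the same scripting pattern as any other WMI class. The sketch below enumerates ExchangeServerState instances from the namespace the Exchange providers register (root\cimv2\applications\exchange); treat the property names as an illustration to verify against the provider documentation for your service-pack level.

```vbscript
' List Exchange server states via the Exchange Routing Table
' WMI provider (namespace registered by Exchange 2000 setup).
Option Explicit
Dim objWMI, colServers, objServer

Set objWMI = GetObject( _
    "winmgmts:\\.\root\cimv2\applications\exchange")
Set colServers = objWMI.InstancesOf("ExchangeServerState")

For Each objServer In colServers
    WScript.Echo objServer.Name & " - state: " & _
                 objServer.ServerStateString
Next
```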
CDOEX relies only on the use of Internet standard protocols and does not use MAPI. When Exchange 2000 is installed, CDO for Windows 2000 (CDOSYS) is upgraded to CDOEX. CDO for Exchange 2000 is a superset of CDO for Windows 2000. The difference is that CDOEX brings additional functionality related to Exchange 2000. For more information about CDOSYS and CDOEX, see the MSDN Library at.

CDO for Exchange Management

CDO for Exchange Management (CDOEXM) provides objects and interfaces for the management of many Exchange 2000 components. For example, CDOEXM can configure Exchange servers and stores, mount and dismount stores, and create and configure mailboxes. This document examines the various capabilities of CDOEXM. CDOEXM acts as an extension for ADSI and CDOEX to make it easier to create and access mailbox definitions stored in Active Directory and in the stores. At the server level, CDOEXM allows you to retrieve specific information about the server itself, as well as about storage groups and their contents. CDOEXM is an important companion to ADSI and CDOEX when working with Exchange 2000. For more information about CDOEXM, see "Exchange 2000 Architecture Overview," later in this document.

Exchange OLE DB Provider

OLE DB allows applications to uniformly access data stored in diverse information sources. It supports the amount of Database Management System (DBMS) functionality appropriate to the data source, enabling it to share its data. Exchange 2000 supports OLE DB for Documents interfaces. OLE DB for Documents is a collection of OLE DB interfaces that allow applications to traverse folders and documents, using Uniform Resource Locators (URLs). OLE DB for Documents is the preferred method for applications to access the Exchange store. There are two ways in which Exchange 2000 provides OLE DB 2.5 support.
The first is by using Microsoft Internet Publishing Provider (MSDAIPP), which accesses Exchange 2000 Server using World Wide Web Distributed Authoring and Versioning (WebDAV). The second way in which Exchange 2000 provides OLE DB 2.5 support is by having a native OLE DB 2.5 provider for the Exchange store. The primary purpose of this provider is to achieve better performance than is possible over WebDAV. This provider accesses the Exchange store directly (through COM) rather than making a round trip through WebDAV. Office 2000, CDO for Exchange 2000, and CDO for Exchange Management use the OLE DB COM provider. For more information about the Exchange OLE DB provider and the Exchange store, see the MSDN Library at.

Installable File System

The Exchange store can also be accessed through the Installable File System (IFS). The Exchange store exists as a file system folder on drive M of the Exchange 2000 server. Drive M can be shared like any other drive. The default names for the first public and mailbox stores are "Public Folders" and "MBX", respectively. There is no COM abstraction mechanism to access properties for an item in IFS directly. To access item properties with a COM abstraction technique, use the WebDAV protocol, the Exchange OLE DB (ExOLEDB) provider with ADO 2.5, CDO for Exchange 2000 (CDOEX), or the Messaging API (MAPI). For more information about NTFS and the Installable File System, see the Exchange SDK at.

Exchange 2000 Architecture Overview

Exchange 2000 offers various technologies to access its components. Figure 1 represents a simplified view of the interaction and relationships between these technologies. There are two methods of client/server access to the Exchange store. For application creation and management, the Exchange store can be accessed with CDO 1.21 through MAPI. This method provides backward compatibility.
The second method uses WebDAV and is based on Internet standard protocols as opposed to MAPI, which uses Remote Procedure Calls (RPC). WebDAV is the recommended method.

Note
Figure 1 does not include WMI components because there is no direct interaction between WMI and the Exchange store. See "An Overview of Exchange 2000 WMI Providers" in the Appendix for more information about Exchange 2000 WMI providers. IFS is not represented because it is a file system component of the Exchange store, rather than a COM component.

If the application is running locally on an Exchange 2000 server, the Exchange store can also be accessed through the Exchange OLE DB (ExOLEDB) provider. Another way to access the Exchange store and abstract the Exchange OLE DB provider is to use ADO 2.5. With the help of the ExOLEDB provider, ADO 2.5 offers a navigation model that allows exploration of the Exchange store. This is the easiest COM technology to use to explore mailboxes and public folder hierarchies. ADO 2.5 implements features such as:

- The GetChildren method to return a RecordSet of items in a folder (see Sample 21).
- The MoveRecord and CopyRecord methods to allow move and copy operations in a folder tree.

ADO 2.5 does not allow you to examine the content of an item. CDOEX, however, implements the business logic of collaboration-related items such as e-mail messages, folders, calendaring items, contacts, and so on. ADO and CDOEX are designed to work together. ADO explores the folder tree, and CDOEX manipulates item content. On top of CDOEX, Exchange 2000 implements CDO for Exchange Management (CDOEXM). CDOEXM is a COM technology to facilitate management tasks. CDOEXM uses objects instantiated with ADSI and CDOEX to manage operations between the stores and Active Directory. There is an important distinction between the purpose of CDOEXM and the purpose of CDOEX. CDOEXM is made for the management of Exchange 2000 component containers: servers, stores, and mailboxes.
CDOEX manipulates the content of these containers: messages, contacts, calendar items, and so on. The admin logic layer implements business logic related to administration so that CDOEXM and the Exchange System Manager (ESM) user interface exhibit consistent behavior. The admin logic layer is not accessible from the application level.

How Is CDOEX Related to ADO?

CDOEX provides a convenient object model for managing folders, messages, appointments, contacts, and other items, as well as the properties of those items. CDOEX accesses data through OLE DB 2.5 interfaces and is specifically designed to operate with the Exchange store. CDOEX integrates with ADO 2.5 to provide a consistent data-access interface to the Exchange store and Active Directory. When assigning values to CDOEX properties, CDOEX saves the data to the correct location (either the Exchange store or Active Directory). CDOEX objects such as Folder, Person, and many others provide a Fields collection on the default interface, allowing direct access to the properties of items in the Exchange store. See the Exchange SDK for more information about the Fields collection.

CDOEX properties provide essential functionality for collaborative applications using the Exchange store. There are situations, however, when using ADO (see Sample 21) or OLE DB may be more appropriate. When an item is accessed using an ADO Record object, all of its data, including the stream, is presented in a Fields collection. The stream, which is defined in the ADO 2.5 specification, is accessed using the ADO Stream object from the GetStream method or from within the Fields collection (see Sample 6 to explore the content of the stores and associated Fields collection). However, ADO assumes no correlation between the item properties and the default stream.

How Is CDOEXM Related to ADSI and CDOEX?

CDOEX and ADSI objects are designed to work in tandem.
You can use CDOEX or ADSI objects to retrieve information about users and their Exchange 2000 mailboxes (see Sample 11 with ADSI and Sample 13 with CDOEX). You can use the same logic to create or delete users and their mailboxes, and to manage user and contact information in Active Directory. Configuration information resides primarily in Active Directory. ADSI is a generic API into Active Directory that has no functions specifically for managing Exchange data in Active Directory. ADSI cannot access data in the Exchange store. CDOEXM encapsulates and simplifies Exchange 2000 management tasks so that data in Active Directory and resources in the Exchange store can be managed. CDOEXM relies on the admin logic layer, which contains business logic for Exchange 2000 management. There is usually no need to work at the granular level of Active Directory using ADSI unless CDOEXM does not implement logic for a particular management task. In the same way, CDOEX has no specific function for managing Active Directory or the Exchange data contained in Active Directory. CDOEX uses ADSI and CDOEXM to perform tasks in both Active Directory and the Exchange store. The following paragraph provides an example.

To create a user with an associated mailbox on an Exchange 2000 server, an ADSI.User object must first be instantiated to create the user in Active Directory. The script then uses CDOEXM to create the user's mailbox in the Exchange store (see Sample 10 and Sample 11). Existing mailboxes are retrieved with the same process. With CDOEX, the logic is exactly the same (see Sample 12 and Sample 13), but in this case a CDO.Person object is instantiated. When CDOEXM creates an Exchange mailbox (or retrieves mailbox information), it automatically sets (or gets) the properties in the Exchange store to associate the mailbox with the user in Active Directory. CDOEXM acts as an extension to ADSI and CDOEX.
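The two-step process just described (ADSI to create the user, CDOEXM to create the mailbox) can be sketched as follows. This is a simplified illustration, not the actual Sample 10/11 code; every distinguished name, server name, and account name in it is a hypothetical example:

```vbscript
' Hedged sketch of the two-step process described above:
' Step 1 creates the user with ADSI; Step 2 creates the
' mailbox with CDOEXM. All names below are hypothetical.
Option Explicit

Dim objContainer, objUser, objMailbox

' Step 1: create the user in Active Directory with ADSI.
Set objContainer = GetObject("LDAP://CN=Users,DC=example,DC=com")
Set objUser = objContainer.Create("user", "CN=Jane Doe")
objUser.Put "sAMAccountName", "janedoe"
objUser.SetInfo

' Step 2: use the CDOEXM IMailboxStore interface exposed on the
' ADSI user object to create the mailbox in a mailbox store.
Set objMailbox = objUser   ' CDOEXM extends the ADSI object.
objMailbox.CreateMailbox _
    "LDAP://CN=Mailbox Store (SERVER01),CN=First Storage Group," & _
    "CN=InformationStore,CN=SERVER01,CN=Servers," & _
    "CN=First Administrative Group,CN=Administrative Groups," & _
    "CN=First Organization,CN=Microsoft Exchange,CN=Services," & _
    "CN=Configuration,DC=example,DC=com"
objUser.SetInfo   ' Commit the mailbox attributes to Active Directory.
```

Note how the same objUser object serves both steps: assigning it to objMailbox does not copy anything, it simply lets the script call the CDOEXM mailbox methods that extend the ADSI user object.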
Logical View of Exchange 2000 COM Components

This section presents a logical view of the various COM technologies used to retrieve and manage information in Exchange 2000. The logical structure of Exchange 2000 is hierarchical. The server, which is at the top of the hierarchy, contains storage groups. Storage groups contain mailbox stores and public stores. Mailbox stores contain mailboxes, and mailboxes contain folders. Public stores contain folders as well. Mailbox and public folders hold various types of objects (documents, schedule information, messages, and so on). These objects can be further divided into parts. For example, body text and attachments are parts of an e-mail message. Exchange 2000 uses a specific technology to access data at each level. To manage Exchange 2000 with scripts, you should understand which COM technology to use to access each kind of data.

Figure 3 maps the Exchange 2000 object hierarchy with COM technologies and their uses. An Exchange 2000 server can have several storage groups, each of which can contain many mailbox and public stores. For the sake of simplicity, Figure 3 shows only one storage group, one mailbox store, and one public store. On the left and right sides, large vertical arrows show the paths from the organization level to the message level of the hierarchy. Each arrow represents an access method: ADSI is on the left with steps marked with numbers, and CDOEX is on the right with steps marked with letters. The smaller numbered and lettered arrows connect each Exchange 2000 component with the technology or technologies (WMI, CDOEX, ADSI, and so on) used to access it.

You can retrieve information from each COM component at each level of this logical view. Later in this document, the EnumE2KinXL.wsf script is presented piece by piece to show you how to do this. Sample 6 through Sample 24 constitute the complete EnumE2KinXL.wsf script. The next section explores the Exchange 2000 object hierarchy using sample scripts.
The purpose of the section is to increase your understanding of Exchange 2000 COM technologies and their interactions. COM objects with Exchange 2000 management capabilities are marked and summarized as "Management Points."

Exploring the Exchange 2000 COM Logical View with Scripts

Retrieving the List of Exchange 2000 Servers in the Organization

To manage systems effectively, administrators need constantly updated information. For Exchange 2000 administrators, this might mean retrieving a list of Exchange servers in the organization and information about those servers. One way to do this is with an ADSI query in the Active Directory Configuration Naming Context, but ADSI does not contain business logic specifically related to Exchange 2000 servers. With ADSI, a script must retrieve server information property by property from Active Directory. This presents a problem: How does the script know which properties to read and how to interpret their values? Another problem is that ADSI doesn't provide real-time status or "health" information about the Exchange 2000 server. (That is, is it up or down?) Retrieving information with WMI solves these problems.

Using WMI to Retrieve Exchange 2000 Server States

This section refers to points 1 and A in Figure 3. Exchange 2000 Setup adds three providers to the WMI Common Information Model (CIM) Repository. The first is the WMI Exchange Routing Table provider, which defines the following WMI classes:

- ExchangeServerState: This class retrieves information about the status of relevant services, including memory used, processor type, disk space available, and the mail queue of Exchange 2000 servers in an organization. The ExchangeServerState class works in conjunction with the monitoring configuration defined by the Exchange System Manager (ESM).
- ExchangeConnectorState: This WMI class retrieves information about the Exchange 2000 mail connectors.

Both classes retrieve their information from the routing table.
This information is available to administrators on any Exchange 2000 server in an organization. For more information about the Exchange Routing Table provider, see "The Exchange 2000 Routing Table Provider" in the Appendix.

The second WMI provider is the Exchange Queue provider, which defines the following WMI classes:

- ExchangeQueue: This class retrieves information through the queue API about the existing queues on an Exchange 2000 server.
- ExchangeLink: This class retrieves information through the queue API about the existing links on an Exchange 2000 server.

For more information about the Exchange Queue provider, see "The Exchange 2000 WMI Queue Provider."

Finally, Exchange 2000 adds the WMI Exchange Cluster provider, which defines the following WMI class:

- ExchangeClusterResource: This class retrieves information about Exchange 2000 clustered resources through the cluster API.

For more information about the ExchangeClusterResource provider, see "The Exchange 2000 WMI Cluster Provider" in the Appendix.

The script samples in the next section demonstrate how to use these three WMI providers from WSH. The first sample uses the ExchangeServerState WMI class and will be reused later as the basis for a more complex script (EnumE2KinXL.wsf).

Using ExchangeServerState from Windows Script Host

Sample 1 shows how the ExchangeServerState WMI class can be used from WSH. Lines 11 to 13 define three constants to be used in the WMI moniker (lines 24 to 26). The cComputerName constant is the Exchange 2000 server to be contacted. The cWMINameSpace constant is the Exchange 2000 WMI namespace. The cWMIInstance constant provides the name of the class to be instantiated at lines 24 to 26, in this case ExchangeServerState.

After the instantiation is complete, the returned object is a collection of the Exchange servers known to the server specified by cComputerName. The "For Each" loop (lines 28 to 59) enumerates all the servers in this collection.
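The full Sample 1 listing is not reproduced here. A minimal script following the same pattern, and the same moniker constants, might look like this sketch, which echoes just two of the properties discussed in the text:

```vbscript
' Hedged sketch patterned on Samples 2 through 5: enumerate the
' ExchangeServerState instances and print a couple of properties.
Option Explicit

Const cComputerName = "LocalHost"
Const cWMINameSpace = "root/cimv2/applications/exchange"
Const cWMIInstance  = "ExchangeServerState"

Dim ExchangeServerList, ExchangeServer

' Build the WMI moniker from the three constants and ask for
' all instances of the ExchangeServerState class.
Set ExchangeServerList = _
    GetObject("winmgmts:{impersonationLevel=impersonate}!//" & _
        cComputerName & "/" & cWMINameSpace).InstancesOf(cWMIInstance)

' Each instance represents one Exchange 2000 server in the
' organization, as seen from cComputerName's routing table.
For Each ExchangeServer In ExchangeServerList
    WScript.Echo "Name: " & ExchangeServer.Name
    WScript.Echo "ServicesState: " & ExchangeServer.ServicesState
Next
```

The property set shown here is deliberately short; Sample 1 itself reads the full list of routing table properties (lines 30 to 58).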
For each server found, a set of properties and their states is retrieved from the Exchange 2000 routing table (lines 30 to 58) with the help of the Exchange Routing Table WMI provider. The routing table includes data about all of the servers and connectors in the organization. The ExchangeServerState class reads this data.

Using ExchangeConnectorState from WSH

To retrieve connector states with the ExchangeConnectorState class, substitute the ExchangeConnectorState class name in line 13 (see Sample 2). This script retrieves an organization-wide view of all the connectors and their states, as seen by the cComputerName used to instantiate the class.

Sample 2 Using the ExchangeConnectorState WMI class

 1:' VBScript script listing all ExchangeConnectorState names and properties
 2:' available with the WMI Exchange 2000 provider.
 .:
 9:Option Explicit
10:
11:Const cComputerName = "LocalHost"
12:Const cWMINameSpace = "root/cimv2/applications/exchange"
13:Const cWMIInstance = "ExchangeConnectorState"
..:
..:
24:Set ExchangeConnectorList = _
25:   GetObject("winmgmts:{impersonationLevel=impersonate}!//" & _
26:   cComputerName & "/" & cWMINameSpace).InstancesOf(cWMIInstance)
27:
28:For each ExchangeConnector in ExchangeConnectorList
29:   WScript.Echo "---------------------------------------------"
30:   WScript.Echo "DN: " & ExchangeConnector.Dn
31:   WScript.Echo "Name: " & ExchangeConnector.Name
32:   WScript.Echo "GUID: " & ExchangeConnector.Guid
33:   WScript.Echo "GroupDN: " & ExchangeConnector.GroupDN
34:   WScript.Echo "IsUP: " & ExchangeConnector.IsUP
35:Next
..:
..:
..:

Using ExchangeLink from WSH

The logic for using the ExchangeLink class is the same as that in Sample 2. In this case, line 13 instantiates the ExchangeLink class. Instances of this class represent the links (objects under the queues container in ESM) that exist on the specified cComputerName (line 11) only.
This WMI class does not work with the routing table and therefore does not offer an organization-wide view of the links.

Sample 3 Using the ExchangeLink WMI class

 1:' VBScript script listing all ExchangeLink names and properties
 2:' available with the WMI Exchange 2000 provider.
 .:
 9:Option Explicit
10:
11:Const cComputerName = "LocalHost"
12:Const cWMINameSpace = "root/cimv2/applications/exchange"
13:Const cWMIInstance = "ExchangeLink"
..:
..:
24:Set ExchangeLinkList = _
25:   GetObject("winmgmts:{impersonationLevel=impersonate}!//" & _
26:   cComputerName & "/" & cWMINameSpace).InstancesOf(cWMIInstance)
27:
28:For each ExchangeLink in ExchangeLinkList
29:   WScript.Echo "---------------------------------------------"
30:   WScript.Echo "LinkName: " & ExchangeLink.LinkName
31:   WScript.Echo " ProtocolName: " & ExchangeLink.ProtocolName
32:   WScript.Echo " VirtualServerName: " & ExchangeLink.VirtualServerName
33:   WScript.Echo " VirtualMachine: " & ExchangeLink.VirtualMachine
34:   WScript.Echo " Version: " & ExchangeLink.Version
35:   WScript.Echo " NumberOfMessages: " & ExchangeLink.NumberOfMessages
36:   WScript.Echo " NextScheduledConnection: " & _
         ExchangeLink.NextScheduledConnection
37:   WScript.Echo " OldestMessage: " & ExchangeLink.OldestMessage
38:   WScript.Echo " SizeOfQueue: " & ExchangeLink.SizeOfQueue
39:   WScript.Echo " LinkDN: " & ExchangeLink.LinkDN
40:   WScript.Echo " ExtendedStateInfo: " & ExchangeLink.ExtendedStateInfo
41:   WScript.Echo " IncreasingTime: " & ExchangeLink.IncreasingTime
42:   WScript.Echo " StateFlags: 0x" & Hex(ExchangeLink.StateFlags)
43:   WScript.Echo " StateActive: " & ExchangeLink.StateActive
44:   WScript.Echo " StateReady: " & ExchangeLink.StateReady
45:   WScript.Echo " StateRetry: " & ExchangeLink.StateRetry
46:   WScript.Echo " StateScheduled: " & ExchangeLink.StateScheduled
47:   WScript.Echo " StateRemote: " & ExchangeLink.StateRemote
48:   WScript.Echo " StateFrozen: " & ExchangeLink.StateFrozen
49:   WScript.Echo " TypeRemoteDelivery: " & _
         ExchangeLink.TypeRemoteDelivery
50:   WScript.Echo " TypeLocalDelivery: " & ExchangeLink.TypeLocalDelivery
51:   WScript.Echo " TypePendingRouting: " & _
         ExchangeLink.TypePendingRouting
52:   WScript.Echo " TypePendingCategorization: " & _
53:      ExchangeLink.TypePendingCategorization
54:   WScript.Echo " TypeCurrentlyUnreachable: " & _
         ExchangeLink.TypeCurrentlyUnreachable
55:   WScript.Echo " TypeDeferredDelivery: " & _
         ExchangeLink.TypeDeferredDelivery
56:   WScript.Echo " TypeInternal: " & ExchangeLink.TypeInternal
57:   WScript.Echo " SupportedLinkActions: " & _
         ExchangeLink.SupportedLinkActions
58:   WScript.Echo " ActionKick: " & ExchangeLink.ActionKick
59:   WScript.Echo " ActionFreeze: " & ExchangeLink.ActionFreeze
60:   WScript.Echo " ActionThaw: " & ExchangeLink.ActionThaw
61:Next
..:
..:
..:

Using ExchangeQueue from WSH

The logic here is the same as in the previous samples. In Sample 4, the class to be instantiated at line 13 is ExchangeQueue. Note that this script only retrieves the instances of the queues on the specified cComputerName (line 11). Like ExchangeLink, the ExchangeQueue class does not work with the routing table.

Sample 4 Using the ExchangeQueue WMI class

 1:' VBScript script listing all ExchangeQueue names and properties
 2:' available with the WMI Exchange 2000 provider.
 .:
 9:Option Explicit
10:
11:Const cComputerName = "LocalHost"
12:Const cWMINameSpace = "root/cimv2/applications/exchange"
13:Const cWMIInstance = "ExchangeQueue"
..:
..:
24:Set ExchangeServerQueueList = _
25:   GetObject("winmgmts:{impersonationLevel=impersonate}!//" & _
26:   cComputerName & "/" & cWMINameSpace).InstancesOf(cWMIInstance)
27:
28:For each ExchangeServerQueue in ExchangeServerQueueList
29:   WScript.Echo "---------------------------------------------"
30:   WScript.Echo "QueueName: " & ExchangeServerQueue.QueueName
31:   WScript.Echo " ProtocolName: " & ExchangeServerQueue.ProtocolName
32:   WScript.Echo " VirtualServerName: " & _
         ExchangeServerQueue.VirtualServerName
33:   WScript.Echo " QueueName: " & ExchangeServerQueue.QueueName
34:   WScript.Echo " VirtualMachine: " & _
         ExchangeServerQueue.VirtualMachine
35:   WScript.Echo " Version: " & ExchangeServerQueue.Version
36:   WScript.Echo " NumberOfMessages: " & _
         ExchangeServerQueue.NumberOfMessages
37:   WScript.Echo " SizeOfQueue: " & ExchangeServerQueue.SizeOfQueue
38:   WScript.Echo " IncreasingTime: " & _
         ExchangeServerQueue.IncreasingTime
39:   WScript.Echo " MsgEnumFlagsSupported: 0x" & _
40:      Hex(ExchangeServerQueue.MsgEnumFlagsSupported)
41:   WScript.Echo " GlobalStop: " & ExchangeServerQueue.GlobalStop
42:   WScript.Echo " CanEnumFirstNMessages: " & _
43:      ExchangeServerQueue.CanEnumFirstNMessages
44:   WScript.Echo " CanEnumSender: " & ExchangeServerQueue.CanEnumSender
45:   WScript.Echo " CanEnumRecipient: " & _
         ExchangeServerQueue.CanEnumRecipient
46:   WScript.Echo " CanEnumLargerThan: " & _
         ExchangeServerQueue.CanEnumLargerThan
47:   WScript.Echo " CanEnumOlderThan: " & _
         ExchangeServerQueue.CanEnumOlderThan
48:   WScript.Echo " CanEnumFrozen: " & ExchangeServerQueue.CanEnumFrozen
49:   WScript.Echo " CanEnumNLargestMessages: " & _
50:      ExchangeServerQueue.CanEnumNLargestMessages
51:   WScript.Echo " CanEnumNOldestMessages: " & _
52:      ExchangeServerQueue.CanEnumNOldestMessages
53:   WScript.Echo " CanEnumFailed: " & _
         ExchangeServerQueue.CanEnumFailed
54:   WScript.Echo " CanEnumAll: " & ExchangeServerQueue.CanEnumAll
55:   WScript.Echo " CanEnumInvertSense: " & _
         ExchangeServerQueue.CanEnumInvertSense
56:Next
..:
..:
..:

Using ExchangeClusterResource from WSH

The ExchangeClusterResource class retrieves the status of a clustered Exchange resource such as a virtual server name. Except for this point and the list of properties available from this class, the script logic is exactly the same as in the previous samples. Like ExchangeLink and ExchangeQueue, this WMI class does not work with the routing table and therefore does not offer an organization-wide view of the clustered resources.

Sample 5 Using the ExchangeClusterResource WMI class

 1:' VBScript script listing all ExchangeClusterResource names and properties
 2:' available with the WMI Exchange 2000 provider.
 .:
 9:Option Explicit
10:
11:Const cComputerName = "LocalHost"
12:Const cWMINameSpace = "root/cimv2/applications/exchange"
13:Const cWMIInstance = "ExchangeClusterResource"
..:
..:
24:Set ExchangeServerClusterList = _
25:   GetObject("winmgmts:{impersonationLevel=impersonate}!//" & _
26:   cComputerName & "/" & _
27:   cWMINameSpace).InstancesOf(cWMIInstance)
28:
29:For each ExchangeServerCluster in ExchangeServerClusterList
30:   WScript.Echo "---------------------------------------------"
31:   WScript.Echo "VirtualMachine: " & ExchangeServerCluster.VirtualMachine
32:   WScript.Echo "Name: " & ExchangeServerCluster.Name
33:   WScript.Echo "Type: " & ExchangeServerCluster.Type
34:   WScript.Echo "State: " & ExchangeServerCluster.State
35:Next
..:
..:
..:

A Script to Explore the Exchange 2000 Logical View

Samples 1 through 5 show how to access the Exchange 2000 WMI classes. Sample 1 can be used as a base to build a more complex script that retrieves the complete set of information shown in Figure 3. The EnumE2KinXL.wsf script uses Exchange 2000 COM components to load the information into a Microsoft® Excel spreadsheet.
This script acts as a "browser" or an "explorer" for Exchange 2000. Each COM component provides its own set of properties and methods to manage Exchange 2000. This script illustrates how to combine the technologies and how to reuse a set of information retrieved from one COM object type with another COM object type. The EnumE2KinXL.wsf script is the basis of all subsequent scripts because it provides a detailed example of how to use scripting for Exchange 2000 management. It shows how to access a specific Exchange 2000 component, what features the component offers, and how to complete each step as shown in Figure 3. Each step has a corresponding sample (Sample 7 through Sample 24).

Sample 6 Loading the object hierarchy

  1:<!-- VBScript script loading the complete Exchange 2000 object -->
  2:<!-- hierarchy into an Excel sheet using WMI, CDOEXM, ADSI, -->
  3:<!-- CDO, and ADO (with values of different properties). -->
 ..:
 10:<job>
 11: <script language="VBScript" src="GetMSExchangeOrgFunction.vbs" />
 12:
 13: ' Here you can use CDOEX or ADSI to enumerate mailbox properties.
 14: <script language="VBScript" _
 15:
 16:<!-- <script language="VBScript" _
 17: -->
 18:
 19: <script language="VBScript" _
 20:
 21: <script language="VBScript" _
 22:
 23: <script language="VBScript" _
 24:
 25: <script language="VBScript" _
 26:
 27:
 28: <script language="VBScript">
 29:
 30: Option Explicit
 31:
 32: Const cEnumMaiboxTree = True
 33: Const cEnumFolderTree = True
 34: Const cEnumFieldsCollection = False
 35: Const cEnumMessageContent = False
 36:
 37:
 38: Const cComputerName = "LocalHost"
 39: Const cWMINameSpace = "/root/cimv2/applications/exchange"
 40: Const cWMIInstance = "ExchangeServerState"
 ..:
 ..:
 69: ' ----------------------------------------------------------------
 70: ' This script must be run locally to the Exchange server.
 71: Set WNetwork = Wscript.CreateObject("Wscript.Network")
 72: strCurrentComputerName = WNetwork.ComputerName
 73: strCurrentUserName = WNetwork.UserName
 74: Wscript.DisconnectObject (WNetwork)
 75: Set WNetwork = Nothing
 76:
 77: ' ----------------------------------------------------------------
 78: ' Getting the current default domain (DN and FQDN).
 79:
 80: Set objRoot = GetObject("LDAP://RootDSE")
 81: strDefaultDomainNC = objRoot.Get("DefaultNamingContext")
 82: Set objRoot = Nothing
 83:
 84: ' Bind to the root domain to get its canonical name for UPN.
 85: Set objDefaultDomainNC = GetObject("LDAP://" & strDefaultDomainNC)
 86:
 87: ' Retrieve a constructed property.
 88: ' First do a GetInfoEx (for UPN construction).
 89: objDefaultDomainNC.GetInfoEx Array("canonicalName"), 0
 90: strCanonicalNameDefaultDomain = _
       objDefaultDomainNC.Get("canonicalName")
 91:
 92: Set objDefaultDomainNC = Nothing
 ..:
 ..:
 97: ' Bind to an Excel worksheet object.
 98: Set objXL = WScript.CreateObject("EXCEL.application")
 99:
100: ' Make it visible.
101: objXL.Visible = True
102:
103: ' Open Excel and start an empty workbook.
104: objXL.workbooks.Add
105:
106: ' Put the pointer on the A1 cell.
107: objXL.ActiveSheet.range("A1").Activate
...:
...:
112: ' ----------------------------------------------------------------
113: ' Start at the higher object level, the Exchange server itself.
114:
115: Set objWMIExchangeServers = _
116:    GetObject("winmgmts:{impersonationLevel=impersonate}!//" & _
117:    cComputerName & _
118:    cWMINameSpace).InstancesOf(cWMIInstance)
119:
120: EnumExchangeServers objWMIExchangeServers
121:
122: ' Close the workbook. This will prompt the user to choose
123: ' where to save the XLS.
124: objXL.workbooks.close
125: objXL.Quit
126: WScript.DisconnectObject objXL
127: Set objXL = Nothing
128:
129: WScript.Quit (0)
...:
...:
...:

Sample 6 must be run on an Exchange 2000 server with Excel 2000 installed. The script uses Excel to review and display the collected data.
For the sake of simplicity, you should run the script in an Exchange 2000 test environment (this can be a single server). In a real business situation, the script would retrieve too much data.

The script begins (lines 11 to 26) with various functions. These functions are explained in the following pages of this document. Each has a specific purpose related to the COM technology that is used.

Note
By default, the script uses CDOEX to access user information. As explained in the "How Is CDOEXM Related to ADSI and CDOEX?" section on page 7, ADSI can be used instead of CDOEX. This is why lines 16 and 17 are commented out. These lines show how to use ADSI to retrieve user information.

Lines 32 to 35 define some constants to limit the amount of information. Each of these contains a Boolean value.

- cEnumMaiboxTree: When True, the script will explore the mailbox folder hierarchy. To minimize the amount of data collected, and for security reasons, the only mailbox accessed is the mailbox of the user who is currently logged on.
- cEnumFolderTree: When True, the script will explore the entire public folder hierarchy. Set this constant to True with caution; if you have a large number of public folders it can take a lot of resources. Again, it's strongly recommended that you run Sample 6 in a test environment.
- cEnumFieldsCollection: When True, the script will enumerate the CDO Fields collection associated with the CDOEX object. This constant is used in every function retrieving an ADO Fields collection from an object.
- cEnumMessageContent: When cEnumMaiboxTree or cEnumFolderTree is True and a folder contains messages, cEnumMessageContent examines the message properties (header, body parts, any attachments, and so on).

Lines 38 to 40 of Sample 6 define constants from Sample 1. In lines 69 to 92, the script retrieves information about the domain name and the computer name. Lines 97 to 107 initialize the Excel 2000 COM objects to load the retrieved data into a new Excel spreadsheet.
After the basic steps are complete, the script explores the Exchange 2000 server using the different COM technologies. In lines 115 to 118, the script instantiates the ExchangeServerState WMI class. The returned result is a collection of the Exchange 2000 servers in the organization. This collection is passed to the EnumExchangeServers function (line 120). This is where the Exchange 2000 COM exploration with scripts really begins.

Getting Information About the Exchange 2000 Server with CDOEXM

The following sections explain how to retrieve information about servers, storage groups, and mailbox stores using CDOEXM.

Retrieving More Exchange 2000 Server Information with CDOEXM

This section refers to points 1, 2, A, and B in Figure 3. In lines 140 to 177 of Sample 7 below, the EnumExchangeServers function uses the same Windows Management Instrumentation (WMI) information retrieved at line 120 in Sample 6. What's interesting here is the way the information is displayed. The DisplayText function loads the information into an Excel spreadsheet (lines 140 to 177). Note that CDO for Exchange Management (CDOEXM) is invoked at lines 137 and 181–209.

Sample 7 Getting more Exchange 2000 server information with CDOEXM

...:
...:
132:Private Function EnumExchangeServers (objWMIExchangeServers)
...:
...:
137:   Set objCDOEXMExchangeServer = CreateObject("CDOEXM.ExchangeServer")
138:
139:   For each objWMIExchangeServer in objWMIExchangeServers
140:      DisplayText "Name(WMI)", objWMIExchangeServer.Name
...:
...:
176:      DisplayText "ServicesState(WMI)", _
177:         objWMIExchangeServer.ServicesState
178:
179:      ' Switch to CDOEXM to get CDOEXM information
180:      ' by using the WMI server name.
181:      objCDOEXMExchangeServer.DataSource.Open(objWMIExchangeServer.Name)
...:
...:
188:      DisplayText "ExchangeVersion(CDOEXM)", _
189:         objCDOEXMExchangeServer.ExchangeVersion
190:      DisplayText "SubjectLoggingEnabled(CDOEXM)", _
191:         objCDOEXMExchangeServer.SubjectLoggingEnabled
192:      DisplayText "MessageTrackingEnabled(CDOEXM)", _
193:         objCDOEXMExchangeServer.MessageTrackingEnabled
194:      DisplayText "DaysBeforeLogFileRemoval(CDOEXM)", _
195:         objCDOEXMExchangeServer.DaysBeforeLogFileRemoval
196:      DisplayText "Name(CDOEXM)", objCDOEXMExchangeServer.Name
197:      DisplayText "ServerType(CDOEXM)", _
198:         objCDOEXMExchangeServer.ServerType
199:      DisplayText "DirectoryServer(CDOEXM)", _
200:         objCDOEXMExchangeServer.DirectoryServer
201:
202:      If cEnumFieldsCollection Then _
203:         EnumFieldsCollection objCDOEXMExchangeServer.Fields
204:
205:      ' Enumerate the Storage Group of the current server
206:      ' (where the script is running).
207:      If strCurrentComputerName = objCDOEXMExchangeServer.Name Then
208:         EnumStorageGroups objCDOEXMExchangeServer
209:      End If
...:
...:
213:   Next
...:
...:
218:End Function

CDOEXM comes with a set of new objects and interfaces related to the Exchange 2000 server. The object used in Sample 7 is the CDOEXM.ExchangeServer object, instantiated at line 137. With this object, the script can retrieve CDOEXM information by opening a data source with the Exchange server name retrieved from WMI (line 181).

Table 1 The CDOEXM object for the Exchange 2000 server

Lines 202 and 203 retrieve the ActiveX Data Objects (ADO) Fields collection associated with the Exchange 2000 server. See Sample 24 for the EnumFieldsCollection function.

Management Point 1 Retrieving server information from WMI and CDOEXM

At this stage of the script:

- You have a complete list of the Exchange 2000 servers in the organization, with some details (version, server type, and so on).
- You know the status of each Exchange 2000 server in the organization.
You can read and configure logging and message tracking for each Exchange server in the organization. You know whether each server is a front-end (FE) or back-end (BE) server. You have a pointer to get the configuration of the storage groups for each Exchange server. See points 1, 2, A, and B in Figure 3 to locate this WMI and CDOEXM operation in the Exchange 2000 logical view. Retrieving Exchange 2000 Storage Group Information with CDOEXM This section refers to points 3 and C in Figure 3. Line 208 of Sample 7 calls the EnumStorageGroups function with the CDOEXM.ExchangeServer object as parameter. This function (see Sample 8) retrieves the list of storage groups configured on this server. Note the condition at lines 207–209 in Sample 7. The "If...Then" structure restricts the exploration to the local Exchange server because the Exchange OLE DB provider (used later in Sample 21) does not allow any remote connection. This also helps minimize the data collected. Sample 8 uses the CDOEXM.StorageGroup object, which is instantiated at line 226. Line 228 defines a loop through the storage groups in the collection. Inside the loop, a data source is opened (line 232) based on the distinguished name retrieved from the collection. The script extracts the values of properties in each group (lines 239 to 245). 
Sample 8 Retrieving storage group information with CDOEXM 221:Private Function EnumStorageGroups (objCDOEXMExchangeServer) ...: ...: 226: Set objStorageGroup = CreateObject("CDOEXM.StorageGroup") 227: 228: For Each urlStorageGroup In objCDOEXMExchangeServer.StorageGroups ...: 232: objStorageGroup.DataSource.Open (urlStorageGroup) ...: ...: 239: DisplayText "Storage Group Name(CDOEXM)", objStorageGroup.Name 240: DisplayText "LogFilePath(CDOEXM)", objStorageGroup.LogFilePath 241: DisplayText "SystemFilePath(CDOEXM)", _ 242: objStorageGroup.SystemFilePath 243: DisplayText "CircularLogging(CDOEXM)", _ 244: objStorageGroup.CircularLogging 245: DisplayText "ZeroDatabase(CDOEXM)", objStorageGroup.ZeroDatabase 246: 247: ' At this level of the object hierarchy, you can move the log 248: ' and system files of the storage group to another location. 249: ' objStorageGroup.MoveLogFiles <New File System Path> 250: ' objStorageGroup.MoveSystemFiles <New File System Path> 251: 252: If cEnumFieldsCollection Then _ 253: EnumFieldsCollection objStorageGroup.Fields 254: 255: EnumMailboxStoreDBs objStorageGroup 256: 257: EnumPublicStoreDBs objStorageGroup ...: ...: 261: Next ...: ...: 266:End Function Note Lines 249 and 250 of Sample 8 are commented out. They provide information only. The script does not move logs and system files when executed. Table 2 The CDOEXM.StorageGroup object Lines 252 and 253 retrieve the ADO Fields collection associated with an Exchange 2000 server. See Sample 24 for the EnumFieldsCollection function. Management Point 2 Retrieving storage group information with CDOEXM At this stage of the script: You know the storage group configuration. That is, you know the locations of the log and system files, and you can move them. You can enable and disable circular logging in each storage group. You can enable and disable the zeroDatabase feature. You have a pointer to get the configuration of all mailboxes and public stores in the storage group. 
See points 3 and C in Figure 3 to locate this CDOEXM operation in the Exchange 2000 logical view. Retrieving Exchange 2000 Mailbox Store Information with CDOEXM This section refers to points 4 and D in Figure 3. Line 255 of Sample 8 calls the EnumMailboxStoreDBs function with the CDOEXM.StorageGroup object as a parameter. The EnumMailboxStoreDBs function retrieves the list of mailbox stores in the storage group. In Sample 9, the list of mailbox stores is retrieved from a collection (line 276). Inside the "For Each" loop, a data source is opened (line 280) based on the distinguished name retrieved from the collection. The script extracts each mailbox store's property values (lines 287 to 303). Sample 9 Retrieving mailbox store information with CDOEXM 269:Private Function EnumMailboxStoreDBs (objStorageGroup) ...: ...: 274: Set objMailboxStoreDB = CreateObject("CDOEXM.MailBoxStoreDB") 275: 276: For Each urlMailboxStoreDB In objStorageGroup.MailboxStoreDBs ...: ...: 280: objMailboxStoreDB.DataSource.Open (urlMailboxStoreDB) ...: ...: 287: DisplayText "Name(CDOEXM)", objMailboxStoreDB.Name 288: DisplayText "DaysBeforeGarbageCollection", _ 289: objMailboxStoreDB.DaysBeforeGarbageCollection 290: DisplayText "DaysBeforeDeletedMailboxCleanup", _ 291: objMailboxStoreDB.DaysBeforeDeletedMailboxCleanup 292: DisplayText "GarbageCollectOnlyAfterBackup", _ 293: objMailboxStoreDB.GarbageCollectOnlyAfterBackup 294: DisplayText "DBPath", objMailboxStoreDB.DBPath 295: DisplayText "SLVPath", objMailboxStoreDB.SLVPath 296: DisplayText "PublicStoreDB", objMailboxStoreDB.PublicStoreDB 297: DisplayText "OfflineAddressList", _ 298: objMailboxStoreDB.OfflineAddressList 299: DisplayText "Status", objMailboxStoreDB.Status 300: DisplayText "Enabled", objMailboxStoreDB.Enabled 301: DisplayText "StoreQuota", objMailboxStoreDB.StoreQuota 302: DisplayText "OverQuotaLimit", objMailboxStoreDB.OverQuotaLimit 303: DisplayText "HardLimit", objMailboxStoreDB.HardLimit 304: 305: ' At this level of the 
object hierarchy, 306: ' you can mount/dismount the mailbox store. 307: ' objMailboxStoreDB.Dismount 308: ' objMailboxStoreDB.Mount 309: 310: ' At this level of the object hierarchy, 311: ' you can move the mailbox store 312: ' to another location. 313: ' objMailboxStoreDB.MoveDataFiles <New Data File Path> 314: 315: If cEnumFieldsCollection Then _ 316: EnumFieldsCollection objMailboxStoreDB.Fields 317: 318: EnumMailboxStore urlMailboxStoreDB ...: ...: 322: Next ...: ...: 327:End Function Note Lines 307, 308, and 313 are commented out. They provide information only. The script does not mount, dismount, or move database files when executed. Table 3 The CDOEXM object for the mailbox store Lines 315 and 316 retrieve the ADO Fields collection associated with the Exchange 2000 server. See Sample 24 for the EnumFieldsCollection function. Management Point 3 Retrieving mailbox store information with CDOEXM At this stage of the script: You know the database and stream file locations of each mailbox store in the storage group. You know the distinguished name of the default public store for the mailbox store. You have the offline address list for the mailbox store. You know whether the mailbox store is mounted or dismounted, and its default state at the Exchange startup. You have a set of information about mailbox store quotas. You have a set of information about the mailbox store cleanup process. You can mount and dismount the mailbox store and move its data files to another location. See points 4 and D in Figure 3 to locate this operation in the Exchange 2000 logical view. Getting Information About Mailboxes The previous sections explored the Exchange 2000 server, its storage groups, and stores using CDOEXM. The next sections explain how to get detailed information about mailboxes. Retrieving Mailboxes with ADSI This section refers to points 5, 6, E, and F in Figure 3. 
Line 318 of Sample 9 calls the EnumMailboxStore function with the distinguished name of the mailbox store as a parameter. The first part of the EnumMailboxStore function (see Sample 10) retrieves the list of the mailboxes in the mailbox store (lines 31 to 35). The list of mailboxes is retrieved from a query based on ADO and performed by the ADSearch function. For this query, ADO uses the ADSI OLE DB provider (see Figure 2). The ADSI OLE DB provider uses the IDirectorySearch interface and an LDAP query to get the list of mailboxes. For the purposes of this white paper, the most significant detail is that the search operation is executed by ADSI based on an ADO query. This is why steps 5 and E in Figure 3 include ADO. Note This document does not focus on ADSI. See the Microsoft MSDN Web site or the Compaq Active Answers Web site for more information about ADSI search operations. The ADSearch function is discussed in more detail in "Querying Active Directory with ADSI and ADO" on page 111 of the Appendix. Note Addresses of third-party Web sites referenced in this white paper are provided to help you find the information you need. This information is subject to change without notice. Microsoft in no way guarantees the accuracy of this third-party information. The query selection is based on the homeMDB attribute, which is initialized for any mailbox-enabled object in Microsoft® Active Directory®. The homeMDB attribute (line 32) is simply the distinguished name of the mailbox store housing the mailbox. The search retrieves a collection of ADsPath values stored in a Dictionary object (line 31). The Dictionary object is part of the Scripting Run-Time Library. It stores heterogeneous data that can be accessed by ordinal position or by a string used as a key. For more information, see the MSDN Library. The ADSearch function can retrieve other attributes besides the homeMDB attribute.
It is also used in the "Moving Exchange 2000 Mailboxes Based on an LDAP Query" script in the second part of this document. Note The mailbox store object in Active Directory contains the homeMDBBL attribute. This multi-valued attribute contains a list of the distinguished names of users with mailboxes in the mailbox store. The LDAP search operation is more efficient than using homeMDBBL, because the distinguishedName property is rarely the only information you want. If the homeMDBBL property is used, you have to perform a bind operation with each distinguished name to get more information about the object itself. Management Point 4 Retrieving mailboxes from a mailbox store with ADO/ADSI At this stage of the script: You know the distinguished name of a mailbox store. You have a distinguished name list of the mailbox-enabled Active Directory users in the mailbox store. See points 5, 6, E, and F in Figure 3 to locate this operation in the Exchange 2000 logical view. Retrieving Mailbox-Enabled Object Properties with ADSI This section refers to point 7 in Figure 3. Line 54 of Sample 10 begins enumerating the distinguished names retrieved by the ADSearch function. In each loop iteration, the script binds to an Active Directory mailbox-enabled object (lines 56 and 57) and retrieves the list of properties associated with it (lines 60 to 177). Note that at lines 60 and 61 the script obtains the mailNickName and displayName property values and stores them in variables for later use. Note In this case, it is mandatory to make a bind operation because the script uses the IMailboxStore aggregated interface (see Sample 11, lines 200 to 212) to retrieve the mailbox properties associated with the ADSI.User object. If the objective is not to retrieve the mailbox properties, the bind operation is not required and the LDAP search operation can retrieve the ADSI.User properties directly, which makes the code more efficient. See the prior note about the homeMDBBL property.
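To make the trade-off concrete, the following minimal sketch shows an LDAP search issued through the ADSI OLE DB provider, followed by a bind on each result. This is a hypothetical example, not code from the actual script; the connection setup and variable names are assumptions.

```vbscript
' Hypothetical sketch: LDAP search via the ADSI OLE DB provider,
' then a bind per result (the step you can skip when the search
' itself already returns all the attributes you need).
Set objConnection = CreateObject("ADODB.Connection")
objConnection.Provider = "ADsDSOObject"
objConnection.Open "Active Directory Provider"

' Select the users housed in a given mailbox store (homeMDB filter).
strQuery = "<LDAP://" & strDefaultDomainNC & ">;" & _
           "(homeMDB=" & urlMailboxStoreDB & ");" & _
           "ADsPath;subtree"
Set objRecordSet = objConnection.Execute(strQuery)

Do While Not objRecordSet.EOF
    ' Bind operation: only required when aggregated interfaces
    ' (such as IMailboxStore) must be reached on the object.
    Set objUser = GetObject(objRecordSet.Fields("ADsPath").Value)
    WScript.Echo objUser.Get("displayName")
    objRecordSet.MoveNext
Loop
```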
To make the script more readable, some parts of the code that retrieves the ADSI.User properties have been removed from Sample 10. Accessing Active Directory from ADSI means using the LDAP namespace. Some ADSI.User object properties are not supported in the LDAP namespace (lines 80, 101, 104, 111, 132, 145, 150, and 163). These properties are related to user logon management and are part of the Group Policy Object (GPO) in Windows 2000. These properties are supported in Microsoft® Windows NT® 4.0 with the ADSI WinNT namespace. For more information about ADSI, see the Microsoft Platform SDK at . Sample 10 Retrieving a mailbox and its associated mailbox-enabled object properties from ADSI 1:' VBScript function to enumerate the mailboxes in a mailbox store from 2:' an ADSI user object class. .: 9:Option Explicit 10: 11:' ------------------------------------------------------------------------ 12:Function EnumMailboxStore (urlMailboxStoreDB) ..: ..: 26: ' -------------------------------------------------------------------- 27: WScript.Echo Space (intX + 1) & _ 28: "Retrieving User ..: ..: 54: For intIndice = 1 to (objResultList.Count - 1) 55: 56: Set objUser = GetObject (objResult (intIndice)) 57: objUser.GetInfo 58: 59: ' Getting two properties for display facilities. 
60: strAlias = objUser.Get ("mailNickName") 61: strDisplayName = objUser.Get ("displayName") ..: ..: 76: strTemp = "" : strTemp = objUser.AccountDisabled 77: DisplayText "AccountDisabled(ADSI)", strTemp 78: strTemp = "" : strTemp = objUser.AccountExpirationDate 79: DisplayText "AccountExpirationDate(ADSI)", strTemp 80: ' objUser.BadLoginAddress 81: strTemp = "" : strTemp = "(Not supported in LDAP namespace)" 82: DisplayText "BadLoginAddress(ADSI)", strTemp 83: strTemp = "" : strTemp = objUser.BadLoginCount 84: DisplayText "BadLoginCount(ADSI)", strTemp 85: strTemp = "" : strTemp = objUser.Department 86: DisplayText "Department(ADSI)", strTemp ..: ..: 98: DisplayText "FirstName(ADSI)", strTemp 99: strTemp = "" : strTemp = objUser.FullName 100: DisplayText "FullName(ADSI)", strTemp 101: ' objUser.GraceLoginsAllowed 102: strTemp = "" : strTemp = "(Not supported in LDAP namespace)" 103: DisplayText "GraceLoginsAllowed(ADSI)", strTemp 104: ' objUser.GraceLoginsRemaining 105: strTemp = "" : strTemp = "(Not supported in LDAP namespace)" 106: DisplayText "GraceLoginsRemaining(ADSI)", strTemp 107: strTemp = "" : strTemp = objUser.HomeDirectory 108: DisplayText "HomeDirectory(ADSI)", strTemp 109: strTemp = "" : strTemp = objUser.HomePage 110: DisplayText "HomePage(ADSI)", strTemp 111: ' objUser.IsAccountLocked 112: strTemp = "" : strTemp = "(Not supported in LDAP namespace)" 113: DisplayText "IsAccountLocked(ADSI)", strTemp 114: strTemp = "" : strTemp = objUser.Languages 115: DisplayText "Languages(ADSI)", strTemp ...: ...: 126: strTemp = "" : strTemp = objUser.LoginScript 127: DisplayText "LoginScript(ADSI)", strTemp 128: strTemp = "" : strTemp = objUser.LoginWorkstations 129: DisplayText "LoginWorkstations(ADSI)", strTemp 130: strTemp = "" : strTemp = objUser.Manager 131: DisplayText "Manager(ADSI)", strTemp 132: ' objUser.MaxLogins 133: strTemp = "" : strTemp = "(Not supported in LDAP namespace)" 134: DisplayText "MaxLogins(ADSI)", strTemp ...: ...: 143: strTemp = "" : 
strTemp = objUser.OtherName 144: DisplayText "OtherName(ADSI)", strTemp 145: ' objUser.PasswordExpirationDate 146: strTemp = "" : strTemp = "(Not supported in LDAP namespace)" 147: DisplayText "PasswordExpirationDate(ADSI)", strTemp 148: strTemp = "" : strTemp = objUser.PasswordLastChanged 149: DisplayText "PasswordLastChanged(ADSI)", strTemp 150: ' objUser.PasswordMinimumLength 151: strTemp = "" : strTemp = "(Not supported in LDAP namespace)" 152: DisplayText "PasswordMinimumLength(ADSI)", strTemp 153: strTemp = "" : strTemp = objUser.PasswordRequired ...: ...: 161: strTemp = "" : strTemp = objUser.Profile 162: DisplayText "Profile(ADSI)", strTemp 163: ' objUser.RequireUniquePassword 164: strTemp = "" : strTemp = "(Not supported in LDAP namespace)" 165: DisplayText "RequireUniquePassword(ADSI)", strTemp 166: strTemp = "" : strTemp = objUser.SeeAlso 167: DisplayText "SeeAlso(ADSI)", strTemp 168: strTemp = "" : strTemp = objUser.TelephoneHome ...: ...: 174: strTemp = "" : strTemp = objUser.TelephonePager 175: DisplayText "TelephonePager(ADSI)", strTemp 176: strTemp = "" : strTemp = objUser.Title 177: DisplayText "Title(ADSI)", strTemp 178: 179: ' At this level of the object hierarchy, you can determine 180: ' the group membership, and set and change the password 181: ' of a user object. 182: ' objUser.Groups 183: ' objUser.SetPassword 184: ' objUser.ChangePassword ...: ...: The comments on lines 179 to 184 refer to information that can be retrieved about mailbox-enabled objects. Because a mailbox-enabled object is a user object class in Active Directory, you can determine and change which group it belongs to (line 182). In addition, passwords can be set (line 183) and changed (line 184) at this level of the object hierarchy. Note that these features are purely ADSI related. After Active Directory mailbox-enabled object properties are retrieved, the script uses CDOEXM to retrieve their associated mailbox properties (see Sample 11, lines 200 to 224). 
At this point CDOEXM acts as an extension of ADSI by providing the IMailboxStore interface to the ADSI.User object. The IMailboxStore interface is an aggregated interface of the ADSI.User object class. For more information about the ADSI.User object class, see the Microsoft Platform SDK. Management Point 5 Retrieving mailbox-enabled object properties with ADSI At this stage of the script: You have the property list of the ADSI.User mailbox-enabled object. You can get and change the ADSI.User group membership. You can change or reset the ADSI.User password. You have an interface to retrieve mailbox configuration information with the help of the IMailboxStore interface of CDOEXM. See point 7 in Figure 3 to locate this operation in the Exchange 2000 logical view. Retrieving Mailbox Properties with CDOEXM from ADSI This section refers to points 8 and 9 in Figure 3. When the Exchange System Manager (ESM) is installed on a Windows 2000 computer, CDOEXM is registered under Windows 2000 as an extension of Active Directory Service Interfaces (ADSI). A set of mailbox properties and methods of the mailbox-enabled ADSI object is retrieved through CDOEXM at lines 200 through 217 of Sample 11. The mailbox-enabled object is instantiated at line 56 in Sample 10.
Sample 11 Retrieving mailbox properties with CDOEXM from ADSI
...:
196: DisplayText "Data retrieved via 'IMailboxStore CDOEXM' interface", ""
...:
200: DisplayText "DaysBeforeGarbageCollection(CDOEXM)", _
201: objUser.DaysBeforeGarbageCollection
202: DisplayText "EnableStoreDefaults(CDOEXM)", _
203: objUser.EnableStoreDefaults
204: DisplayText "GarbageCollectOnlyAfterBackup(CDOEXM)", _
205: objUser.GarbageCollectOnlyAfterBackup
206: DisplayText "Hardlimit(CDOEXM)", objUser.Hardlimit
207: DisplayText "HomeMDB(CDOEXM)", objUser.HomeMDB
208: DisplayText "OverQuotaLimit(CDOEXM)", objUser.OverQuotaLimit
209: DisplayText "OverrideStoreGarbageCollection(CDOEXM)", _
210: objUser.OverrideStoreGarbageCollection
211: DisplayText "RecipientLimit(CDOEXM)", objUser.RecipientLimit
212: DisplayText "StoreQuota(CDOEXM)", objUser.StoreQuota
...:
215: For Each strDelegate In objUser.Delegates
216: DisplayText "Delegate(CDOEXM)", strDelegate
217: Next
...:
220: ' At this level of the object hierarchy, you can create, move,
221: ' or delete a mailbox from the mailbox store.
222: ' objUser.CreateMailBox
223: ' objUser.DeleteMailBox
224: ' objUser.MoveMailBox
...:
...:
249:End Function
Note Lines 222, 223, and 224 are commented out. They provide information only. The script does not create, delete, or move mailboxes when executed. Table 4 The CDOEXM interface for mailbox management Management Point 6 Retrieving mailbox properties with CDOEXM from ADSI At this stage of the script: You know the configuration of the user mailbox (quota, mailbox cleanup process, mailbox restrictions). You can review and create a list of users with delegated access to the mailbox. You can create, move, and delete mailboxes from the ADSI.User object. See points 8 and 9 in Figure 3 to locate this operation in the Exchange 2000 logical view. Retrieving Mailbox-Enabled Object Properties with CDOEX This section refers to points G and H in Figure 3.
The EnumMailboxStore function in Sample 12 is the same as the EnumMailboxStore function in Sample 10, with one exception: the Sample 12 version retrieves the mailbox-enabled object properties from a CDO.Person object, whereas the Sample 10 version retrieves them from an ADSI.User object. The purpose of this section is to demonstrate how these two different COM technologies are tied together in Exchange 2000, and to compare the kinds of information each provides. In Sample 6, lines 14 and 15 can be substituted with lines 16 and 17. If lines 16 and 17 are used, the script uses ADSI instead of CDOEX. In both cases, the CDOEXM.MailboxStore interface accesses the mailbox configuration settings. Lines 30 to 38 of Sample 12 are the same as lines 30 to 38 of Sample 10. In both cases, an ADSI query retrieves the list of mailboxes. The difference is how the mailbox-enabled object information is retrieved from Active Directory. In Sample 10, an ADSI.User object is instantiated at line 56 inside the loop. In Sample 12, a CDO.Person object is instantiated at line 52, and the IDataSource interface is then opened to retrieve the user information (line 56). Note that the information used to access the user data is the same: in line 31, both script samples use the ADsPath returned by the query result. Behind the scenes, the CDO.Person object uses ADSI. This is why this operation is coupled with ADSI at step G in Figure 3. Sample 12 retrieves the mailNickName and displayName attributes in much the same way as Sample 10. Unlike Sample 10, however, Sample 12 uses a Fields collection (lines 59 and 60), because the CDO.Person object does not expose the mailNickName and displayName attributes. CDOEX is linked to ADO in the Exchange 2000 architecture, and ADO exposes a Fields collection (defined in the Exchange store) containing various properties related to the opened data source. Therefore, you can use this method from CDOEX to retrieve LDAP properties contained in the Fields collection.
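The two access paths can be sketched as follows. This is a minimal, hypothetical example; strADsPath stands for an ADsPath returned by the query and is an assumption, not a variable from the actual script.

```vbscript
' Hypothetical sketch of the two ways to read user data from CDO.Person.
Set objPerson = CreateObject("CDO.Person")
objPerson.DataSource.Open strADsPath          ' strADsPath: assumed variable

WScript.Echo objPerson.FirstName              ' native CDOEX property (preferred)
WScript.Echo objPerson.Fields("mailNickName") ' ADO Fields collection fallback
```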
With the help of the Exchange store architecture and its OLE DB provider, ADO extends the number of properties available from a CDOEX object. The main difference between the properties retrieved from the CDOEX object itself and equivalent properties from the Fields collection is that the ADO Fields collection does not contain any business logic. This is why you should use the CDOEX properties to get or set values whenever possible. Only use the ADO Fields collection when the property is not available from the CDOEX object (as is the case for mailNickName and displayName). Sample 12 Retrieving a mailbox and its associated mailbox-enabled object properties from CDOEX 1:' VBScript function to enumerate the mailboxes in a mailbox store from 2:' a CDO.Person object class. .: 9:Option Explicit 10: 11:' ------------------------------------------------------------------------ 12:Function EnumMailboxStore (urlMailboxStoreDB) ..: ..: 26: ' ---------------------------------------------------------------- 27: WScript.Echo Space (intX + 1) & _ 28: "Retrieving Person ..: ..: 52: Set objPerson = CreateObject ("CDO.Person") 53: 54: For intIndice = 1 to (objResultList.Count - 1) 55: 56: objPerson.DataSource.Open objResult (intIndice) 57: 58: ' Getting two properties for display facilities. 
59: strAlias = objPerson.Fields("mailNickName") 60: strDisplayName = objPerson.Fields("displayName") ..: ..: 71: DisplayText "Company(CDOEX)", objPerson.Company 72: DisplayText "Email(CDOEX)", objPerson.Email 73: DisplayText "Email2(CDOEX)", objPerson.Email2 74: DisplayText "Email3(CDOEX)", objPerson.Email3 75: DisplayText "EmailAddresses(CDOEX)", objPerson.EmailAddresses 76: DisplayText "FileAs(CDOEX)", objPerson.FileAs 77: DisplayText "FileAsMapping(CDOEX)", objPerson.FileAsMapping 78: DisplayText "FirstName(CDOEX)", objPerson.FirstName 79: DisplayText "HomeCity(CDOEX)", objPerson.HomeCity 80: DisplayText "HomeCountry(CDOEX)", objPerson.HomeCountry 81: DisplayText "HomeFax(CDOEX)", objPerson.HomeFax 82: DisplayText "HomePhone(CDOEX)", objPerson.HomePhone 83: DisplayText "HomePostalAddress(CDOEX)", _ objPerson.HomePostalAddress 84: DisplayText "HomePostalCode(CDOEX)", _ objPerson.HomePostalCode 85: DisplayText "HomePostOfficeBox(CDOEX)", _ objPerson.HomePostOfficeBox 86: DisplayText "HomeState(CDOEX)", objPerson.HomeState 87: DisplayText "HomeStreet(CDOEX)", objPerson.HomeStreet 88: DisplayText "Initials(CDOEX)", objPerson.Initials 89: DisplayText "LastName(CDOEX)", objPerson.LastName 90: DisplayText "MailingAddress(CDOEX)", objPerson.MailingAddress 91: DisplayText "MailingAddressID(CDOEX)", objPerson.MailingAddressID 92: DisplayText "MiddleName(CDOEX)", objPerson.MiddleName 93: DisplayText "MobilePhone(CDOEX)", objPerson.MobilePhone 94: DisplayText "NamePrefix(CDOEX)", objPerson.NamePrefix 95: DisplayText "NameSuffix(CDOEX)", objPerson.NameSuffix 96: DisplayText "Title(CDOEX)", objPerson.Title 97: DisplayText "WorkCity(CDOEX)", objPerson.WorkCity 98: DisplayText "WorkCountry(CDOEX)", objPerson.WorkCountry 99: DisplayText "WorkFax(CDOEX)", objPerson.WorkFax 100: DisplayText "WorkPager(CDOEX)", objPerson.WorkPager 101: DisplayText "WorkPhone(CDOEX)", objPerson.WorkPhone 102: DisplayText "WorkPostalAddress(CDOEX)", _ objPerson.WorkPostalAddress 103: DisplayText 
"WorkPostalCode(CDOEX)", objPerson.WorkPostalCode 104: DisplayText "WorkPostOfficeBox(CDOEX)", _ objPerson.WorkPostOfficeBox 105: DisplayText "WorkState(CDOEX)", objPerson.WorkState 106: DisplayText "WorkStreet(CDOEX)", objPerson.WorkStreet 107: 108: ' At this level of the object hierarchy, you can obtain 109: ' a stream containing the CDO.Person Vcard. 110: ' objPerson.GetVCardStream. 111: 112: If cEnumFieldsCollection Then _ 113: EnumFieldsCollection objPerson.Fields ...: ...: After the data source is opened (line 56), the script enumerates all the properties associated with the CDO.Person object (lines 71 to 106). For more information about the CDO.Person object, see the Exchange SDK at. Lines 112 and 113 retrieve the ADO Fields collection associated with an Exchange 2000 server. See Sample 24 for the EnumFieldsCollection function. Management Point 7 Retrieving mailbox-enabled object properties with CDOEX At this stage of the script: You have the property list of the mailbox-enabled object. You can get access to the vCard stream of the CDO.Person object. See points G and H in Figure 3 to locate this operation in the Exchange 2000 logical view. Retrieving Mailbox Properties with CDOEXM from CDOEX This section refers to points I and J in Figure 3. To enumerate mailbox properties, the script must first invoke the GetInterface method of the CDO.Person object (see Sample 13, lines 127 to 139). The GetInterface method is a generic interface navigation aid for scripting languages that do not support such navigation directly. Which interface names are valid to pass to GetInterface depends on the specific implementation. The IMailboxStore interface is aggregated by two objects: CDO.Person ADSI.User For example, at line 56 of Sample 12, the CDO.Person object is used as follows: ObjPerson.DataSource.Open objResult (intIndice) This is possible because the IDataSource interface is directly accessible from the CDO.Person object. 
You can get the same results with the following code:
Set objDataSource = objPerson.GetInterface("IDataSource")
objDataSource.Open objResult (intIndice)
This method is less efficient. To access the mailbox settings (done with ADSI in lines 200 to 217 of Sample 11), the script in Sample 13 "connects" the CDO.Person object to the IMailboxStore interface. This is necessary because the IMailboxStore interface is not directly accessible from the CDO.Person object. Line 125 makes the connection:
Set objMailbox = objPerson.GetInterface("IMailboxStore")
The steps from lines 127 to 144 in Sample 13 (CDOEX) are exactly the same as lines 200 to 217 in Sample 11 (ADSI). Only the base object is different.
Sample 13 Retrieving mailbox properties with CDOEXM from CDOEX
...:
...:
125: Set objMailbox = objPerson.GetInterface ("IMailboxStore")
126:
127: DisplayText "DaysBeforeGarbageCollection(CDOEXM)", _
128: objMailbox.DaysBeforeGarbageCollection
129: DisplayText "EnableStoreDefaults(CDOEXM)", _
130: objMailbox.EnableStoreDefaults
131: DisplayText "GarbageCollectOnlyAfterBackup(CDOEXM)", _
132: objMailbox.GarbageCollectOnlyAfterBackup
133: DisplayText "Hardlimit(CDOEXM)", objMailbox.Hardlimit
134: DisplayText "HomeMDB(CDOEXM)", objMailbox.HomeMDB
135: DisplayText "OverQuotaLimit(CDOEXM)", objMailbox.OverQuotaLimit
136: DisplayText "OverrideStoreGarbageCollection(CDOEXM)", _
137: objMailbox.OverrideStoreGarbageCollection
138: DisplayText "RecipientLimit(CDOEXM)", objMailbox.RecipientLimit
139: DisplayText "StoreQuota(CDOEXM)", objMailbox.StoreQuota
...:
...:
142: For Each strDelegate In objMailbox.Delegates
143: DisplayText "Delegate(CDOEXM)", strDelegate
144: Next
...:
147: ' At this level of the object hierarchy, you can create,
148: ' move, or delete a mailbox from the mailbox store.
149: ' objMailbox.CreateMailBox 150: ' objMailbox.DeleteMailBox 151: ' objMailbox.MoveMailBox ...: ...: 179:End Function Management Point 8 Retrieving the mailbox properties with CDOEXM from CDOEX At this stage of the script: You know the configuration of the mailbox-enabled Active Directory object (quota, mailbox cleanup process, mailbox restrictions). You can review and create a list of users with delegated access to this mailbox. You can create, move, or delete a mailbox from the CDO.Person object. See points I and J in Figure 3 to locate this operation in the Exchange 2000 logical view. Retrieving Mail-Enabled Object Properties with CDOEXM from ADSI or CDOEX This section refers to points 7, 8, G, and H in Figure 3. The list returned by the LDAP query contains mailbox-enabled recipients because the script accesses the homeMDB attribute of the mailbox store (see Sample 10 and Sample 12, lines 31 to 35). At this level of the object hierarchy, it is also possible to retrieve the mail-enabled recipient properties if the query returns a mail-enabled recipient list. The purpose of the EnumAllInXL.wsf script is to decompose the Exchange 2000 COM object hierarchy. Decomposing a mail-enabled object will stop the exploration at this level because a mail-enabled object does not have a mailbox. To access the mail-enabled object's properties, the script must use the IMailRecipient interface instead of IMailboxStore. Table 5 The CDOEXM interface for mail-enabled recipients management The following are two scripts that create a mail-enabled Active Directory user with the IMailRecipient interface. Sample 14 creates a mail-enabled user from an ADSI object. Sample 15 creates a mail-enabled user from a CDOEX object. 
Sample 14 Creating a mail-enabled object with CDOEXM from ADSI

1:' Script creating a user and enabling their mail address using the CDOEXM
2:' IMailRecipient interface associated with the ADSI user object.
..:
37:Set objContainer = GetObject("LDAP://" & strComputerName & _
38:    "/" & "CN=users," & strDefaultDomainNC)
39:
40:Set objUser = objContainer.Create ("user", _
41:    "cn=" & UCase (cLastName) & _
42:    " " & cFirstName)
43:objUser.Put "samAccountName", cUserID
44:objUser.SetInfo
45:
46:objUser.Put "sn", cLastName
47:objUser.Put "givenName", cFirstName
48:objUser.Put "userPrincipalName", cEmailName
49:objUser.Put "displayName", UCase (cLastName) & " " & cFirstName
50:
51:objUser.AccountDisabled = False
52:objUser.SetInfo
53:
54:objUser.SetPassword "password"
55:
56:objUser.MailEnable "SMTP:" & cEmailName & "@Example.Com"
57:objUser.SetInfo
..:
..:
..:

To access the IMailRecipient interface, Sample 15 uses the same GetInterface method (line 38) used in Sample 13 (line 125). The logic and the reasons are exactly the same.

The IMailRecipient interface is aggregated by the following objects:

CDO.Person
CDO.Folder
ADSI.Contact
ADSI.Group
ADSI.User

The IMailRecipient interface is used in Sample 22 to mail-enable an Exchange store folder.
Sample 15 Creating a mail-enabled object with CDOEXM from CDOEX

1:' Script creating a user and enabling their mail address using the CDOEXM
2:' IMailRecipient interface associated with the CDO person object.
..:
37:Set objPerson = CreateObject ("CDO.Person")
38:Set objMailRecipient = objPerson.GetInterface ("IMailRecipient")
39:
40:objPerson.Fields("samAccountName") = cUserID
41:objPerson.LastName = cLastName
42:objPerson.FirstName = cFirstName
43:objPerson.Fields("userPrincipalName") = cEmailName
44:objPerson.Fields("displayName") = UCase (cLastName) & " " & cFirstName
45:objPerson.Fields("userAccountControl") = 512
46:objPerson.Fields("userPassword") = "password"
47:objPerson.Fields.Update
48:objPerson.DataSource.SaveTo "LDAP://" & strComputerName & "/" & _
49:    "cn=" & cEmailName & "," & _
50:    "cn=users," & strDefaultDomainNC
51:
52:objMailRecipient.MailEnable "SMTP:" & cEmailName & "@Example.Com"
53:
54:objPerson.DataSource.Save
..:
..:

Management Point 9 Retrieving the mail-enabled object properties with CDOEXM from ADSI or CDOEX

At this stage of the script:

You know the configuration of the mail-enabled object.
You know the e-mail address of the recipient.
You can enable or disable mail-enabled recipients from the CDO.Person object or the ADSI.User object.

See points 8, 9, I, and J in Figure 3 to locate this operation in the Exchange 2000 logical view.

So far, the script has retrieved the list of mailboxes in the mailbox store and the associated configurations for each. The next step is to look at the mailbox content (various data object types, such as messages). Before decomposing the mailbox, look at the way public stores are retrieved from storage groups. The method is almost the same as the one used to retrieve mailbox stores (see Sample 9), so it makes sense to look at them together.

Getting Information About Public Folders

In addition to mailbox stores, storage groups contain public stores, which contain public folders.
Retrieving Exchange 2000 Public Store Information with CDOEXM

This section refers to points 12 and M in Figure 3. Line 257 of Sample 8 calls the EnumPublicStoreDBs function with the CDOEXM.StorageGroup object as a parameter. This function retrieves the list of public stores in the storage group.

The list of public stores is returned in a collection (see Sample 16, line 337). The "For Each" loop opens a data source (line 341) that corresponds to the distinguished name retrieved from the collection. For each public store, the script extracts the values of the object's properties (lines 348 to 362).

Sample 16 Retrieving public store information with CDOEXM

330:Private Function EnumPublicStoreDBs (objStorageGroup)
...:
...:
335:    Set objPublicStoreDB = CreateObject("CDOEXM.PublicStoreDB")
336:
337:    For Each urlPublicStoreDB In objStorageGroup.PublicStoreDBs
338:        DisplayText "urlPublicStoreDB(CDOEXM)", urlPublicStoreDB
...:
...:
341:        objPublicStoreDB.DataSource.Open (urlPublicStoreDB)
...:
...:
348:        DisplayText "Name(CDOEXM)", objPublicStoreDB.Name
349:        DisplayText "DaysBeforeGarbageCollection", _
350:            objPublicStoreDB.DaysBeforeGarbageCollection
351:        DisplayText "DaysBeforeItemExpiration", _
352:            objPublicStoreDB.DaysBeforeItemExpiration
353:        DisplayText "GarbageCollectOnlyAfterBackup", _
354:            objPublicStoreDB.GarbageCollectOnlyAfterBackup
355:        DisplayText "DBPath", objPublicStoreDB.DBPath
356:        DisplayText "SLVPath", objPublicStoreDB.SLVPath
357:        DisplayText "FolderTree", objPublicStoreDB.FolderTree
358:        DisplayText "Status", objPublicStoreDB.Status
359:        DisplayText "Enabled", objPublicStoreDB.Enabled
360:        DisplayText "StoreQuota", objPublicStoreDB.StoreQuota
361:        DisplayText "HardLimit", objPublicStoreDB.HardLimit
362:        DisplayText "ItemSizeLimit", objPublicStoreDB.ItemSizeLimit
363:
364:        ' At this level of the object hierarchy,
365:        ' you can mount/dismount the public store.
366:        ' objPublicStoreDB.Dismount
367:        ' objPublicStoreDB.Mount
368:
369:        ' At this level of the object hierarchy,
370:        ' you can move the public store
371:        ' to another location.
372:        ' objPublicStoreDB.MoveDataFiles
373:
374:        If cEnumFieldsCollection Then _
375:            EnumFieldsCollection objPublicStoreDB.Fields
376:
377:        EnumFolderTree objPublicStoreDB.FolderTree
...:
...:
381:    Next
...:
...:
386:End Function

Note Lines 366, 367, and 372 are commented out. They provide information only. The script will not mount, dismount, or move data files when executed.

Table 6 The CDOEXM object to manage the Exchange public store

Lines 374 and 375 retrieve the ActiveX® Data Objects (ADO) Fields collection associated with a public store. See Sample 24 for the EnumFieldsCollection function.

Management Point 10 Retrieving Exchange 2000 public store information with CDOEXM

At this stage of the script:

You have an interface to retrieve the public folder tree name of the public store with the help of the IFolderTree CDOEXM interface.
You know whether the public store is mounted or dismounted and its default state at the startup of Exchange.
You have a set of information about the public store quotas.
You have a set of information about the public store cleanup process.
You can mount and dismount the public store and move the public store data files to another location.

See points 12 and M in Figure 3 to locate this operation in the Exchange 2000 logical view.

Retrieving Public Folder Tree Properties

This section refers to points 13 and N in Figure 3. Line 357 of Sample 16 shows the URL of the public folder tree associated with a public store. The same parameter is used at line 377 to call the EnumFolderTree function.

To retrieve public folder tree information, the script opens the data source (Sample 17, line 398) that corresponds to the distinguished name retrieved from the public folder property (Sample 16, lines 357 and 377).
The script then extracts the values of properties in the retrieved public folder tree (lines 405 to 418).

Sample 17 Retrieving public folder tree properties

389:Private Function EnumFolderTree (urlFolderTree)
...:
...:
394:    Set objFolderTree = CreateObject("CDOEXM.FolderTree")
395:
396:    DisplayText "FolderTree(CDOEXM)", urlFolderTree
397:
398:    objFolderTree.DataSource.Open (urlFolderTree)
...:
...:
405:    DisplayText "Name(CDOEXM)", objFolderTree.Name
...:
...:
409:    For Each strReplica in objFolderTree.StoreDBs
410:        DisplayText "Replicas(CDOEXM)", strReplica
411:    Next
...:
...:
415:    DisplayText "TreeType(CDOEXM)", objFolderTree.TreeType
416:    DisplayText "RootFolderUrl(CDOEXM)", objFolderTree.RootFolderUrl
417:
418:    If cEnumFieldsCollection Then _
            EnumFieldsCollection objFolderTree.Fields
419:
420:    EnumPublicStore objFolderTree.Name
...:
...:
425:End Function

Table 7 The CDOEXM object to manage Exchange folder trees

Line 418 retrieves the ADO Fields collection associated with a public folder tree. See Sample 24 for the EnumFieldsCollection function.

Management Point 11 Retrieving public folder tree properties with CDOEXM

At this stage of the script:

You have public folder tree information for a public store.
You know the tree type of the root folder tree and other tree properties.
You have the RootFolderURL to the root public folder of the folder tree.

See points 13 and N in Figure 3 to locate this operation in the Exchange 2000 logical view.

Line 420 of Sample 17 uses the public folder tree name as a parameter to explore the public folder tree's content. This is where the script really enters the Exchange store.

Getting Information about Folder Trees in Mailboxes and Public Folders

The following sections discuss exploring the folders inside mailboxes and public folders.

Building the Exchange Store Path to Explore the Content of a Mailbox Folder

This section refers to points 10 and K in Figure 3.
Sample 18 continues the script in Sample 11, which accesses a mailbox from an ADSI.User object. Sample 19 continues the script in Sample 13, which accesses a mailbox from a CDO.Person object. Whatever the origin of the object holding the mailbox configuration, both scripts use the same method to access the mailbox content.

First, the script tests the user's identity. Because the Exchange store security model does not, by default, allow access to the content of all mailboxes, the user who is currently logged on has access to his or her mailbox only. The script does not work unless Microsoft® Windows® 2000 and Microsoft® Exchange 2000 Server have been installed with a user name identical to the mailbox alias (see Sample 18, line 231, and Sample 19, line 160). If this is the case, the BrowseStoreFolder function can be called. The parameter passed to this function is the URL that opens the Exchange store.

Note All scripts in this document must be run on an Exchange 2000 server. No remote access to the Exchange store is possible through the Exchange 2000 OLE DB provider. You can also access the Exchange store through the Installable File System (IFS) by browsing the file system folder tree (see Figure 4). This method works, but only the stream of each item is accessible; item content is not parsed. To access item properties, an alternate mechanism must be used, such as the World Wide Web Distributed Authoring and Versioning (WebDAV) protocol, the Exchange OLE DB provider (ExOLEDB) with ADO 2.5, CDO for Exchange 2000 (CDOEX), or the Messaging API (MAPI).

The URL for a mailbox combines the ExOLEDB namespace prefix, the domain name, the MBX keyword, and the mailbox alias. This URL is constructed at line 236 of Sample 18 and line 165 of Sample 19. The MailboxAlias is retrieved at lines 60 and 61 of Sample 10 for ADSI and at lines 59 and 60 of Sample 12 for CDOEX. Example.Com is the canonicalName LDAP property name retrieved at lines 89 and 90 of Sample 6. The BrowseStoreFolder function is called with the URL as a parameter.
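As a hedged sketch of that construction (the file://./backofficestorage prefix is the ExOLEDB namespace convention; strMailboxURL is an illustrative variable name, while strCanonicalNameDefaultDomain and strAlias follow the samples below), the mailbox URL can be assembled like this:

' Sketch only: assembling the Exchange store URL for a mailbox.
' strCanonicalNameDefaultDomain holds the domain portion (for example,
' "Example.Com/") and strAlias holds the mailbox alias.
strMailboxURL = "file://./backofficestorage/" & _
               strCanonicalNameDefaultDomain & "MBX/" & strAlias
BrowseStoreFolder strMailboxURL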
Sample 18 Building the Exchange store pointer to explore the folder content in a mailbox through ADSI

...:
...:
220:    ' At this level of the object hierarchy, you can create,
221:    ' move, or delete a mailbox from the mailbox store.
222:    ' objUser.CreateMailBox
223:    ' objUser.DeleteMailBox
224:    ' objUser.MoveMailBox
225:
226:    ' --------------------------------------------------------------------
227:    ' Access the mailbox of the user who is currently logged on.
228:    ' This will not be a security problem; the mailbox is owned by the
        ' user running the script, because the user name is used for the
        ' e-mail alias.
229:    ' CONDITION: The mailbox alias and the user name must be the
230:    ' same, because you get the current user name (account name) from WSH.
231:    If strCurrentUserName = strAlias And cEnumMaiboxTree Then
232:
233:        WScript.Echo Space (intX) & "Browsing mailbox '" & _
                strDisplayName & "'."
234:
235:        ' Construct the pointer for the mailbox
236:        BrowseStoreFolder "" & _
237:            strCanonicalNameDefaultDomain & "MBX/" & strAlias
238:
239:    End If
...:
...:
243:Next
...:
...:
249:End Function

Sample 19 Building the Exchange store pointer to explore the folder content in a mailbox through CDOEX

...:
...:
147:    ' At this level of the object hierarchy, you can create,
148:    ' move, or delete a mailbox from the mailbox store.
149:    ' objMailbox.CreateMailBox
150:    ' objMailbox.DeleteMailBox
151:    ' objMailbox.MoveMailBox
152:
153:    Set objMailbox = Nothing
154:
155:    ' --------------------------------------------------------------------
156:    ' Access the mailbox of the user who is currently logged on.
157:    ' This will not be a security problem; the mailbox is owned by the
158:    ' user running the script, because the user name is used for the
159:    ' mailbox alias.
        ' CONDITION: The mailbox alias and the user name must be the same
        ' because you get the current user name (account name) from WSH.
160:    If strCurrentUserName = strAlias And cEnumMaiboxTree Then
161:
162:        WScript.Echo Space (intX) & "Browsing mailbox '" & _
                strDisplayName & "'."
163:
164:        ' Construct the pointer for the mailbox
165:        BrowseStoreFolder "" & _
166:            strCanonicalNameDefaultDomain & "MBX/" & strAlias
167:
168:    End If
...:
...:
172:Next
...:
...:
179:End Function

Building the Exchange Store Pointer to Explore Folder Content in a Public Folder Hierarchy

This section refers to points 14 and O in Figure 3. Sample 20 uses the same logic as the method that constructs the URL for the mailbox. The only difference is the use of the folder tree name (see line 377 of Sample 16). This part of the code determines which folder tree the public folder store belongs to.

Note The RootFolderURL property, available from the CDOEXM.FolderTree object, can be used to bind ADO to the root folder of the public store (see Table 7). However, to reuse the logic for retrieving mailbox content, this script does not use the RootFolderURL. Instead it builds the URL in the same way as in Sample 18 and Sample 19.

The folder tree name is a distinguished name used as a data source at line 398 of Sample 17. In Sample 17, the name of the folder tree is used as a parameter to call the EnumPublicStore function (line 420). The URL for a public folder combines the ExOLEDB namespace prefix, the domain name, and the public folder tree name. This URL is constructed at lines 435 and 436.

Note In a real production environment, this script collects a huge amount of data from the public folder hierarchy. For educational purposes, you should run this script from an Exchange 2000 server in a test installation.

Next, the BrowseStoreFolder function is called with the URL as a parameter.
Sample 20 Building the Exchange store pointer to explore the content of folders in a public folder hierarchy

428:Private Function EnumPublicStore (strFolderTreeName)
429:
430:    If cEnumFolderTree Then
431:        intX = intX + 1
432:        WScript.Echo Space (intX) & "Browsing '" & strFolderTreeName & "'"
433:
434:        ' Construct the pointer for the
            ' public folder.
435:        BrowseStoreFolder "" & _
436:            strCanonicalNameDefaultDomain & strFolderTreeName
437:        intX = intX - 1
438:    End If
439:
440:End Function

Exploring Folder Content

This section refers to points 10, 14, K, and O in Figure 3. The BrowseStoreFolder function has the URL as a parameter. This parameter is contained in the strURL variable and is an entry point into the Exchange store (see line 18 of Sample 21).

Note Under Exchange 5.5, the access method to the Information Store was based on MAPI and made use of CDO 1.21. This client/server access method is still supported under Exchange 2000, but because many scripts and applications run locally on the server, Exchange 2000 provides a local access method as well. When running locally, an application or a script can access the Exchange store with ADO 2.5. CDOEX is not designed for this type of access. It has all the objects and interfaces needed to access, manage, and decompose items in the Exchange store, but it does not have a navigation model for exploration. ADO 2.5 performs this function.

Lines 16 and 18 of Sample 21 present a simplified method for opening the Exchange store. This simplicity hides the fact that many parameters are assumed for the ADO connection. The following three methods achieve the same results.
Method 1

Set objRecord = CreateObject("ADODB.Record")
objRecord.Open strURL

Method 2

Set objConnection = CreateObject("ADODB.Connection")
Set objRecord = CreateObject("ADODB.Record")
objConnection.ConnectionString = strURL
objConnection.Provider = "EXOLEDB.DATASOURCE"
objConnection.Open
objRecord.Open , objConnection

Method 3

Set objConnection = CreateObject("ADODB.Connection")
Set objRecord = CreateObject("ADODB.Record")
objConnection.Open "Provider=EXOLEDB.DATASOURCE; Data Source=" & strURL & ";"
objRecord.Open , objConnection

All these methods open the Exchange store in the same way. Method 1 looks simpler because it assumes certain defaults. The strURL variable contains a file:// URL. Because the Exchange OLE DB provider is registered in the system for this namespace, there is no need to specify the ExOLEDB provider name. Behind the scenes, the objRecord object creates an objConnection object. This is how objRecord connects the script to the Exchange store.

Method 2 does not assume defaults. Instead, the script explicitly defines which OLE DB provider to use (ExOLEDB). Method 3 uses exactly the same parameters as Method 2, but Method 3 uses an ADO connection string.

These methods are less ambiguous than Method 1. With Methods 2 and 3 you are always sure that the script will use ExOLEDB, because ExOLEDB accepts both forms of Exchange store URLs (file:// and http://). The Exchange store URL does not have to be a file:// URL; you can also use an http:// URL. The RootFolderURL property from the CDOEXM.FolderTree object returns an http:// URL to the root of a public folder tree. In such a case, you must use either Method 2 or Method 3. Method 1 will not work because the default OLE DB provider for the http:// namespace is the Microsoft OLE DB Provider for Internet Publishing (MSDAIPP), and it will not allow a connection to the Exchange store.

Note An http:// URL given to ADO refers by default to MSDAIPP, unless you specify a different OLE DB provider.
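For illustration, here is a hedged sketch of Method 2 applied to the RootFolderUrl property of a CDOEXM.FolderTree object (objFolderTree is assumed to be opened as in Sample 17, and the code must run on an Exchange 2000 server):

' Sketch: because RootFolderUrl returns an http:// URL, the ExOLEDB
' provider must be named explicitly; otherwise ADO defaults to MSDAIPP
' for the http:// namespace.
Set objConnection = CreateObject("ADODB.Connection")
Set objRecord = CreateObject("ADODB.Record")
objConnection.ConnectionString = objFolderTree.RootFolderUrl
objConnection.Provider = "EXOLEDB.DATASOURCE"
objConnection.Open
objRecord.Open , objConnection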
MSDAIPP allows developers to access files through OLE DB interfaces on HTTP servers that support the Microsoft® FrontPage® Web Extender Client (WEC) or Web Distributed Authoring and Versioning (WebDAV) protocol extensions. The MSDAIPP provider is currently installed with the full version of Microsoft® Internet Explorer 5, which comes with Office 2000 or can be downloaded separately. MSDAIPP is built on top of WebDAV; there is no other option for its use. CDOEX is not supported on top of MSDAIPP.

Because the OLE DB provider is assumed rather than specified in Method 1, the namespace of the OLE DB pointer determines which provider is used. With an http:// URL, the script must use Method 2 or Method 3 because both methods specify an OLE DB provider. Even if the OLE DB pointer is an http:// URL, you cannot make a remote connection to the Exchange 2000 OLE DB provider. To make remote connections to the Exchange store, either WebDAV or MAPI must be used.

Sample 21 Exploring the folder content

1:' VBScript function to enumerate folder content using ADO 2.5 on an
2:' Exchange server by using the pointer.
.:
9:Option Explicit
10:
11:' -------------------------------------------------------------------------
12:Private Function BrowseStoreFolder (strURL)
13:
14:Dim objRecord
15:
16:    Set objRecord = CreateObject("ADODB.Record")
17:
18:    objRecord.Open strURL
..:
..:
22:    LoopInFolder objRecord
23:
24:    objRecord.Close
..:
..:
29:End Function
30:
31:' --------------------------------------------------------------------------
32:Private Function LoopInFolder (objParentRecord)
..:
..:
39:    Set objRecordSet = objParentRecord.GetChildren
40:
41:    While Not objRecordSet.EOF
42:
43:        Set objChildRecord = CreateObject("ADODB.Record")
44:
45:        objChildRecord.Open objRecordSet
..:
..:
54:        If cEnumFieldsCollection Then _
                EnumFieldsCollection objChildRecord.Fields
55:
56:        If objRecordSet ("DAV:iscollection") Then
57:            LoopInFolder objChildRecord
58:        Else
59:            If objChildRecord ("DAV:contentclass") = _
                    "urn:content-classes:message" Then
60:                If cEnumMessageContent Then _
                        EnumMessageContent objChildRecord("DAV:href")
61:            End If
62:        End If
63:
64:        objChildRecord.Close
..:
..:
69:        objRecordSet.MoveNext
70:    Wend
71:
72:    objRecordSet.Close
..:
..:
77:End Function

After the Exchange store is opened (line 18), the loop through the folder hierarchy begins with the call to the LoopInFolder function (line 22). The LoopInFolder function uses some ADO 2.5 enhancements to the navigation model. For example, line 39 uses the GetChildren method to get a recordset of the objects located under the current record.

Note The CopyRecord and MoveRecord methods are other useful ADO 2.5 enhancements. These methods enable copy and move operations in the Exchange store. See the ADO 2.5 SDK for more information about these methods and their related options.

The first LoopInFolder call passes the URL, which is a root pointer to a public folder or mailbox. If the URL points to a public folder, the script gets the list of its items and folders.
If it points to a mailbox, the script gets the list of folders contained in the mailbox, such as Inbox, Contacts, Calendar, and so on. Next, the script loops to examine every item in the objRecordSet object (lines 41 to 70). The script opens each item (line 45) and enumerates its Fields collection (line 54).

To determine whether the examined item contains other items (such as folders), the DAV:iscollection property from the Fields collection is tested. This property returns True if the item contains other items. The script then iterates through each item with the LoopInFolder function.

If the item is not a collection, it is a data object, also called a leaf object. A data object can be an e-mail message, a calendar item, a contact item, a document, and so on. Line 59 decomposes a message item to illustrate how a message is built and what the message object model offers to facilitate management tasks. The script can be easily modified to examine a calendar item or a contact item. Every item discovered in the loop has a content class property in its Fields collection. Therefore, you can review the content type and other properties of each item in the Excel spreadsheet (line 54). After the item is explored, the script gets the next one (line 69) and the loop continues.

When the content class of an item corresponds to an e-mail message, the DAV:href property from the Fields collection is passed as a parameter. This property contains a URL to the message. See Sample 23 to examine the content of an e-mail message.

Management Point 12 The folder tree exploration

At this stage of the script:

You've retrieved the content of a folder and its children with the GetChildren method.
You can move or copy a complete folder with its contents and its children with the MoveRecord and CopyRecord methods.

See points 10, 14, K, and O in Figure 3 to locate this operation in the Exchange 2000 logical view.
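As a hedged illustration of the CopyRecord method mentioned above (strSourceURL and strDestinationURL are illustrative placeholders, and the code must run on an Exchange 2000 server), a folder with its contents and children can be copied like this:

' Sketch: copying an open folder record, with its children, to another
' location in the Exchange store. Omitting the first argument makes
' CopyRecord operate on the record itself.
Set objRecord = CreateObject("ADODB.Record")
objRecord.Open strSourceURL
objRecord.CopyRecord , strDestinationURL
objRecord.Close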
Performing Simple Exchange Management Tasks with Scripts

Creating Mail-Enabled Folders

This section refers to points 15, 16, 17, P, Q, and R in Figure 3. Sample 21 shows how to access an Exchange store folder. It can also be useful to create and then mail-enable folders in a public folder hierarchy. Mail-enabled folders can receive mail, including alert messages announcing important changes in the server. (See "Monitoring Exchange 2000 Server Activity" later in this document for information about server monitoring.) Alerts are then logged and archived in the public folder.

With the IMailRecipient interface (see also Sample 15), you can enable an e-mail address on a folder in the Exchange store. This is the purpose of Sample 22.

Sample 22 Enabling e-mail on a folder located in a public store

1:<!-- VBScript script to create a subfolder in the public folder tree. -->
2:<!-- After the folder is created or opened, e-mail enable this --> 
  <!-- subfolder. -->
.:
9:<job>
10:
11: <object Id="objConnection" ProgID="ADODB.Connection" Reference=True />
12: <object Id="objRecord" ProgID="ADODB.Record" Reference=True />
13: <object Id="objFolder" ProgID="CDO.Folder" Reference=True />
14:
15: <script language="VBScript">
16:
17:    Option Explicit
18:
19:    Const cDomain = "Example.Com"
20:    Const cFolderName = "MyMail-EnabledFolder"
..:
27:    strURLOpen = "" & _
28:        cDomain & "/Public Folders"
29:    strURLItem = "" & _
30:        cDomain & "/Public Folders/" & cFolderName
31:
32:    Wscript.Echo strURLOpen
33:
34:    objConnection.Open "Provider=EXOLEDB.DATASOURCE; Data Source=" & _
           strURLOpen & ";"
35:
36:    Wscript.Echo strURLItem
37:
38:    objRecord.Open strURLItem, _
39:        objConnection, _
40:        adModeReadWrite, _
41:        adCreateCollection Or adOpenIfExists
42:
43:    objFolder.DataSource.Open strURLItem
44:
45:    Set objMailRecipient = objFolder.GetInterface ("IMailRecipient")
46:
47:    objMailRecipient.MailEnable
48:    objFolder.DataSource.Save
49:
50:    Set objMailRecipient = Nothing
51:
52:    objRecord.Close
53:
54:
Wscript.Echo "Completed."
55:
56: </script>
57:</job>

Sample 22 has three points of interest:

It uses the object instantiation capability provided by WSH instead of the traditional CreateObject function from the Microsoft® Visual Basic® run-time libraries (lines 11, 12, and 13). Note the reference parameter set to True. This allows the retrieval of the type library definitions associated with each object, which is why the script can use the three constants at lines 40 and 41 without a Const declaration statement.

It uses ADO 2.5 to open a data source on the parent folder level. In the sample case, the data source is the root public folder itself (line 34). The script opens a Record object with the connection established at line 34 to create the desired folder (lines 38 to 41). Note the "adCreateCollection Or adOpenIfExists" statement, which opens the folder if it exists or creates a new one if it doesn't.

After the folder is created (lines 38 to 41), the CDO.Folder object instantiated at line 13 opens a data source with the URL of the new folder (line 43). Next, the CDO.Folder object is aggregated to the IMailRecipient interface (line 45) to enable its e-mail address. The logic is the same as in prior samples (see Sample 14 and Sample 15).

Management Point 13 Enabling e-mail on a folder

At this stage of the script:

You can create a folder.
You can mail-enable or mail-disable a folder from the CDO.Folder object.

See points 15, 16, 17, P, Q, and R in Figure 3 to locate this operation in the Exchange 2000 logical view.

Examining Message Content

This section refers to points 11, 18, L, and S in Figure 3. The message URL is contained in the strURL variable passed as a parameter to the EnumMessageContent function (see line 21 of Sample 23). CDO for Exchange 2000 (CDOEX) uses this URL at line 27 to access the message item instantiated at line 26. Lines 34 to 52 review the message properties. Line 57 retrieves the ADO Fields collection associated with the CDO message object.
See Sample 24 for the EnumFieldsCollection function.

Sample 23 Examining message content

1:' VBScript function enumerating message content from a CDOEX object.   '
.:
6:' This function needs an inclusion in the parent of the following:     '
7:'                                                                      '
8:' Functions:                                                           '
9:'                                                                      '
10:'    EnumFieldsCollectionFunction.vbs                                 '
11:'                                                                     '
12:' Constants:                                                          '
13:'                                                                     '
14:'    cEnumFieldsCollection                                            '
..:
20:' -----------------------------------------------------------------------
21:Function EnumMessageContent (strURL)
..:
..:
26:    Set objMessage = CreateObject ("CDO.Message")
27:    objMessage.DataSource.Open strURL
..:
..:
34:    DisplayText "AutoGenerateTextBody(CDOEX)", _
           objMessage.AutoGenerateTextBody
35:    DisplayText "BCC(CDOEX)", objMessage.BCC
36:    DisplayText "CC(CDOEX)", objMessage.CC
37:    DisplayText "DSNOptions(CDOEX)", objMessage.DSNOptions
38:    DisplayText "FollowUpTo(CDOEX)", objMessage.FollowUpTo
39:    DisplayText "From(CDOEX)", objMessage.From
40:    DisplayText "HTMLBody(CDOEX)", objMessage.HTMLBody
41:    DisplayText "Keywords(CDOEX)", objMessage.Keywords
42:    DisplayText "MDNRequested(CDOEX)", objMessage.MDNRequested
43:    DisplayText "MIMEFormatted(CDOEX)", objMessage.MIMEFormatted
44:    DisplayText "Newsgroups(CDOEX)", objMessage.Newsgroups
45:    DisplayText "Organization(CDOEX)", objMessage.Organization
46:    DisplayText "ReceivedTime(CDOEX)", objMessage.ReceivedTime
47:    DisplayText "ReplyTo(CDOEX)", objMessage.ReplyTo
48:    DisplayText "Sender(CDOEX)", objMessage.Sender
49:    DisplayText "SentOn(CDOEX)", objMessage.SentOn
50:    DisplayText "Subject(CDOEX)", objMessage.Subject
51:    DisplayText "TextBody(CDOEX)", objMessage.TextBody
52:    DisplayText "To(CDOEX)", objMessage.To
..:
..:
57:    If cEnumFieldsCollection Then EnumFieldsCollection objMessage.Fields
..:
..:
62:    EnumFieldsCollection objMessage.Configuration.Fields
..:
..:
65:    ' At this level of the object hierarchy, it is possible to send,
66:    ' reply, forward, add an attachment, and so on, to a message.
67:    ' objMessage.AddAttachment
68:    ' objMessage.AddRelatedBodyPart
69:    ' objMessage.BodyPart
70:    ' objMessage.CreateMHTMLBody
71:    ' objMessage.Forward
72:    ' objMessage.Post
73:    ' objMessage.PostReply
74:    ' objMessage.Reply
75:    ' objMessage.ReplyAll
76:    ' objMessage.Send
..:
..:
83:End Function

Line 62 retrieves the associated Fields collection through the IConfiguration interface. This interface contains the communication parameters for the message, including the SMTP server name, the SMTP server port, the authentication type, and so on. For more information about the IConfiguration interface, see the Microsoft Platform SDK. The message object also contains methods to forward, reply to, post, and send messages (lines 67 to 76).

Management Point 14 The message object

At this stage of the script:

You know the configuration of a message in the Exchange store.
You can send, reply to, and forward messages.

See points 11, 18, L, and S in Figure 3 to locate this operation in the Exchange 2000 logical view.

Going Deeper into the Exchange 2000 Objects Hierarchy

The script in Sample 3 examines many, but not all, aspects of the Exchange 2000 CDOEX COM objects hierarchy. For example, the script does not explore the calendaring, contacts, journal, and notes folders, or their associated objects. The script also stops at the message object level. You can, however, go deeper by looking at the message body parts. If the message is a Multipurpose Internet Mail Extensions (MIME) message (Sample 23, line 43), you can use the IBodyPart interface (line 69) to decompose it.

The purpose of EnumAllInXL.wsf is to further explore the Exchange 2000 COM objects hierarchy in Figure 3. The script shows which COM technologies to use to manage specific components. You can retrieve additional information by adding the following items to the previous sample scripts:

A Select Case statement at line 59 of Sample 21. This allows you to explore the content of objects of different types.
An IBodyPart interface exploration process, starting with the message object in Sample 23.

Output Result

While EnumAllInXL.wsf runs and loads information into an Excel spreadsheet, the script displays the different steps executed during the exploration. For example, in Figure 6, the script shows that Exchange 2000 server information is retrieved from WMI (line 6) and CDOEXM (line 7). In this example, the script runs on a computer named DC01-CPQ.

The Exchange 2000 configuration used for the sample output in Figure 6 has four storage groups.

The First Storage Group (line 9) contains:
Mailbox Store A (line 11).
Mailbox Store B (line 81).
Public Folder Store A (line 87). Hierarchy Name: 'Public Folders'
Public Folder Store E (line 119). Hierarchy Name: 'Public Folders Tree (E)'

The Second Storage Group (line 125) contains:
Mailbox Store C (line 127).
Public Folder Store C (line 133). Hierarchy Name: 'Public Folders Tree (C)'

The Third Storage Group (line 139) contains:
Mailbox Store D (line 141).
Public Folder Store D (line 147). Hierarchy Name: 'Public Folders Tree (D)'

The Fourth Storage Group (line 153) contains:
Mailbox Store E (line 155).
Public Folder Store B (line 161). Hierarchy Name: 'Public Folders Tree (B)'

DC01-CPQ (lines 6 and 7) is one of two servers in an Exchange 2000 organization. DC02-CPQ (lines 126 and 127) is the other. Lines 19 to 53 examine the mailbox of the user who is currently logged on and running the script. The messages in the mailbox have their properties loaded into an Excel spreadsheet (lines 36 to 41 and lines 48 to 53).

Figure 6 The EnumE2KInXL sample output

1:Microsoft Windows Script Host Version 5.1 for Windows
2:Copyright Microsoft Corporation. All rights reserved.
3:
4:Found Exchange Organization called 'First Organization'.
5:
6:Reading Exchange Server 'DC01-CPQ' information via 'WMI'.
7:Reading Exchange server 'DC01-CPQ' information via 'CDOEXM.ExchangeServer'.
8:ADO Field count=37 9:Reading Storage Group 'First Storage Group' information via 'CDOEXM.StorageGroup'. 10:ADO Field count=33 11:Reading Mailbox Store DB 'Mailbox Store A' information via 'CDOEXM.MailBoxStoreDB'. 12:ADO Field count=39 13:Retrieving Person/mailbox list via 'ADSI/ADO' LDAP Query. 14:Retrieving Person information 'SystemMailbox{70F...}' via 'CDO.Person' 15:ADO Field count=51 16:Retrieving mailbox information 'SystemMailbox{70F...}' via 'IMailboxStore CDOEXM' interface. 17:Retrieving Person information 'LISSOIR Alain' via 'CDO.Person' 18:ADO Field count=55 19:Retrieving mailbox information 'LISSOIR Alain' via 'IMailboxStore CDOEXM' interface. 20:Browsing mailbox 'LISSOIR Alain'. 21:Browsing '' via 'ADO'. 22:urn:content-classes:taskfolder='Tasks' 23:ADO Field count=72 24:urn:content-classes:notefolder='Notes' 25:ADO Field count=72 26:urn:content-classes:journalfolder='Journal' 27:ADO Field count=72 28:urn:content-classes:mailfolder='Drafts' 29:ADO Field count=157 30:urn:content-classes:contactfolder='Contacts' 31:ADO Field count=188 32:urn:content-classes:calendarfolder='Calendar' 33:ADO Field count=116 34:urn:content-classes:folder='Sent Items' 35:ADO Field count=72 36:urn:content-classes:message='TEST.EML' 37:ADO Field count=156 38:Examining message content via 'CDOEX'. 39:ADO Field count=67 40:Examining message Configuration via 'CDOEX'. 41:ADO Field count=15 42:urn:content-classes:folder='Deleted Items' 43:ADO Field count=72 44:urn:content-classes:folder='Outbox' 45:ADO Field count=72 46:urn:content-classes:mailfolder='Inbox' 47:ADO Field count=157 48:urn:content-classes:message='TEST.EML' 49:ADO Field count=157 50:Examining message content via 'CDOEX'. 51:ADO Field count=67 52:Examining message Configuration via 'CDOEX'. 53:ADO Field count=15 54:Retrieving Person information 'HIGHTOWER Kim' via 'CDO.Person' 55:ADO Field count=70 56:Retrieving mailbox information 'HIGHTOWER Kim' via 'IMailboxStore CDOEXM' interface. 
57:Retrieving Person information 'MITCHELL Linda' via 'CDO.Person' 58:ADO Field count=70 59:Retrieving mailbox information 'MITCHELL Linda' via 'IMailboxStore CDOEXM' interface. 60:Retrieving Person information 'COOPER Scott' via 'CDO.Person' 61:ADO Field count=69 62:Retrieving mailbox information 'COOPER Scott' via 'IMailboxStore CDOEXM' interface. 63:Retrieving Person information 'MUGHAL Salman' via 'CDO.Person' 64:ADO Field count=70 65:Retrieving mailbox information 'MUGHAL Salman' via 'IMailboxStore CDOEXM' interface. 66:Retrieving Person information 'PHUA Meng' via 'CDO.Person' 67:ADO Field count=70 68:Retrieving mailbox information 'PHUA Meng' via 'IMailboxStore CDOEXM' interface. 69:Retrieving Person information 'WOOD John' via 'CDO.Person' 70:ADO Field count=70 71:Retrieving mailbox information 'WOOD John' via 'IMailboxStore CDOEXM' interface. 72:Retrieving Person information 'SCHNEIDER Detlef' via 'CDO.Person' 73:ADO Field count=70 74:Retrieving mailbox information 'SCHNEIDER Detlef' via 'IMailboxStore CDOEXM' interface. 75:Retrieving Person information 'SEIDL Birgit' via 'CDO.Person' 76:ADO Field count=69 77:Retrieving mailbox information 'SEIDL Birgit' via 'IMailboxStore CDOEXM' interface. 78:Retrieving Person information 'WEST Paul' via 'CDO.Person' 79:ADO Field count=70 80:Retrieving mailbox information 'WEST Paul' via 'IMailboxStore CDOEXM' interface. 81:Reading Mailbox Store DB 'Mailbox Store B' information via 'CDOEXM.MailBoxStoreDB'. 82:ADO Field count=39 83:Retrieving Person/mailbox list via 'ADSI/ADO' LDAP Query. 84:Retrieving Person information 'SystemMailbox{6AA...}' via 'CDO.Person' 85:ADO Field count=51 86:Retrieving mailbox information 'SystemMailbox{6AA...}' via 'IMailboxStore CDOEXM' interface. 87:Reading Public Store DB 'Public Folder Store A' information via 'CDOEXM.PublicStoreDB'. 88:ADO Field count=48 89:Retrieving FolderTree 'Public Folders' information via 'CDOEXM'. 90:ADO Field count=21 91:Browsing 'Public Folders' tree. 
92:Browsing ' Folders' via 'ADO'. 93:urn:content-classes:folder='MyFolder' 94:ADO Field count=72 95:urn:content-classes:mailfolder='Services' 96:ADO Field count=157 97:urn:content-classes:mailfolder='Professional Services' 98:ADO Field count=157 99:urn:content-classes:message='Part 2 - The combination of WSH-ADSI under Windows 2000.EML' 100:ADO Field count=157 101:Examining message content via 'CDOEX'. 102:ADO Field count=65 103:Examining message Configuration via 'CDOEX'. 104:ADO Field count=15 105:urn:content-classes:message='Part 1 - Understanding WSH and ADSI in Windows 2000.EML' 106:ADO Field count=157 107:Examining message content via 'CDOEX'. 108:ADO Field count=65 109:Examining message Configuration via 'CDOEX'. 110:ADO Field count=15 111:urn:content-classes:mailfolder='Customer Services' 112:ADO Field count=157 113:urn:content-classes:mailfolder='Marketing' 114:ADO Field count=157 115:urn:content-classes:mailfolder='Corporate Templates' 116:ADO Field count=157 117:urn:content-classes:folder='Internet Newsgroups' 118:ADO Field count=72 119:Reading Public Store DB 'Public Folder Store E' information via 'CDOEXM.PublicStoreDB'. 120:ADO Field count=44 121:Retrieving FolderTree 'Public Folders Tree (E)' information via 'CDOEXM'. 122:ADO Field count=19 123:Browsing 'Public Folders Tree (E)' tree. 124:Browsing ' Folders Tree (E)' via 'ADO'. 125:Reading Storage Group 'Second Storage Group' information via 'CDOEXM.StorageGroup'. 126:ADO Field count=32 127:Reading Mailbox Store DB 'Mailbox Store C' information via 'CDOEXM.MailBoxStoreDB'. 128:ADO Field count=39 129:Retrieving Person/mailbox list via 'ADSI/ADO' LDAP Query. 130:Retrieving Person information 'SystemMailbox{A99...}' via 'CDO.Person' 131:ADO Field count=51 132:Retrieving mailbox information 'SystemMailbox{A99...}' via 'IMailboxStore CDOEXM' interface. 133:Reading Public Store DB 'Public Folder Store C' information via 'CDOEXM.PublicStoreDB'. 
134:ADO Field count=44 135:Retrieving FolderTree 'Public Folders Tree (C)' information via 'CDOEXM'. 136:ADO Field count=19 137:Browsing 'Public Folders Tree (C)' tree. 138:Browsing ' Folders Tree (C)' via 'ADO'. 139:Reading Storage Group 'Third Storage Group' information via 'CDOEXM.StorageGroup'. 140:ADO Field count=32 141:Reading Mailbox Store DB 'Mailbox Store D' information via 'CDOEXM.MailBoxStoreDB'. 142:ADO Field count=39 143:Retrieving Person/mailbox list via 'ADSI/ADO' LDAP Query. 144:Retrieving Person information 'SystemMailbox{CE2...}' via 'CDO.Person' 145:ADO Field count=51 146:Retrieving mailbox information 'SystemMailbox{CE2...}' via 'IMailboxStore CDOEXM' interface. 147:Reading Public Store DB 'Public Folder Store D' information via'CDOEXM.PublicStoreDB'. 148:ADO Field count=44 149:Retrieving FolderTree 'Public Folders Tree (D)' information via 'CDOEXM'. 150:ADO Field count=19 151:Browsing 'Public Folders Tree (D)' tree. 152:Browsing ' Folders Tree (D)' via 'ADO'. 153:Reading Storage Group 'Fourth Storage Group' information via 'CDOEXM.StorageGroup'. 154:ADO Field count=32 155:Reading Mailbox Store DB 'Mailbox Store E' information via 'CDOEXM.MailBoxStoreDB'. 156:ADO Field count=39 157:Retrieving Person/mailbox list via 'ADSI/ADO' LDAP Query. 158:Retrieving Person information 'SystemMailbox{CBD...}' via 'CDO.Person' 159:ADO Field count=51 160:Retrieving mailbox information 'SystemMailbox{CBD...}' via 'IMailboxStore CDOEXM' interface. 161:Reading Public Store DB 'Public Folder Store B' information via 'CDOEXM.PublicStoreDB'. 162:ADO Field count=44 163:Retrieving FolderTree 'Public Folders Tree (B)' information via 'CDOEXM'. 164:ADO Field count=19 165:Browsing 'Public Folders Tree (B)' tree. 166:Browsing ' Folders Tree (B)' via 'ADO'. 167:Reading Exchange Server 'DC02-CPQ' information via 'WMI'. 168:Reading Exchange server 'DC02-CPQ' information via 'CDOEXM.ExchangeServer'. 
169:ADO Field count=37 Formatting the Collected Data The EnumFieldCollection function helps format the retrieved data by enumerating a Fields collection that is passed as a parameter. The only restriction applied by this function is that it does not format all the elements of arrays. When a retrieved property is an array, the function assigns a string <array> to the value loaded in Excel. This is simply to minimize the amount of data to format. Sample 24 Enumerating the Fields collection and formatting the collected data 1:' VBScript function enumerating the Fields collection from a CDO object. 2:' If the value is printable, the content is displayed; otherwise the 3:' variant value type is displayed. 4:' 5:' This function uses a DisplayText function defined in the caller module. ..: 12:Option Explicit 13: 14:' ----------------------------------------------------------------------- 15:Function EnumFieldsCollection (objFields) ..: ..: 25: For Each objField In objFields 26: 27: If IsArray (objFields (objField.Name)) Then ..: 33: DisplayField objField.Name, "<Array>" ..: 36: Else 37: DisplayField objField.Name, objFields (objField.Name) 38: End If 39: 40: Next ..: 44:End Function 45: 46:' ----------------------------------------------------------------------- 47:Function DisplayField (strName, varValue) 48: 49:Dim strToDisplay 50: 51: Select Case VarType (varValue) 52: Case vbInteger, _ 53: vbLong, _ 54: vbSingle, _ 55: vbDouble, _ 56: vbCurrency, _ 57: vbDate, _ 58: vbString, _ 59: vbBoolean 60: strToDisplay = varValue 61: 62: ' Display the type instead of the value when the returned ' type prevents it from being displayed directly. 
63: Case vbEmpty 64: strToDisplay = "(vbEmpty)" 65: Case vbNull 66: strToDisplay = "(vbNull)" 67: Case vbObject 68: strToDisplay = "(vbObject)" 69: Case vbError 70: strToDisplay = "(vbError)" 71: Case vbVariant 72: strToDisplay = "(vbVariant)" 73: Case vbDataObject 74: strToDisplay = "(vbDataObject)" 75: Case vbByte 76: strToDisplay = "(vbByte)" 77: Case vbArray 78: strToDisplay = "(vbArray)" 79: 80: Case Else 81: strToDisplay = "(VariantType:" & _ VarType (varValue) & ")" 82: 83: End Select 84: 85: DisplayText strName & "(ADO Field)", strToDisplay 86: 87:End Function Advanced Script Samples The remainder of this white paper uses the COM technologies explored in previous sections to build two advanced scripts. You can use these scripts as they are or as the basis for your own management script for Exchange 2000. Moving Exchange 2000 Mailboxes Based on an LDAP Query Sample 11 and Sample 13 demonstrate how to use the IMailboxStore interface aggregated to an ADSI.User object or to a CDO.Person object. The same scripts show the MoveMailbox method associated with these interfaces. Sample 10 and Sample 12 demonstrate the ADSearchFunction, which retrieves mailboxes and their associated object properties from ADSI and CDOEX. You can combine the ADSearchFunction with the MoveMailbox method to write a script that moves mailboxes between mailbox stores, storage groups, and Exchange 2000 servers, based on the results of an LDAP query. For example, if you want to move all users in a department to one server or to a particular Exchange store, you can run an LDAP query in Active Directory and move the users found by this query to your chosen location. This is the purpose of the next sample. Sample 25 uses ADSI to access a mailbox. First, the script retrieves the current computer name (lines 89 to 92). From lines 115 to 120, the script retrieves the required parameters to perform a mailbox move operation. The first parameter is the LDAP Query string (line 115). 
The ADSearch function uses this string as a filter to perform the query (lines 152 to 156), so the query syntax must be LDAP compliant. The query is executed in the default Active Directory context (lines 126 and 153), which corresponds to the domain membership of the computer that runs the script. Therefore, only the users that are part of this domain are retrieved. Next, the distinguished name of the target mailbox store is constructed (lines 131 to 139), based on the parameters in the command line. Line 131 specifies the mailbox store name retrieved from line 116. Line 132 specifies the storage group of the given mailbox store, which is retrieved from line 117. Line 134 specifies the server name of the given storage group, which is retrieved from line 118. Line 135 specifies the administration group name of the given server name, which is retrieved from line 119. Line 137 specifies the organization name of the given administration group, which is retrieved from the GetExchangeOrg function call (line 120). For more information about the GetExchangeOrg function, see Sample 50. Line 139 specifies the root domain of the tree retrieved at line 127. This completes the mailbox store distinguished name. In this case, only the root domain name can be used because the Active Directory Configuration Naming Context is always located below the root domain name of the tree in the directory structure. Sample 25 Moving mailboxes from an ADSI.User object based on an LDAP query 1:<!-- VBScript script making LDAP searches in Active Directory on a --> 2:<!-- user objectClass. Matching users are moved from their current --> 3:<!-- Exchange store to the given store. The script uses the ADSI User --> 4:<!-- object that is attached to the mailbox for the move operation. 
--> ..: 45:<job> 46: <script language="VBScript" src="ADSearchFunction1.vbs" /> 47: <script language="VBScript" src="GetMSExchangeOrgFunction.vbs" /> 48: <script language="VBScript" src="TinyErrorHandler.vbs" /> 49: 50: <script language="VBScript"> 51: 52: Option Explicit ..: 85: ' ------------------------------------------------------------------- 86: On Error Resume Next ..: 89: Set WNetwork = Wscript.CreateObject("Wscript.Network") 90: strComputerName = WNetwork.ComputerName 91: Wscript.DisconnectObject (WNetwork) 92: Set WNetwork = Nothing 93: 94: Set objArguments = Wscript.Arguments 95: 96: If objArguments.Count <> 5 Then 97: WScript.Echo "Usage:" 98: Wscript.Echo " QueryAndMoveMBTo " & _ 99: chr(34) & "(LDAPQueryFilter)" & chr(34) & " " & _ 100: chr(34) & "TargetStore" & chr(34) & " " & _ 101: chr(34) & "TargetStorageGroup" & chr(34) & " " & _ 102: chr(34) & "TargetServer" & chr(34) & " " & _ 103: chr(34) & "TargetAdministrativeGroup" & chr(34) 104: Wscript.Echo 105: Wscript.Echo "Sample:" 106: Wscript.Echo " QueryAndMoveMBTo " & _ 107: chr(34) & "(givenName=J*)" & chr(34) & " " & _ 108: chr(34) & "Mailbox Store B" & chr(34) & " " & _ 109: chr(34) & "First Storage Group" & chr(34) & " " & _ 110: chr(34) & strComputerName & chr(34) & " " & _ 111: chr(34) & "First Administrative Group" & chr(34) 112: WScript.Quit (1) 113: End If 114: 115: strLDAPQueryFilter = objArguments (0) 116: strTargetStore = objArguments (1) 117: strTargetStorageGroup = objArguments (2) 118: strTargetServerName = objArguments (3) 119: strTargetAdministrativeGroup = objArguments (4) 120: strOrganization = GetExchangeOrg () 121: 122: ' ------------------------------------------------------------------- 123: ' Get the default Windows 2000 domain name. 124: Wscript.Echo "Binding to RootDSE to get default Domain Name." 
125: Set objRoot = GetObject("LDAP://RootDSE") 126: strDefaultDomainNC = objRoot.Get("DefaultNamingContext") 127: strRootDomainNC = objRoot.Get("RootDomainNamingContext") 128: Set objRoot = Nothing 129: 130: ' Determine whether the target store exists. 131: strTargetHomeMDB_DN = "cn=" & strTargetStore & "," & _ 132: "cn=" & strTargetStorageGroup & "," & _ 133: "cn=InformationStore," & _ 134: "cn=" & strTargetServerName & ",cn=Servers," & _ 135: "cn=" & strTargetAdministrativeGroup & _ 136: ",cn=Administrative Groups," & _ 137: "cn=" & strOrganization & "," & _ 138: "cn=Microsoft Exchange,cn=Services,cn=Configuration," & _ 139: strRootDomainNC 140: 141: Set objTargetHomeMDB = CreateObject("CDOEXM.MailBoxStoreDB") 142: objTargetHomeMDB.DataSource.Open (strTargetHomeMDB_DN) 143: If Err.Number Then ErrorHandler (Err) 144: 145: If objTargetHomeMDB.Status Then 146: WScript.Echo "Target Store '" & objTargetHomeMDB.Name & _ "' is not mounted." 147: WScript.Quit (1) 148: End If 149: 150: ' ------------------------------------------------------------------- 151: ' Search for the list of user mailboxes. 152: Set objResultList = ADSearch ("LDAP://" & strDefaultDomainNC, _ 153: strLDAPQueryFilter, _ 154: "ADsPath, homeMDB", _ 155: "subTree", _ 156: False) 157: WScript.Echo 158: WScript.Echo "Number of matches for the LDAP query is " & _ 159: objResultList.Item ("RecordCount") 160: WScript.Echo 161: 162: ' ------------------------------------------------------------------- 163: objResult = objResultList.Items 164: 165: ' Two elements are returned from the LDAP query: "AdsPath; homeMDB". 166: ' The first element in the list (intIndice=1) contains the number of 167: ' records in the list, so it is skipped. 168: ' Odd elements contain "ADsPath" and even elements contain "homeMDB". 
169: 170: For intIndice = 1 to (objResultList.Count - 1) Step 2 171: strUserADsPath = objResult (intIndice) 172: strSourceHomeMDB_DN = objResult (intIndice + 1) 173: 174: Set objUser = GetObject(strUserADsPath) 175: 176: If LCase(strSourceHomeMDB_DN) = _ LCase(strTargetHomeMDB_DN) Then 177: WScript.Echo "Skipping user '" & objUser.DisplayName & _ 178: "'. Actual Store and Target Store are the same." 179: ElseIf _ 180: Isnull (strSourceHomeMDB_DN) Then 181: WScript.Echo "Skipping user '" & objUser.DisplayName & _ 182: "'. This user does not have a mailbox." 183: Else 184: Set objSourceHomeMDB = _ CreateObject("CDOEXM.MailBoxStoreDB") 185: objSourceHomeMDB.DataSource.Open (strSourceHomeMDB_DN) 186: If Err.Number Then ErrorHandler (Err) 187: 188: If objSourceHomeMDB.Status Then 189: WScript.Echo "Source Store '" & _ objSourceHomeMDB.Name & _ 190: "' is not mounted." 191: WScript.Quit (1) 192: End If 193: 194: WScript.Echo "Moving user '" & objUser.DisplayName & _ "' from:" 195: Wscript.Echo " Store '" & objSourceHomeMDB.Name & _ 196: "' to Store '" & objTargetHomeMDB.Name & "'" 197: 198: ' Moving the mailbox. 199: objUser.MoveMailbox "LDAP://" & strTargetHomeMDB_DN 200: If Err.Number Then ErrorHandler (Err) 201: 202: WScript.DisconnectObject objSourceHomeMDB 203: Set objSourceHomeMDB = Nothing 204: End If 205: 206: Set objUser = Nothing 207: Next ...: 212: </script> 213:</job> Line 141 instantiates an object to get information about the target store specified in the command line. The object verifies whether the given store exists (lines 142 and 143) and if it is mounted (line 145). Then the LDAP query is executed (lines 152 to 156). The query retrieves two important pieces of information: The ADsPath of the user object to bind to (line 174). This allows the aggregated IMailboxStore interface to move the mailbox (line 199). Less importantly, it displays the mailbox name during the move operation (lines 177, 181, and 194). The homeMDB property. 
This property verifies that the user's current mailbox store is not the same as the target store (line 176), and that the user is mailbox-enabled. If not, the homeMDB property is not set in the directory (line 180). The homeMDB property also verifies that the mailbox store is mounted (line 188). After all the checks are completed successfully, the mailbox is moved (line 199). Note You can also verify that the user is mailbox-enabled with the LDAP query. To do so you must combine the user query from the command line with the supplemental condition (homeMDB=*). This requires some string manipulations in the script. The test is made here and not in the LDAP query for simplicity only. The script then processes the next user in the list, if any (line 207). Until line 170, Sample 26 uses exactly the same code as Sample 25. Line 174 uses a CDO.Person object instead of an ADSI.User object. Line 175 uses the GetInterface method to aggregate the IMailboxStore interface to the CDO.Person object. Except for these changes and their associated variable declarations, both scripts use exactly the same logic. Sample 26 displays the adaptation made from line 170 to use the CDO.Person object. Sample 26 Moving mailboxes using a CDO.Person object based on an LDAP query ...: ...: 170: For intIndice = 1 to (objResultList.Count - 1) Step 2 171: strUserADsPath = objResult (intIndice) 172: strSourceHomeMDB_DN = objResult (intIndice + 1) 173: 174: Set objPerson = CreateObject ("CDO.Person") 175: Set objMailbox = objPerson.GetInterface ("IMailboxStore ") 176: 177: objPerson.DataSource.Open (strUserADsPath) 178: 179: If LCase(strSourceHomeMDB_DN) = LCase(strTargetHomeMDB_DN) Then 180: WScript.Echo "Skipping user '" & _ objPerson.Fields("displayName") & _ 181: "'. Actual Store and Target Store are the same." 182: ElseIf _ 183: Isnull (strSourceHomeMDB_DN) Then 184: WScript.Echo "Skipping user '" & _ objPerson.Fields("displayName") & _ 185: "'. This user does not have a mailbox." 
186: Else 187: Set objSourceHomeMDB = CreateObject("CDOEXM.MailBoxStoreDB") 188: objSourceHomeMDB.DataSource.Open (strSourceHomeMDB_DN) 189: If Err.Number Then ErrorHandler (Err) 190: 191: If objSourceHomeMDB.Status Then 192: WScript.Echo "Source Store '" & objSourceHomeMDB.Name & _ 193: "' is not mounted." 194: WScript.Quit (1) 195: End If 196: 197: WScript.Echo "Moving user '" & _ objPerson.Fields("displayName") & "' from:" 198: Wscript.Echo " Store '" & objSourceHomeMDB.Name & _ 199: "' to Store '" & objTargetHomeMDB.Name & "'" 200: 201: ' Moving the mailbox. 202: objMailbox.MoveMailbox "LDAP://" & strTargetHomeMDB_DN 203: If Err.Number Then ErrorHandler (Err) 204: 205: objPerson.DataSource.Save 206: 207: WScript.DisconnectObject objSourceHomeMDB 208: Set objSourceHomeMDB = Nothing 209: End If 210: 211: Set objMailBox = Nothing 212: 213: WScript.DisconnectObject objPerson 214: Set objPerson = Nothing 215: Next 216: 217: WScript.DisconnectObject objTargetHomeMDB 218: Set objTargetHomeMDB = Nothing 219: 220: </script> 221:</job> The following are two command line samples to move mailboxes. First, to move all the users with a first name starting with "J" to a mailbox store named "Mailbox Store B" in "First Storage Group", located on "MyServer" in "First Administrative Group", use the following: "QueryAndMoveMBto (ADSI).vbs" "(givenName=J*)" "Mailbox Store B" "First Storage Group" "MyServer" "First Administrative Group" Note Each parameter has been placed on a different line for clarity. From the command prompt, these parameters must be typed on the same line, separated by a space. Second, if you want to move all the users from a specific mailbox store to another location, you must use a more complex LDAP query. This query is based on the distinguished name of the mailbox store housing the users. The SystemMailbox should not be moved. This is why there is a "(!(cn=Sys*))" statement in the query. Other parameters are the same as the prior sample. 
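The second query described above is pure string composition around the store's distinguished name. The following Python fragment is an illustration only (the helper name build_move_filter is hypothetical; the scripts themselves are VBScript), showing how the SystemMailbox-excluding filter is assembled:

```python
# Illustrative sketch only: composing the LDAP filter used by the second
# command-line sample. build_move_filter is a hypothetical helper; the
# filter syntax and DN components are taken from the white paper's sample.

def build_move_filter(home_mdb_dn):
    """Match mailbox-enabled users homed on the given store,
    excluding the SystemMailbox (cn=Sys*)."""
    return ("(& (!(cn=sys*)) (objectCategory=user) "
            "(homeMDB=" + home_mdb_dn + ") )")

dn = ("CN=Mailbox Store A,CN=First Storage Group,CN=InformationStore,"
      "CN=MyOtherExchangeServer,CN=Servers,CN=First Administrative Group,"
      "CN=Administrative Groups,CN=First Organization,"
      "CN=Microsoft Exchange,CN=Services,CN=Configuration,DC=Example,DC=com")
print(build_move_filter(dn))
```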
Note At first glance, you might think the query can be made with the objectCategory property. However, SystemMailbox is disabled as an objectCategory user in Active Directory. So a query based on this category does not exclude the SystemMailbox from the list of mailboxes. Attempts to move a list of mailboxes that contains the SystemMailbox will not be performed properly. In this sample, only the LDAP query is different: "QueryAndMoveMBto (ADSI).vbs" "(& (!(cn=sys*)) (objectCategory=user) (homeMDB=CN=Mailbox Store A, CN=First Storage Group, CN=InformationStore, CN=MyOtherExchangeServer, CN=Servers, CN=First Administrative Group, CN=Administrative Groups, CN=First Organization, CN=Microsoft Exchange, CN=Services, CN=Configuration, DC=Example, DC=com) )" "Mailbox Store B" "First Storage Group" "MyServer" "First Administrative Group" Monitoring Exchange 2000 Server Activity The server activity monitor for Microsoft® Exchange 2000 Server is based on the WMI infrastructure (see "An Overview of Exchange 2000 WMI Providers" in the Appendix for more information about WMI and Exchange 2000). The monitoring feature included in the Exchange System Manager provides only the status of the component monitored. The main reason for this is that the routing table carries the information. The routing table does not have enough space to hold all of the information provided by WMI. It only carries the status. The second advanced script, E2KWatch, retrieves the complete set of information available from WMI and sends it by e-mail (see Figure 8). The script makes extensive use of the WMI infrastructure of Windows 2000 and Exchange 2000. The monitoring is based on WMI asynchronous events. Sample 27 is a basic example of a WMI asynchronous event monitoring a Windows 2000 service modification. Sample 27 A basic WMI asynchronous event handler for the Win32_Service class 1:' VBScript script creating an asynchronous event notification and 2:' looping. 
When a change event occurs on a service, an event is triggered. .: 9:Option Explicit ..: ..: 14:Set objSink = WScript.CreateObject ("WbemScripting.SWbemSink","SINK_") 15: 16:Set objService = _ GetObject("WinMgmts:{impersonationLevel=impersonate, (security)}") 17: 18:objService.ExecNotificationQueryAsync objSink, _ 19: "Select * FROM __InstanceModificationEvent WITHIN 1 Where " & _ 20: "TargetInstance isa 'Win32_Service'" 21: 22:WScript.Echo "Waiting for events..." 23: 24:Do 25: 26: WScript.Sleep (5000) 27: 28:Loop 29: 30:objSink.Cancel 31: ..: 37:' ------------------------------------------------------------------------ 38:Sub SINK_OnObjectReady (objWbemObject, objWbemAsyncContext) 39: 40: WScript.Echo FormatDateTime(Date, vbLongDate) & " at " & _ 41: FormatDateTime(Time, vbShortTime) & ": " & _ 42: objWbemObject.TargetInstance.DisplayName & " " & _ 43: objWbemObject.TargetInstance.State & _ 44: " (" & objWbemObject.TargetInstance.Name & "). " & _ 45: "Startup mode is '" & objWbemObject.TargetInstance.StartMode & "'." 46: 47:End Sub Every change made to a Windows 2000 service (startup mode, logon credentials, stop, start, and so on) generates a WMI event that is delivered to the SINK_OnObjectReady function. The two parameters provided to this function are objects initialized by the WMI infrastructure: objWbemObject: This object represents the component (Windows 2000 Services, logical disks, CPU, event log, and so on) related to the event. objWbemAsyncContext: This object helps determine the context of the WMI event. You can specify a context (which is a string or a value parameter) during the initialization of the WMI asynchronous event. This parameter is then passed back during the event through objWbemAsyncContext. The E2KWatch script uses exactly the same structure and the same logic as Sample 27. 
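The sink pattern of Sample 27, register a callback object and let the infrastructure invoke it once per event, can be mimicked outside COM. The following Python sketch is an analogy only: it does not talk to WMI, and EventSink and notify are hypothetical stand-ins for SWbemSink and the WMI event delivery.

```python
# Analogy only: the callback structure of an SWbemSink, mimicked in plain
# Python. EventSink plays the role of the sink object; notify plays the
# role of the WMI infrastructure firing modification events.

class EventSink:
    """Collects delivered events, like the SINK_OnObjectReady subroutine."""
    def __init__(self):
        self.received = []

    def on_object_ready(self, target_instance, context=None):
        # WMI passes the modified instance plus an optional caller context.
        self.received.append((target_instance, context))

def notify(sink, instances):
    """Stand-in for the event source: deliver each instance to the sink."""
    for inst in instances:
        sink.on_object_ready(inst, context="service-watch")

sink = EventSink()
notify(sink, [{"Name": "SMTPSVC", "State": "Stopped"}])
print(sink.received[0][0]["State"])
```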
This sample can be divided into two important sections: WMI initialization (lines 14 to 20) WMI event handling (lines 38 to 47) The E2KWatch script applies this logic to many different WMI classes. Note The next sections require a strong understanding of WMI. For more information about WMI, see the WMI SDK at. The script can monitor many aspects of an Exchange 2000 server. These aspects can be classified into two categories: Windows 2000 and its associated providers Exchange 2000 and its associated providers Monitoring Windows 2000 With the help of the Windows 2000 standard WMI classes, it is possible to monitor the following: Monitoring Exchange 2000 The E2KWatch.wsf Script The E2KWatch.wsf script contains more than 2,000 lines of code. This script reproduces the logic in Sample 27. Because of the number of lines, it is not possible to publish and comment on the complete script listing. The following sections concentrate on the key parts of the script. The script uses the WMI infrastructure and many Windows 2000 and Exchange 2000 WMI classes. The Configuration File Because it has such a large number of parameters, the E2KWatch script uses a configuration file. This file holds all the parameters related to the monitoring of a particular Exchange 2000 Server. The script must run locally on the Exchange server. The configuration file corresponds to the server DC01-CPQ. The ESM window of this server is shown in Figure 5. To get an idea of the server configuration, see the output file in Figure 6. The E2KWatch script includes the ReadConfigurationFile function (this document does not go into the details of this function). Figure 7 shows a sample of the configuration file for the server DC01-CPQ. Figure 7 The E2KWatch configuration file 1:# E2KWatch configuration file. .: 7: 8:# The Internet e-mail address to send the mail alerts to. 9:# (By default, use the currently-logged-on user with the current domain.) 
10: MAILALERTTO=%USERNAME%@%USERDNSDOMAIN% 11: 12:# The Internet e-mail address to send the mail alerts to. 13:# (By default, use the currently-logged-on user with the current domain.) 14:# MAILERRORTO=%USERNAME%@%USERDNSDOMAIN% 15: 16:# Microsoft Exchange System Attendant 17: Service=MSExchangeSA 18:# Manages Microsoft Exchange Information Storage 19: Service=MSExchangeIS 20:# Microsoft Exchange POP3 21: Service=POP3Svc 22:# Microsoft Exchange Routing Engine 23: Service=RESvc 24:# Microsoft Exchange IMAP4 25: Service=IMAP4Svc 26:# Microsoft Exchange MTA Stacks 27: Service=MSExchangeMTA 28:# Microsoft Exchange Event 29: Service=MSExchangeES 30:# Microsoft Exchange Site Replication Service 31: Service=MSExchangeSRS 32:# Simple Mail Transport Protocol (SMTP) 33: Service=SMTPSVC 34:# Microsoft Search 35: Service=MSSEARCH 36:# SNMP Service 37: Service=SNMP 38: 39:# Processor #1 40: Processor=CPU0; 80 41:# Processor #1 42: Processor=CPU1; 80 43:# Processor #2 44: Processor=CPU2; 80 45:# Processor #3 46: Processor=CPU3; 80 47: 48:# Logical Disk 49: LogicalDisk=C:; 50 50:# Logical Disk 51: LogicalDisk=D:; 50 52:# Logical Disk 53: LogicalDisk=E:; 50 54: 55:# Information Store maximum sizes 56: MDB 57: STM ..: ..: 83: MDB 84: STM 85: 86:# EventLog to capture 87:# EventLog= 88:# EventLog= MSExchangeSA;9175; MAPI Session; *; * 89:# EventLog= MSExchangeSA;9175; MAPI Session;Application; error 90: EventLog= MSExchangeSA; *; *; *; * 91: EventLog= MSExchangeMU; *; *; *; * ..: 94: EventLog= MSExchangeIS; *; *; *; * 95: EventLog= MSExchangeDSAccess; *; *; *; * 96: EventLog= MSExchangeFBPublic; *; *; *; * 97: 98:# Exchange System Attendant process 99: Process=MAD; 50 100:# Exchange Information Store process 101: Process=STORE; 50 102:# Exchange Message Transfer Agent process 103: Process=EMSMTA; 50 104:# Any other process using more than 80% of the CPU time 105:# Process=*; 80 106: 107:# Watch the ExchangeRoutingTable WMI Provider. 
108: WMIEXCHANGESERVER=%COMPUTERNAME% 109: 110:# Watch the ExchangeRoutingTable WMI Provider. 111: WMIEXCHANGECONNECTOR=To First Routing Group 112: WMIEXCHANGECONNECTOR=To Second Routing Group 113: 114:# Watch the ExchangeQueue WMI Provider. 115: WMIEXCHANGELINK=*; 30 116: 117:# Watch the ExchangeQueue WMI Provider. 118: WMIEXCHANGEQUEUE=*; 30 119: 120:# Watch the ExchangeCluster WMI Provider. 121:# WMIEXCHANGECLUSTER=VirtualServerName This configuration contains the following parameters: Lines 16 to 37 list the Windows 2000 services to be monitored. Note that the service names given are the registry key names of the service, not the display names. Lines 39 to 46 contain the maximum CPU usage threshold for each processor in the system. Lines 48 to 53 contain the minimum free disk space percentage. Lines 56 to 84 contain the distinguished name of the store to be monitored. The MDBSTORESIZE keyword refers to the .edb file and the STMSTORESIZE keyword refers to the .stm file. If the store file (.edb or .stm) reaches the size given in the second parameter (after the semi-colon), WMI will trigger an event. If the size is equal to or higher than the third parameter, the script will take action with CDOEXM to dismount the store. Lines 87 to 96 contain the EventLog entries for which WMI must trigger an alert. If a line has a wildcard, it means "any", and the corresponding value is not taken into consideration to filter the event. Lines 98 to 105 contain the list of processes to be monitored. If the CPU usage is equal to or higher than the specified threshold for a given process, WMI will trigger an event. Lines 107 to 121 reference the Exchange 2000 WMI providers: Line 108 specifies the Exchange Server name for the ExchangeServerState instance that must change to trigger a WMI event. Lines 111 and 112 specify the Exchange connector name for the ExchangeConnectorState instance that must change to trigger a WMI event. 
Lines 114 to 118 specify the IncreasingTime threshold that will trigger a WMI event (see "The Exchange 2000 WMI Queue Provider" in the Appendix). Note the wildcard to attach this statement to any link or queue in the system. A link or queue name can also be specified. The WMI Initialization Like Sample 27 (lines 14 to 20), E2KWatch.wsf must register the WMI class events based on the parameters in the configuration file. The next sections are a quick review of the WMI asynchronous event initialization for each class used in the E2KWatch.wsf script. The WMI Win32_Service Class The list of services to be monitored is stored in an array initialized by the ReadConfigurationFile function. The script checks the array size (Sample 28, line 360). If it is greater than zero, the array contains at least one service to monitor. All the initialization procedures work the same way. Lines 361 and 362 create an object to associate the event handler routine label (Win32_ServiceSINK_) to the desired class (Win32_Service). Lines 363 to 366 execute the WMI registration. There are four things to note in these lines: The data selection statement, "Select *" (line 364). In this case, the script retrieves all the data available from the WMI class. The __InstanceModificationEvent statement, which tells WMI to send a notification whenever an instance of a class is modified (line 364). The "WITHIN" statement (line 364) with the cWMI_Within_Win32_Service constant (line 365). This statement specifies the polling interval at which the script requires change notifications for a class. This "WITHIN" statement is mandatory because the Win32_Service WMI class does not have an event provider. The cWMI_Within_Win32_Service constant is defined at the beginning of the script. The condition statement, "TargetInstance isa 'Win32_Service'" (line 366), which specifies the class to be monitored. Most of the initialization process samples use the same type of statements. The only exception is the Win32_NTLogEvent.
See Sample 31 for more information. The major coding difference between the samples is the condition statement. In Sample 28 (line 366), the statement only concerns the class itself. In this case, any change to an instance of the specified class (Win32_Service) triggers an event. Sample 28 The WMI Win32_Service class asynchronous event registration ...: ...: 359:' Watch the Win32_Service. 360:If UBound (strServiceName) Then 361: Set objWin32_ServiceSink = _ WScript.CreateObject("WbemScripting.SWbemSink", "Win32_ServiceSINK_") 362: 363: objWMIService.ExecNotificationQueryAsync objWin32_ServiceSink, _ 364: "Select * FROM __InstanceModificationEvent WITHIN " & _ 365: cWMI_Within_Win32_Service & " Where " & _ 366: "TargetInstance isa 'Win32_Service'" ...: 370:End If ...: ...: Although it is possible to specify the service status in the condition statement, E2KWatch does so in the event handler routine. This is only to illustrate that any change to a service triggers an event, even if the service is not mentioned in the configuration file. The event handler, based on the parameters specified in the configuration file, filters the event to determine what actions to take. The same concept applies to other event registrations and handlers. This means that the filtering is determined at the script level. For example, if you want WMI to filter the event, line 366 must be changed to: ...: ...: 363: objWMIService.ExecNotificationQueryAsync objWin32_ServiceSink, _ 364: "Select * FROM __InstanceModificationEvent WITHIN " & _ 365: cWMI_Within_Win32_Service & " Where " & _ 366: "TargetInstance isa 'Win32_Service' And TargetInstance.State='Stopped'" ...: ...: In this case, a WMI event will be triggered only if a service state is modified to a "stopped" state. The filtering is completed by WMI and not by the script. Many other properties can be used in the condition, such as the service name, the start mode, and so on.
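The two registrations above differ only in the condition appended after the Where clause. Outside of VBScript, the string assembly can be sketched in Python; the helper name and the WITHIN value below are illustrative assumptions, not part of E2KWatch.wsf:

```python
WITHIN_WIN32_SERVICE = 10  # hypothetical polling interval in seconds

def build_notification_query(class_name, within, condition=None):
    """Compose a WQL notification query of the kind passed to
    ExecNotificationQueryAsync in Sample 28."""
    query = ("Select * FROM __InstanceModificationEvent WITHIN %d "
             "Where TargetInstance isa '%s'" % (within, class_name))
    if condition:
        # Script-level filtering can instead be pushed into the query itself.
        query += " And " + condition
    return query

# Unfiltered form: every instance change is reported and the event
# handler does the filtering.
q1 = build_notification_query("Win32_Service", WITHIN_WIN32_SERVICE)
# Filtered at the WMI level, as in the modified line 366:
q2 = build_notification_query("Win32_Service", WITHIN_WIN32_SERVICE,
                              "TargetInstance.State='Stopped'")
```

Either form yields a single WQL string; the trade-off is only where the filtering work is done, as the text explains.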
For more information about available classes and their associated properties, see the WMI SDK. The WMI Win32_Processor Class The Win32_Processor class uses the exact same piece of code as the Win32_Service class. All the remarks made for Win32_Service are valid. Only the properties retrieved are different for each class. See the WMI SDK for more information. Sample 29 The WMI Win32_Processor class asynchronous event registration ...: ...: 372:' Watch the Win32_Processor. 373:If UBound (strCPUDeviceID) Then 374: Set objWin32_ProcessorSink = _ WScript.CreateObject ("WbemScripting.SWbemSink","Win32_ProcessorSINK_") 375: 376: objWMIService.ExecNotificationQueryAsync objWin32_ProcessorSink, _ 377: "Select * FROM __InstanceModificationEvent WITHIN " & _ 378: cWMI_Within_Win32_Processor & " Where " & _ 379: "TargetInstance isa 'Win32_Processor'" ...: 383:End If ...: ...: The WMI Win32_LogicalDisk Class The Win32_LogicalDisk class uses the same piece of code as the Win32_Service class and the Win32_Processor class (see Sample 28), and the same remarks apply. Note the extension of the condition statement ("TargetInstance.DriveType=3") to receive WMI events only for hard disks (line 392). Sample 30 The WMI Win32_LogicalDisk class asynchronous event registration ...: ...: 385:' Watch the Win32_LogicalDisk. 386:If UBound (strLogicalDiskName) Then 387: Set objWin32_LogicalDiskSink = _ WScript.CreateObject("WbemScripting.SWbemSink","Win32_LogicalDiskSINK_") 388: 389: objWMIService.ExecNotificationQueryAsync objWin32_LogicalDiskSink, _ 390: "Select * FROM __InstanceModificationEvent WITHIN " & _ 391: cWMI_Within_Win32_LogicalDisk & " Where " & _ 392: "TargetInstance isa 'Win32_LogicalDisk' and TargetInstance.DriveType=3" ...: 396:End If ...: ...: The WMI Win32_NTLogEvent Class The Win32_NTLogEvent is a particular case and requires the following two changes: The __InstanceCreationEvent statement must be used instead of the __InstanceModificationEvent statement (line 403).
The __InstanceCreationEvent statement tells WMI to send a notification each time a creation event occurs in the specified class. This corresponds to a new event being logged in the Windows 2000 application event log. The "WITHIN" statement is not used, because the Win32_NTLogEvent WMI class has an event provider. Except for these two changes, the logic is the same as before. Sample 31 The WMI Win32_NTLogEvent class asynchronous event registration ...: ...: 398:' Watch the Win32_NTLogEvent. 399:If UBound (strEventLogSourceName) Then 400: Set objWin32_NTLogEventSink = _ WScript.CreateObject("WbemScripting.SWbemSink","Win32_NTLogEventSINK_") 401: 402: objWMIService.ExecNotificationQueryAsync objWin32_NTLogEventSink, _ 403: "Select * FROM __InstanceCreationEvent Where " & _ 404: "TargetInstance isa 'Win32_NTLogEvent'" ...: 408:End If ...: ...: The WMI CIM_DATAFile Class This WMI initialization is more complex than the previous one, but again the logic is mostly the same. The difference is in the condition statement. The CIM_DATAFile class monitors the size of the .edb and .stm store files. The script programs a WMI event for each store file defined in the configuration file (see Figure 7, lines 56 to 84). The condition statement for this class specifies the following: The file name for which the WMI event must occur (see Sample 32, lines 419 and 420): TargetInstance.Name='" & ReplaceBy(strDBPath (intIndice), "\", "\\") This line also calls the ReplaceBy function, which replaces any single backslash in the store file path with a double backslash. This is required for the file path syntax. That the event must occur only if the file size is greater than the size specified in the configuration file (lines 421 and 422): "TargetInstance.FileSize > " & 1024 * CLng(intStoreAlertSize (intIndice)) The file size limit is expressed in megabytes in the configuration file. This explains the "* 1024" statement. That the event must occur only if the file grows. 
This is the reason for the statement in line 423: TargetInstance.FileSize > PreviousInstance.FileSize PreviousInstance retrieves the characteristics of the object before the modification. With this statement, the condition will be true if the current file size is greater than the prior file size. Sample 32 The WMI CIM_DATAFile class asynchronous event registration ...: ...: 410:' Watch the CIM_DATAFile. 411:If UBound (strDNStoreDB) Then 412: Set objCIM_DATAFileSink = WScript.CreateObject _ 413: ("WbemScripting.SWbemSink", _ "E2K_StoreDBSink_") 414: ' Register an asynchronous event for the .edb file. 415: For intIndice = 1 to UBound (strDNStoreDB) 416: objWMIService.ExecNotificationQueryAsync objCIM_DATAFileSink, _ 417: "SELECT * From __InstanceModificationEvent Within " & _ 418: cWMI_Within_CIM_DataFile & " Where " & _ 419: "TargetInstance isa 'CIM_DATAFile' And TargetInstance.Name='" & _ 420: ReplaceBy(strDBPath (intIndice), "\", "\\") & "' And " & _ 421: "TargetInstance.FileSize > " & _ 422: 1024 * CLng(intStoreAlertSize (intIndice)) & _ 423: " And TargetInstance.FileSize > PreviousInstance.FileSize" ...: 429: Next 430:End If ...: ...: The WMI NTProcess Class Unlike standard WMI classes, the NTProcess class is created with the help of a Managed Object Format (.mof) file. See the WMI SDK for more information about .mof file usage. Except for the creation and compilation of the .mof file, the logic is the same as before. Because WMI must trigger an event when a process uses more than a certain amount of CPU time, the registration excludes the Idle and _Total processes from the selection. One represents the total free CPU time. The other represents the total CPU time used by all the processes.
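The exclusion of Idle and _Total amounts to two extra clauses joined onto the condition. A short Python sketch of assembling such a condition string (the function is illustrative and not part of the script):

```python
def ntprocess_condition(excluded=("Idle", "_Total")):
    """Build a condition of the kind shown in lines 453 to 455: watch
    every NTProcess instance except the pseudo-processes in 'excluded'."""
    clauses = ["TargetInstance isa 'NTProcess'"]
    for name in excluded:
        clauses.append("TargetInstance.Process != '%s'" % name)
    return " and ".join(clauses)

condition = ntprocess_condition()
```

Any further pseudo-process could be excluded the same way by adding it to the tuple.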
Sample 33 The WMI NTProcess class asynchronous event registration ...: ...: 447:If UBound (strProcessName) Then 448: Set objNTProcessSink = WScript.CreateObject("WbemScripting.SWbemSink", _ 449: "NTProcessSINK_") 450: objWMINTProcessService.ExecNotificationQueryAsync objNTProcessSink, _ 451: "Select * FROM __InstanceModificationEvent WITHIN " & _ 452: cWMI_Within_NTProcess & " Where " & _ 453: "TargetInstance isa 'NTProcess' and " & _ 454: "TargetInstance.Process != 'Idle' and " & _ 455: "TargetInstance.Process != '_Total'" 456: intRC = WriteToFile (objLogFileName, _ 457: "(WMIEventRegistration) 'NTProcess' " & _ "asynchronous events registered.") 458:End If ...: ...: The WMI ExchangeServerState Class This event registration uses the same logic as the Win32_Service class event registration. No change is made to the logic except the class itself. Any modification of this class will trigger an event. For more information about the ExchangeServerState class, including property descriptions, see the Exchange SDK. Sample 34 The WMI ExchangeServerState class asynchronous event registration ...: ...: 478:If UBound (strWMIExchangeServerName) Then 479: ' Watch the ExchangeServerState. 480: Set objE2K_ServerStateSink = _ WScript.CreateObject("WbemScripting.SWbemSink","E2K_ServerStateSink_") 481: 482: objWMIExchangeService.ExecNotificationQueryAsync _ objE2K_ServerStateSink, _ 483: "Select * FROM __InstanceModificationEvent WITHIN " & _ 484: cWMI_Within_E2K_ServerState & " Where " & _ 485: "TargetInstance isa 'ExchangeServerState'" ...: 488:End If ...: ...: The WMI ExchangeConnectorState Class This event registration uses the same logic as the ExchangeServerState class event registration. No change is made to the logic except the class itself. Any modification of this class will trigger an event. For more information about the ExchangeConnectorState class, including property descriptions, see the Exchange SDK.
Sample 35 The WMI ExchangeConnectorState class asynchronous event registration ...: ...: 490:If UBound (strWMIExchangeConnector) Then 491: ' Watch the ExchangeConnectorState. 492: Set objE2K_ConnectorStateSink = _ WScript.CreateObject("WbemScripting.SWbemSink", _ "E2K_ConnectorStateSink_") 493: 494: objWMIExchangeService.ExecNotificationQueryAsync _ objE2K_ConnectorStateSink, _ 495: "Select * FROM __InstanceModificationEvent WITHIN " & _ 496: cWMI_Within_E2K_Connector & " Where " & _ 497: "TargetInstance isa 'ExchangeConnectorState'" ...: 500:End If ...: ...: The WMI ExchangeLink Class This event registration uses the same logic as the ExchangeServerState class event registration, except for the condition statement, which includes the following: TargetInstance.IncreasingTime > PreviousInstance.IncreasingTime Like the CIM_DATAFile class, the ExchangeLink class requires that WMI trigger an event only if the IncreasingTime value increases. For more information about the ExchangeLink class, including property descriptions, see the Exchange SDK. For more information about the IncreasingTime value, see the Appendix. Sample 36 The WMI ExchangeLink class asynchronous event registration ...: ...: 502:If UBound (strWMIExchangeLink) Then 503: ' Watch the ExchangeLink. 504: Set objE2K_LinkSink = WScript.CreateObject _ ("WbemScripting.SWbemSink", _ 505: "E2K_LinkSink_") 506: objWMIExchangeService.ExecNotificationQueryAsync objE2K_LinkSink, _ 507: "Select * FROM __InstanceModificationEvent WITHIN " & _ 508: cWMI_Within_E2K_Link & " Where " & _ 509: "TargetInstance isa 'ExchangeLink' And " & _ 510: "TargetInstance.IncreasingTime > PreviousInstance.IncreasingTime" ...: 513:End If ...: ...: The WMI ExchangeQueue Class This event registration uses the same logic as the ExchangeLink class event registration. Refer to the preceding paragraph for more information.
Sample 37 The WMI ExchangeQueue class asynchronous event registration ...: ...: 515:If UBound (strWMIExchangeQueue) Then 516: ' Watch the ExchangeQueue. 517: Set objE2K_QueueSink = WScript.CreateObject _ ("WbemScripting.SWbemSink", _ 518: "E2K_QueueSink_") 519: objWMIExchangeService.ExecNotificationQueryAsync objE2K_QueueSink, _ 520: "Select * FROM __InstanceModificationEvent WITHIN " & _ 521: cWMI_Within_E2K_Queue & " Where " & _ 522: "TargetInstance isa 'ExchangeQueue' And " & _ 523: "TargetInstance.IncreasingTime > PreviousInstance.IncreasingTime" ...: 526:End If ...: ...: The WMI ExchangeClusterResource Class This event registration uses the same logic as the ExchangeServerState class event registration. For any change to the ExchangeClusterResource WMI class, WMI will trigger an event. For more information about the ExchangeClusterResource class, including property descriptions, see the Exchange SDK. Sample 38 The WMI ExchangeClusterResource class asynchronous event registration ...: ...: 528:If UBound (strWMIExchangeClusterVName) Then 529: ' Watch the ExchangeClusterResource. 530: Set objE2K_ClusterResourceSink = _ WScript.CreateObject ("WbemScripting.SWbemSink", _ "E2K_ClusterResourceSink_") 531: 532: objWMIExchangeService.ExecNotificationQueryAsync _ objE2K_ClusterResourceSink, _ 533: "Select * FROM __InstanceModificationEvent WITHIN " & _ 534: cWMI_Within_E2K_ClusterResource & " Where " & _ 535: "TargetInstance isa 'ExchangeClusterResource'" 536: intRC = WriteToFile (objLogFileName, "(WMIEventRegistration) " & _ 537: "'ExchangeClusterResource' asynchronous events registered.") 538: End If ...: ...: Event Routines The event routine is the function called by the Windows Management Instrumentation (WMI) infrastructure when the registered asynchronous event occurs. The function has two parts: The first part is defined during the event sink object creation and used by the WMI asynchronous event registration (see "The WMI Initialization").
The second part is defined by the WMI infrastructure and corresponds to the event triggered by WMI. In this case, OnObjectReady corresponds to an object that is available and provided by an asynchronous call. The object is considered available based on the condition statements in the asynchronous event registration. As explained for Sample 27, two parameters can be used in an asynchronous event function: objWbemObject and objWbemAsyncContext. The E2KWatch.wsf script uses objWbemObject to retrieve the properties of the object for the asynchronous event. The WMI Win32_Service Class Event Routine Because the configuration file may contain more than one service to be monitored, the script executes a "For" loop (Sample 39, lines 679 to 706) to retrieve the service name from the list that matches the one provided by WMI. Note There are many different ways to program WMI events. In this case, the script filters the event after the event is triggered (inside the event handler). This is because the WMI condition set during the WMI asynchronous event initialization is not restrictive (see Sample 28). You can also program a more restrictive condition during the initialization. For example, you can program WMI to trigger an event only for services listed in the configuration file with specific threshold states. More restrictive approaches offer greater performance, but require the registration of a WMI event for each service to be monitored. This makes the registration process more complex. For educational purposes, the first method is better because it is simpler and allows more events to occur. After a service match is found (line 681), the script processes the event, using the WMI_GetSvcStatus function to determine the state of the service. This function parses a property of the objWbemObject object (representing the service itself), and returns False if the service is stopped. If this is the case, the script calls the function WMI_LoopStartServiceRetry.
This function executes a loop to restart the service. Three attempts are made (this is a default value defined in the script). When the script exits from this function, the returned value is True if, during the loop, the service has been restarted. Otherwise the returned value is False. Sample 39 The WMI Win32_Service class asynchronous event handler ...: ...: 663:' ------------------------------------------------------------------------- 664:' Win32_ServiceSink 665:' ------------------------------------------------------------------------- 666:Sub Win32_ServiceSink_OnObjectReady (objWbemObject, objWbemAsyncContext) 667: ...: 677: boolSvcFound = False 678: 679: For intIndice = 1 To Ubound(strServiceName) 680: 681: If Ucase(strServiceName(intIndice)) = _ Ucase(objWbemObject.TargetInstance.Name) Then 682: boolSvcFound = True 683: 684: ' Get the status of the service. 685: boolSvcStatus = WMI_GetSvcStatus (objLogFileName, _ objWbemObject.TargetInstance) 686: 687: ' If the service is running, WMI_GetSvcStatus returns True into ' boolSvcStatus. 688: If boolSvcStatus Then 689: If strServiceRetryCounter(intIndice) > 0 Then 694: Win32_ServiceInfo objLogFileName, _ 695: objWbemObject.TargetInstance, _ 696: objWbemObject.PreviousInstance 697: End If 698: strServiceRetryCounter(intIndice) = 0 699: Else 700: boolSvcStatus = WMI_LoopStartServiceRetry (objLogFileName, _ 701: intIndice, _ 702: objWbemObject.TargetInstance, _ 703: boolSvcStatus) 704: End If 705: End If 706: Next 707: 708: If boolSvcFound = False Then ...: 713: intRC = WriteToFile(objLogFileName, _ 714: "(Win32_ServiceSink_OnObjectReady) No action taken, " & _ 715: "service is not on the list.") 716: Else 717: ' Check that the boolean returned from WMI_GetSvcStatus is True, ' indicating that the service is running. 
718: If Not boolSvcStatus Then 719: Win32_ServiceInfo objLogFileName, _ 720: objWbemObject.TargetInstance, _ 721: objWbemObject.PreviousInstance 722: End If 723: End If ...: 727: intRC = WriteToFile (objLogFileName, _ 728: "-------------------------------------------------------------------") 729: 730:End Sub ...: ...: If the service is not running, the information related to its current status is retrieved (line 719). All the functions in the form ClassNameInfo, such as the function Win32_ServiceInfo, do the same things: Retrieve the current information related to the class Format the data in HTML Send a message to the address specified in the configuration file (the MAILALERTTO parameter at line 10 of Figure 7) with the SendMessage function The event then terminates. The WMI Win32_Processor Class Event Routine The Win32_Processor event handler uses the same logic as the Win32_Service event handler to retrieve processor names that match those specified in the configuration file. The difference is the action taken. If the processor retrieved (Sample 40, lines 745 and 746) exceeds its associated maximum (lines 747 and 748), the function Win32_ProcessorInfo is invoked to format the related data (lines 749 to 751) and send a message. 
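The DeviceID match and threshold comparison just described reduce to a small predicate. A Python sketch of the equivalent logic (function name, dictionary layout, and sample thresholds are illustrative, not taken from the script):

```python
def processor_over_threshold(device_id, load_percentage, watched):
    """Mirror the Sample 40 test: a case-insensitive DeviceID match, and a
    configured maximum that is less than or equal to the current load."""
    for watched_id, max_load in watched.items():
        if watched_id.upper() == device_id.upper() and max_load <= load_percentage:
            return True
    return False

# Thresholds as given in the configuration file (Processor=CPUx; 80):
watched_cpus = {"CPU0": 80, "CPU1": 80}
alert = processor_over_threshold("cpu0", 92, watched_cpus)
```

When the predicate is true, the handler formats the data and sends the alert, exactly as the text describes for Win32_ProcessorInfo.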
Sample 40 The WMI Win32_Processor class asynchronous event handler ...: ...: 732:' ----------------------------------------------------------------------- 733:' Win32_ProcessorSink 734:' ----------------------------------------------------------------------- 735:Sub Win32_ProcessorSink_OnObjectReady (objWbemObject, _ objWbemAsyncContext) ...: 741: intRC = WriteToFile (objLogFileName, _ 742: "(Win32_ProcessorSink_OnObjectReady) Start Sink.") 743: 744: For intIndice = 1 To Ubound(strCPUDeviceID) 745: If Ucase(strCPUDeviceID(intIndice)) = _ 746: Ucase(objWbemObject.TargetInstance.DeviceID) And _ 747: Clng(intCPULoadPercentageMax(intIndice)) <= _ 748: Clng(objWbemObject.TargetInstance.LoadPercentage) Then 749: Win32_ProcessorInfo objLogFilename, _ 750: objWbemObject.TargetInstance, _ 751: objWbemObject.PreviousInstance 752: Exit For 753: End If 754: Next 755: 756: intRC = WriteToFile (objLogFileName, _ 757: "(Win32_ProcessorSink_OnObjectReady) End Sink.") 758: intRC = WriteToFile (objLogFileName, _ 759: "--------------------------------------------------------------") 760: 761:End Sub ...: ...: The WMI Win32_LogicalDisk Class Event Routine The Win32_LogicalDisk event handler uses the same logic as the Win32_Service event handler to retrieve matches based on the content of the configuration file. The Win32_LogicalDisk event handler is different in two ways: First, the percentage of free disk space is calculated before the logical disk name is filtered because the WMI class does not provide this value (Sample 41, lines 776 and 777). Second, during the loop (lines 779 to 789), if the free space percentage of a matching logical disk falls to or below the minimum set in lines 782 and 783, the function Win32_LogicalDiskInfo is invoked to format the related data (lines 784 to 786) and send a message. Note that WMI does not make any data calculations or correlations. The event handler performs these tasks (lines 776 and 777).
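Because Win32_LogicalDisk exposes only FreeSpace and Size, the percentage must be derived in the handler. A Python sketch of that calculation and the minimum test (the function name and sample figures are illustrative):

```python
def disk_below_minimum(free_space, size, minimum_free_percent):
    """Lines 776 and 777: the handler computes the percentage itself, then
    compares it against the configured minimum as in lines 782 and 783."""
    percentage_free = 100 * free_space / size
    return minimum_free_percent >= percentage_free

# A 40 GB volume with 10 GB free is 25 percent free, below a 50 percent minimum:
low = disk_below_minimum(10 * 2**30, 40 * 2**30, 50)
```

This also illustrates the text's point that WMI performs no data correlation: the raw properties are delivered, and the arithmetic belongs to the event handler.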
Sample 41 The WMI Win32_LogicalDisk class asynchronous event handler ...: ...: 763:' ------------------------------------------------------------------------ 764:' Win32_LogicalDiskSink 765:' ------------------------------------------------------------------------ 766:Sub Win32_LogicalDiskSink_OnObjectReady (objWbemObject, _ objWbemAsyncContext) ...: 773: intRC = WriteToFile (objLogFileName, _ 774: "(Win32_LogicalDiskSink_OnObjectReady) Start Sink.") 775: 776: intPercentageFree = 100 * _ 777: (objWbemObject.TargetInstance.FreeSpace / _ objWbemObject.TargetInstance.Size) 778: 779: For intIndice = 1 To Ubound(strLogicalDiskName) 780: If Ucase(strLogicalDiskName(intIndice)) = _ 781: Ucase(objWbemObject.TargetInstance.Name) And _ 782: CLng(intLogicalDiskFreeSpaceMin(intIndice)) >= _ 783: Clng(intPercentageFree) Then 784: Win32_LogicalDiskInfo objLogFileName, _ 785: objWbemObject.TargetInstance, _ 786: objWbemObject.PreviousInstance 787: Exit For 788: End If 789: Next 790: 791: intRC = WriteToFile (objLogFileName, _ 792: "(Win32_LogicalDiskSink_OnObjectReady) End Sink.") 793: intRC = WriteToFile (objLogFileName, _ 794: "------------------------------------------------------------------") 795: 796:End Sub ...: ...: The WMI Win32_NTLogEvent Class Event Routine The Win32_NTLogEvent event handler uses the same logic as the other event handlers, but with more sophisticated code for event filtering. The configuration file (Figure 7, lines 90 to 96) allows a wildcard, which lets you determine whether an event is significant to the event handler. The filtering process can be summarized as a pure string manipulation (Sample 42, lines 819 to 848). After the string comparisons and substitutions are made, the validation tests (lines 850 to 861) are executed to determine whether an alert must be sent by way of the Win32_NTLogEventInfo function (line 860). 
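The per-field wildcard substitution of Sample 42 condenses to a simple rule: a configured value of "*" matches anything, and everything else must match case-insensitively. A Python sketch of that filter (not part of E2KWatch.wsf; field names follow the Win32_NTLogEvent properties used in the sample):

```python
def matches(criteria, event):
    """Sample 42's filter, condensed: each configured field must equal the
    event's field (case-insensitive), except '*' which matches any value."""
    return all(want == "*" or str(want).upper() == str(event[field]).upper()
               for field, want in criteria.items())

# One line of the configuration file, expressed as a dictionary:
criterion = {"SourceName": "MSExchangeSA", "EventCode": "*",
             "Logfile": "Application", "Type": "*"}
# A logged event as delivered by WMI (values illustrative):
event = {"SourceName": "MSEXCHANGESA", "EventCode": 9175,
         "Logfile": "Application", "Type": "Error"}
```

When `matches` returns true for any configured criterion, the handler raises the alert, as the validation tests in lines 850 to 861 do.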
Sample 42 The WMI Win32_NTLogEvent class asynchronous event handler ...: ...: 798:' ------------------------------------------------------------------------ 799:' Win32_NTLogEventSink 800:' ------------------------------------------------------------------------ 801:Sub Win32_NTLogEventSink_OnObjectReady (objWbemObject, objWbemAsyncContext) ...: 814: intRC = WriteToFile (objLogFileName, _ 815: "(Win32_NTLogEventSink_OnObjectReady) Start Sink.") 816: 817: For intIndice = 1 To Ubound(strEventLogSourceName) 818: 819: If strEventLogSourceName(intIndice) = "*" Then 820: strTempEventLogSourceName = _ CheckIfNull(objWbemObject.TargetInstance.SourceName) 821: Else 822: strTempEventLogSourceName = strEventLogSourceName(intIndice) 823: End If 824: 825: If intEventLogEventCode(intIndice) = "*" Then 826: intTempEventLogEventCode = _ CheckIfNull(objWbemObject.TargetInstance.EventCode) 827: Else 828: intTempEventLogEventCode = intEventLogEventCode(intIndice) 829: End If 830: 831: If strEventLogCategoryString(intIndice) = "*" Then 832: strTempEventLogCategoryString = _ CheckIfNull(objWbemObject.TargetInstance.CategoryString) 833: Else 834: ' Add a CRLF because the original objWbemObject always has a CRLF. 
835: strTempEventLogCategoryString = _ strEventLogCategoryString(intIndice) & vbCRLF 836: End If 837: 838: If strEventLogLogfile(intIndice) = "*" Then 839: strTempEventLogLogfile = _ CheckIfNull(objWbemObject.TargetInstance.Logfile) 840: Else 841: strTempEventLogLogfile = strEventLogLogfile(intIndice) 842: End If 843: 844: If strEventLogType(intIndice) = "*" Then 845: strTempEventLogType = _ CheckIfNull(objWbemObject.TargetInstance.Type) 846: Else 847: strTempEventLogType = strEventLogType(intIndice) 848: End If 849: 850: If Ucase(strTempEventLogSourceName) = _ 851: Ucase(CheckIfNull(objWbemObject.TargetInstance.SourceName)) And _ 852: CLng(intTempEventLogEventCode) = _ 853: CLng(CheckIfNull(objWbemObject.TargetInstance.EventCode)) And _ 854: Ucase(strTempEventLogCategoryString) = _ 855: Ucase(CheckIfNull(objWbemObject.TargetInstance.CategoryString)) And _ 856: Ucase(strTempEventLogLogfile) = _ 857: Ucase(CheckIfNull(objWbemObject.TargetInstance.Logfile)) And _ 858: Ucase(strTempEventLogType) = _ 859: Ucase(CheckIfNull(objWbemObject.TargetInstance.Type)) Then 860: Win32_NTLogEventInfo objLogFileName, objWbemObject.TargetInstance 861: Exit For 862: End If 863: Next 864: 865: intRC = WriteToFile (objLogFileName, _ 866: "(Win32_NTLogEventSink_OnObjectReady) End Sink.") 867: intRC = WriteToFile (objLogFileName, _ 868: "---------------------------------------------------------------------") 869: 870:End Sub ...: ...: The WMI CIM_DATAFile Class Event Routine Again, the CIM_DATAFile event handler uses the same logic as other event handlers, but it takes some additional actions. The script monitors the size of the specific files that represent the store. As explained previously (see Figure 7, lines 56 to 84), two thresholds are specified in the configuration file: When the first threshold is reached, an alert is sent by way of the E2K_StoreDBInfo function (Sample 43, lines 965 to 967).
When the second threshold is reached (line 888), the DismountStoreDB function automatically dismounts the store (line 954). Before dismounting the store, the script sends alerts with the WMI information available (lines 889 to 942) and waits for a time-out (line 943). Sample 43 The WMI CIM_DATAFile class asynchronous event handler ...: ...: 873:' ------------------------------------------------------------------------- 874:' E2K_StoreDBSink 875:' ------------------------------------------------------------------------- 876:Sub E2K_StoreDBSink_OnObjectReady (objWbemObject, objWbemAsyncContext) ...: 882: intRC = WriteToFile (objLogFileName, _ 883: "(E2K_StoreDBSink_OnObjectReady) Start Sink.") 884: 885: For intIndice = 1 To Ubound(strDNStoreDB) 886: If UCase(objWbemObject.TargetInstance.Name) = _ UCase(strDBPath (intIndice)) Then 887: 888: If intStoreDismountSize (intIndice) < _ objWbemObject.TargetInstance.FileSize Then 889: strHTML = cStartREDHTML & "WARNING:" & _ 890: cBacktoNormalHTML & "<BR>Dismounting store '" & _ 891: strStoreName(intIndice) & _ 892: "' (" & strStoreDBClass(intIndice) & ") in " & _ 893: cWaitBeforeDBDismount / 1000 & "s. 
on " & _ 894: strComputerName & " (E2K_StoreDBInfo)" 895: 896: strHTML = cStartHTMLTableTitle & strHTML & cEndHTMLTableTitle 897: strHTML = strHTML & cStartHTMLTableData 898: strHTML = strHTML & FormatHTML ("", _ 899: "<B>Current State</B>", _ 900: "<B>Previous State</B>") 901: strHTML = strHTML & FormatHTML ("Name: ", _ 902: objWbemObject.TargetInstance.Name, _ 903: objWbemObject.PreviousInstance.Name) 904: strHTML = strHTML & _ 905: FormatHTML ("Filesize: ", _ 906: objWbemObject.TargetInstance.FileSize / 1024, _ 907: objWbemObject.PreviousInstance.FileSize / 1024) 908: strHTML = strHTML & FormatHTML ("Encrypted: ", _ 909: objWbemObject.TargetInstance.Encrypted, _ 910: objWbemObject.PreviousInstance.Encrypted) 911: strHTML = strHTML & FormatHTML ("FileType: ", _ 912: objWbemObject.TargetInstance.FileType, _ 913: objWbemObject.PreviousInstance.FileType) 914: strHTML = strHTML & FormatHTML ("LastAccessed: ", _ 915: objWbemObject.TargetInstance.LastAccessed, _ 916: objWbemObject.PreviousInstance.LastAccessed) 917: strHTML = strHTML & FormatHTML ("LastModified: ", _ 918: objWbemObject.TargetInstance.LastModified, _ 919: objWbemObject.PreviousInstance.LastModified) 920: strHTML = strHTML & cEndHTMLTableData 921: 922: AlertHandler objLogFileName, _ 923: "(E2K_StoreDBSink)", _ 924: "Informationstore '" & strStoreName(intIndice) & _ 925: "' (Size=" & objWbemObject.TargetInstance.FileSize / 1024 & _ 926: " Kbytes) will be dismounted in " & _ 927: cWaitBeforeDBDismount / 1000 & ".", _ 928: strHTML, _ 929: cMailAlert, _ 930: cEventLogAlert, _ 931: cCommandAlert, _ 932: cPopupAlert 933: 934: AlertHandler objLogFileName, _ 935: "(E2K_StoreDBSink)", _ 936: "Pausing " & cWaitBeforeDBDismount / 1000 & _ "s.
before action.", _ 937: strHTML, _ 938: False, _ 939: False, _ 940: False, _ 941: False 942: 943: WScript.Sleep (cWaitBeforeDBDismount) 944: 945: AlertHandler objLogFileName, _ 946: "(E2K_StoreDBSink)", _ 947: "Dismounting store '" & _ strStoreName(intIndice) & "'.", _ 948: strHTML, _ 949: False, _ 950: False, _ 951: False, _ 952: False 953: 954: DismountStoreDB (strDNStoreDB (intIndice)) 955: 956: AlertHandler objLogFileName, _ 957: "(E2K_StoreDBSink)", _ 958: "Store '" & strStoreName(intIndice) & "' dismounted.", _ 959: strHTML, _ 960: cMailAlert, _ 961: cEventLogAlert, _ 962: cCommandAlert, _ 963: cPopupAlert 964: Else 965: E2K_StoreDBInfo objLogFileName, _ 966: objWbemObject.TargetInstance, _ 967: objWbemObject.PreviousInstance 968: End If 969: 970: End If 971:Next 972: 973:intRC = WriteToFile (objLogFileName, _ "(E2K_StoreDBSink_OnObjectReady) End Sink.") 974:intRC = WriteToFile (objLogFileName, _ 975: "---------------------------------------------------------------") 976: 977:End Sub ...: ...: The WMI NTProcess Class Event Routine The NTProcess event handler uses the same logic as the Win32_Processor event handler (see Sample 40) to retrieve process names that match those specified in the configuration file. The difference is that the NTProcess event handler allows a wildcard. The string manipulation logic is the same as that in Sample 42 for the Win32_NTLogEvent class. If a retrieved process (Sample 44, lines 1151 and 1152) exceeds its associated maximum (lines 1153 and 1154), the function Win32_NTProcessInfo is invoked to format the related data (lines 1155 to 1157) and send a message.
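Combining the wildcard substitution with the CPU comparison, the NTProcess test reduces to a few lines. A Python sketch of the equivalent logic (function name and the list layout are illustrative, not the script's):

```python
def process_over_threshold(process, cpu_percent, watched):
    """Mirror Sample 44: '*' matches any process name; alert when the
    configured maximum is less than or equal to the current CPU usage."""
    for name, max_percent in watched:
        # The wildcard substitution: '*' takes on the event's own process name.
        effective = process if name == "*" else name
        if effective.upper() == process.upper() and max_percent <= cpu_percent:
            return True
    return False

# Thresholds as in the configuration file (Process=STORE; 50 and Process=*; 80):
watched_processes = [("STORE", 50), ("*", 80)]
```

A named entry catches its process at a low threshold, while the wildcard entry catches any other process only above the higher one.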
Sample 44 The WMI NTProcess class asynchronous event handler ....: ....: 1131:' ------------------------------------------------------------------- 1132:' NTProcessSink 1133:' ------------------------------------------------------------------- 1134:Sub NTProcessSink_OnObjectReady (objWbemObject, objWbemAsyncContext) 1135: 1136:Dim strTempProcessName 1137: 1138: On Error Resume Next 1139: 1140: intRC = WriteToFile (objLogFileName, _ 1141: "(NTProcessSink_OnObjectReady) Start Sink.") 1142: 1143: For intIndice = 1 To Ubound(strProcessName) 1144: 1145: If strProcessName(intIndice) = "*" Then 1146: strTempProcessName = objWbemObject.TargetInstance.Process 1147: Else 1148: strTempProcessName = strProcessName(intIndice) 1149: End If 1150: 1151: If Ucase(strTempProcessName) = _ 1152: Ucase(objWbemObject.TargetInstance.Process) And _ 1153: CLng(intProcessCPUPercentageMax(intIndice)) <= _ 1154: Clng(objWbemObject.TargetInstance.PercentageProcessTime) Then 1155: Win32_NTProcessInfo objLogFileName, _ 1156: objWbemObject.TargetInstance, _ 1157: objWbemObject.PreviousInstance 1158: Exit For 1159: End If 1160: Next 1161: 1162: intRC = WriteToFile (objLogFileName, _ 1163: "(NTProcessSink_OnObjectReady) End Sink.") 1164: intRC = WriteToFile (objLogFileName, _ 1165: "-----------------------------------------------------------") ....: ....: The WMI ExchangeServerState Class Event Routine The ExchangeServerState event handler logic is the same as the other WMI event handlers. Any modification of an instance of the class triggers a WMI event (see Sample 45). When the loop (lines 990 to 997) matches the triggered event, the E2K_ServerStateInfo function is called (lines 993 to 995) to format the data and send a message. 
Sample 45 The WMI ExchangeServerState class asynchronous event handler

...:
...:
 980:' -----------------------------------------------------------------------
 981:' E2K_ServerStateSink
 982:' -----------------------------------------------------------------------
 983:Sub E2K_ServerStateSink_OnObjectReady (objWbemObject, objWbemAsyncContext)
 984:
 985:  On Error Resume Next
 986:
 987:  intRC = WriteToFile (objLogFileName, _
 988:          "(E2K_ServerStateSink_OnObjectReady) Start Sink.")
 989:
 990:  For intIndice = 1 To Ubound(strWMIExchangeServerName)
 991:      If Ucase(strWMIExchangeServerName(intIndice)) = _
 992:         Ucase(objWbemObject.TargetInstance.Name) Then
 993:         E2K_ServerStateInfo objLogFileName, _
 994:                             objWbemObject.TargetInstance, _
 995:                             objWbemObject.PreviousInstance
 996:      End If
 997:  Next
 998:
 999:  intRC = WriteToFile (objLogFileName, _
1000:          "(E2K_ServerStateSink_OnObjectReady) End Sink.")
1001:  intRC = WriteToFile (objLogFileName, _
1002:          "-----------------------------------------------------------")
1003:
1004:End Sub
....:
....:

The WMI ExchangeConnectorState Class Event Routine

The event handler for the ExchangeConnectorState class uses exactly the same logic as the event handler for ExchangeServerState. See "The WMI ExchangeServerState Class" for more information.
Sample 46 The WMI ExchangeConnectorState class asynchronous event handler

....:
....:
1006:' ------------------------------------------------------------------------
1007:' E2K_ConnectorStateSink
1008:' ------------------------------------------------------------------------
1009:Sub E2K_ConnectorStateSink_OnObjectReady (objWbemObject, _
                                              objWbemAsyncContext)
1010:
1011:  On Error Resume Next
1012:
1013:  intRC = WriteToFile (objLogFileName, _
1014:          "(E2K_ConnectorStateSink_OnObjectReady) Start Sink.")
1015:
1016:  For intIndice = 1 To Ubound(strWMIExchangeConnector)
1017:      If Ucase(strWMIExchangeConnector(intIndice)) = _
1018:         Ucase(objWbemObject.TargetInstance.Name) Then
1019:         E2K_ConnectorStateInfo objLogFileName, _
1020:                                objWbemObject.TargetInstance, _
1021:                                objWbemObject.PreviousInstance
1022:      End If
1023:  Next
1024:
1025:  intRC = WriteToFile (objLogFileName, _
1026:          "(E2K_ConnectorStateSink_OnObjectReady) End Sink.")
1027:  intRC = WriteToFile (objLogFileName, _
1028:          "-----------------------------------------------------------")
1029:
1030: End Sub
....:
....:

The WMI ExchangeLink Class Event Routine

The ExchangeLink class event handler allows a wildcard for the link name. It uses the same string manipulation logic as Sample 42 and Sample 44. The E2K_LinkInfo function (see Sample 47) is called if the following conditions are true:

- The link name matches a link name specified in the configuration file (see Figure 7, line 115).
- The increasingTime value reaches the threshold specified in the configuration file.
Sample 47 The WMI ExchangeLink class asynchronous event handler

....:
....:
1032:' ------------------------------------------------------------------------
1033:' E2K_LinkSink
1034:' ------------------------------------------------------------------------
1035:Sub E2K_LinkSink_OnObjectReady (objWbemObject, objWbemAsyncContext)
1036:
1037:Dim strTempWMILinkName
1038:
1039:  On Error Resume Next
1040:
1041:  intRC = WriteToFile (objLogFileName, _
1042:          "(E2K_LinkSink_OnObjectReady) Start Sink.")
1043:
1044:  For intIndice = 1 To Ubound(strWMIExchangeLink)
1045:      If strWMIExchangeLink(intIndice) = "*" Then
1046:         strTempWMILinkName = objWbemObject.TargetInstance.LinkName
1047:      Else
1048:         strTempWMILinkName = strWMIExchangeLink(intIndice)
1049:      End If
1050:
1051:      If UCase(strTempWMILinkName) = _
1052:         UCase (objWbemObject.TargetInstance.LinkName) And _
1053:         CLng(intWMIExchangeLinkIncreasingTime(intIndice)) <= _
1054:         CLng(objWbemObject.TargetInstance.IncreasingTime) Then
1055:         E2K_LinkInfo objLogFileName, _
1056:                      objWbemObject.TargetInstance, _
1057:                      objWbemObject.PreviousInstance
1058:      End If
1059:  Next
1060:
1061:  intRC = WriteToFile (objLogFileName, _
1062:          "(E2K_LinkSink_OnObjectReady) End Sink.")
1063:  intRC = WriteToFile (objLogFileName, _
1064:          "-------------------------------------------------------")
1065:
1066:End Sub
....:
....:

The WMI ExchangeQueue Class Event Routine

The event handler for the ExchangeQueue class uses the same logic as that for the ExchangeLink class. See "The WMI ExchangeLink Class" for more information.
Sample 48 The WMI ExchangeQueue class asynchronous event handler

....:
....:
1068:' ------------------------------------------------------------------------
1069:' E2K_QueueSink
1070:' ------------------------------------------------------------------------
1071:Sub E2K_QueueSink_OnObjectReady (objWbemObject, objWbemAsyncContext)
1072:
1073:Dim strTempWMIQueueName
1074:
1075:  On Error Resume Next
1076:
1077:  intRC = WriteToFile (objLogFileName, _
1078:          "(E2K_QueueSink_OnObjectReady) Start Sink.")
1079:
1080:  For intIndice = 1 To Ubound(strWMIExchangeQueue)
1081:      If strWMIExchangeQueue(intIndice) = "*" Then
1082:         strTempWMIQueueName = objWbemObject.TargetInstance.QueueName
1083:      Else
1084:         strTempWMIQueueName = strWMIExchangeQueue(intIndice)
1085:      End If
1086:
1087:      If Ucase(strTempWMIQueueName) = _
1088:         UCase (objWbemObject.TargetInstance.QueueName) And _
1089:         CLng(intWMIExchangeQueueIncreasingTime(intIndice)) <= _
1090:         CLng(objWbemObject.TargetInstance.IncreasingTime) Then
1091:         E2K_QueueInfo objLogFileName, _
1092:                       objWbemObject.TargetInstance, _
1093:                       objWbemObject.PreviousInstance
1094:      End If
1095:  Next
1096:
1097:  intRC = WriteToFile (objLogFileName, _
1098:          "(E2K_QueueSink_OnObjectReady) End Sink.")
1099:  intRC = WriteToFile (objLogFileName, _
1100:          "-----------------------------------------------------------")
1101:
1102:End Sub
....:
....:

The WMI ExchangeClusterResource Class Event Routine

The event handler for the ExchangeClusterResource class uses the same logic as that for the ExchangeServerState class. See "The WMI ExchangeServerState Class" for more information. Note that this event is relevant only if you run the script with an Exchange 2000 cluster.
Sample 49 The WMI ExchangeClusterResource class asynchronous event handler

....:
....:
1104:' ------------------------------------------------------------------------
1105:' E2K_ClusterResourceSink
1106:' ------------------------------------------------------------------------
1107:Sub E2K_ClusterResourceSink_OnObjectReady (objWbemObject, _
                                               objWbemAsyncContext)
1108:
1109:  On Error Resume Next
1110:
1111:  intRC = WriteToFile (objLogFileName, _
1112:          "(E2K_ClusterResourceSink_OnObjectReady) Start Sink.")
1113:
1114:  For intIndice = 1 To Ubound(strWMIExchangeClusterVName)
1115:      If Ucase(strWMIExchangeClusterVName(intIndice)) = _
1116:         Ucase(objWbemObject.TargetInstance.VirtualMachine) Then
1117:         E2K_ClusterResourceInfo objLogFileName, _
1118:                                 objWbemObject.TargetInstance, _
1119:                                 objWbemObject.PreviousInstance
1120:      End If
1121:  Next
1122:
1123:  intRC = WriteToFile (objLogFileName, _
1124:          "(E2K_ClusterResourceSink_OnObjectReady) End Sink.")
1125:  intRC = WriteToFile (objLogFileName, _
1126:          "-----------------------------------------------------------")
1127:
1128:End Sub
....:
....:

Additional Functions Not Directly Related to Exchange 2000 Management

The E2KWatch script includes a large set of helper functions that allow you to use the script in a production environment. Helper functions include the following.

Possible Enhancements

The E2KWatch script leaves plenty of room for improvement. Possible adaptations are infinite and will probably never perfectly match everyone's needs. Nevertheless, here are some general functions that can be added to the script:

- The script sends a message if CPU usage (process or processor) reaches a certain threshold. A helpful enhancement would be to send an alert only if the CPU usage threshold is maintained for a certain period of time. In a real business environment, it is normal to have some CPU peak usage without any problems. On the other hand, having a CPU usage of 90 percent for 15 minutes or more can indicate a serious performance problem.
The script, as it is here, would send many alerts, even for normal peak usage.

- As the script is here, if a monitored service is changed from a manual or automatic startup mode to a disabled startup mode, the script tries to start the service anyway. This results in an error because the service manager does not allow a disabled service to start. Another possible enhancement would remedy this situation.
- The script saves all server activity in a log file. In its current version, this file is never purged. A helpful enhancement would use the WMI event timer to purge or archive the log each day.
- The configuration file holds all the parameters for the script. When a change is made to this file, it is necessary to restart the script to put the changes into effect. You might want to program an asynchronous event with the CIM_DATAFile class to monitor the configuration file. With this enhancement, when a change is made to the file, the script cancels all registered WMI asynchronous events, substitutes data from the updated configuration file, and reprograms all the events.
- The script monitors the links with the ExchangeLink class based on the increasingTime value. If the link is a dial-up link, the script sends an alert because the number of messages in the queue increases regularly. But in the case of a scheduled link, this is normal because the link waits for the next scheduled time to establish the connection. Another enhancement would allow the script to take this particularity into account.

The following screen shot illustrates what happens when you run the E2KWatch script.

Conclusion

Microsoft® Exchange 2000 Server offers a set of concepts and technologies oriented to the future. These technologies are based on industry standards such as LDAP and X.500, WebDAV, XML, IMAP4, SMTP, X.400, and so on. Learning about these standards is a big step toward learning Microsoft® Windows® 2000 and Exchange 2000.
By adding COM abstraction layers based on these standards (for instance, CDO for Exchange 2000 or WebDAV), Microsoft has moved its Exchange technology to Internet standards. Knowledge of the COM object model in environments like Windows 2000 and Exchange 2000 is key to developing new applications. From an enterprise management point of view, Windows Management Instrumentation (WMI) and CDO for Exchange Management (CDOEXM) are two key players. Developing under Windows 2000 and Exchange 2000 shifts the challenge from learning the programming language to learning the industry standards and the COM abstraction layers.

Appendix

An Overview of Exchange 2000 WMI Providers

Exchange 2000 setup adds three WMI providers: the Exchange Routing Table provider, the Exchange Queue provider, and the Exchange Cluster provider. These providers allow any application to access Exchange 2000 management information. Each of the three providers relates to a specific component set of Exchange 2000:

- The Exchange Routing Table provider runs on top of the routing API.
- The Exchange Queue provider runs on top of the Queue API.
- The Exchange Cluster provider runs on top of the Cluster API.

These three providers expose a set of classes available from the root\CIMV2\Applications\Exchange WMI namespace. Exchange 2000 Service Pack 2 adds two new WMI providers:

- The Exchange Message Tracking provider runs on top of the message tracking API.
- The Exchange DS Access provider runs on top of the DSAccess API.

These two providers expose a set of classes available from the root\MicrosoftExchangeV2 WMI namespace. These providers deliver information to help diagnose system problems and notify the appropriate system components to solve them. Figure 9 is a diagram of the WMI classes.

The Exchange 2000 Routing Table Provider

The Exchange Routing Table WMI provider works on top of the Exchange Transport Core.
The purpose of this provider is:

- To publish the status of the local Exchange 2000 server in the routing table, based on the monitoring conditions configured with the Exchange System Manager (ESM).
- To retrieve the states of other Exchange 2000 servers in the organization from the routing table, based on the monitoring conditions configured with the ESM.
- To publish the states of the Exchange 2000 local connectors in the routing table.
- To retrieve the states of other Exchange 2000 connectors in the organization from the routing table.

To perform these tasks, the Routing Table provider implements a path to and from the routing table for the system attendant only. Every Exchange 2000 server in the organization publishes its states in the routing table in this way. This means that it is possible to retrieve the states of all the servers and connectors in the organization from any individual server. To distinguish between server information and connector information, the Routing Table provider implements two WMI classes: ExchangeServerState and ExchangeConnectorState. Each of these classes has a specific set of properties.

The ExchangeServerState Class

The states retrieved from this class are based on the monitoring conditions specified with the ESM. With the ESM (Figure 10), you can define a monitoring condition with a corresponding state for:

- Queues
- Disk space available on any disk in the system
- Memory usage
- The CPU usage for a "warning" or a "critical" threshold
- Other Windows 2000 services relevant to Exchange 2000

Figure 10 Exchange System Manager configuration settings for monitoring

Figure 11 Exchange System Manager configuration settings for CPU monitoring

The ESM monitoring configuration user interface is saved as a string in the Configuration Naming Context of Microsoft® Active Directory®. No direct programming of the WMI infrastructure is done from the ESM.
The System Attendant programs WMI by reading the same string from the Active Directory Configuration Naming Context. The ESM accesses the Exchange Routing Table provider through the WMI MGMT layer (see Figure 9) and displays the ServerStateString property of the ExchangeServerState class and the IsUp property of the ExchangeConnectorState class. The ESM display is refreshed at regular intervals or can be refreshed by the administrator. This is what you see in the ESM view in Figure 12. For complete information about the ExchangeServerState WMI class, including property descriptions, see the Exchange SDK at.

The ExchangeConnectorState Class

The ExchangeConnectorState class is based on the same principle as the ExchangeServerState class. It monitors the connectors in an Exchange organization and publishes their states in the routing table. When this class is interrogated from the ESM or by a script, it offers a list of the connectors available from the routing table with their corresponding states. For complete information about the ExchangeConnectorState WMI class, including property descriptions, see the Exchange SDK at.

The Exchange 2000 WMI Queue Provider

The WMI Queue provider is based on the Queue API. It does not use any mechanism to publish or retrieve information from the routing table with the help of the WMI Routing Table provider. The scope of this provider is local to the Exchange 2000 server. The provider implements two WMI classes:

- The ExchangeLink class to retrieve information about the Exchange links directly from the Queue API.
- The ExchangeQueue class to retrieve information about the Exchange queues directly from the Queue API.

From a scripting and management perspective, the most interesting data for both classes is the IncreasingTime property (see Sample 3 and Sample 4). The Queue API does not provide this property directly. Instead, it is calculated by the WMI Queue provider.
The IncreasingTime value represents the length of time during which the number of messages in the link or queue has not decreased. The time is returned in milliseconds. To monitor this property, it is important to understand how the calculation is made. Two important factors influence how to monitor the value:

- The sampling frequency (the Tx interval in Figure 13)
- The IncreasingTime value threshold used to determine a critical situation

Each time a sample is taken, the IncreasingTime property is calculated and a new value is determined. The first polling interval (A in Figure 13) always returns an IncreasingTime value of zero because it is the first sample read. But when the second polling interval occurs (A' in Figure 13), the IncreasingTime value will be equal to the T1 interval because the number of messages has not decreased between A and A'. When the next polling interval occurs (B in Figure 13), the IncreasingTime value will be equal to zero because the number of messages has decreased between A' and B. Note that the number of messages has decreased below the prior measured level. If the number of messages does not decrease below the prior level, the IncreasingTime value is equal to T1+T2. If the number of messages decreases to below the prior level, then the IncreasingTime value is set to zero.

For the same situation, if the polling interval is changed to measure at point A for the first sample and at point C for the second sample, the IncreasingTime value will not be equal to zero because the number of messages has not been detected as decreasing below the number of messages measured at point A. The IncreasingTime value will be equal to T1+T2+T3+T4.

Furthermore, if the polling is done between:

- A and B: IncreasingTime is equal to T1+T2.
- B and C: IncreasingTime is equal to T1+T2+T3+T4.
- C and D: IncreasingTime is equal to T1+T2+T3+T4+T5+T6.
- D and E: IncreasingTime is equal to zero.

If the polling is done between:

- A and A': IncreasingTime is equal to T1.
- A' and B: IncreasingTime is equal to zero.
- B and B': IncreasingTime is equal to T3.
- B' and C: IncreasingTime is equal to T3+T4.
- C and C': IncreasingTime is equal to T3+T4+T5.
- C' and D: IncreasingTime is equal to T3+T4+T5+T6.
- D and D': IncreasingTime is equal to zero.
- D' and E: IncreasingTime is equal to T8.
- E and E': IncreasingTime is equal to zero.

By changing the sampling frequency, you see that the IncreasingTime value can be very different. With the slowest sampling frequency, you miss the decreasing period between A and B. With the fastest polling frequency, this decreasing period is detected. But what if you want to look at this behavior on a real-time scale?

Suppose that the Tx interval is 1 minute, the alert threshold for the IncreasingTime value is 30000 milliseconds (30 seconds), and the polling interval is Tx (every 1 minute). In this case, you get a notification at point A'. This is useless because the queue has decreased between A' and B. In a production environment the number of messages can increase suddenly for 1 minute (or more), but the number of messages can decrease just as rapidly. Again, you get an alert for a non-critical situation.

Now, suppose that the Tx interval is 1 hour, the alert threshold for the IncreasingTime value is 14400000 milliseconds (4 hours), and the polling interval is Tx (every 1 hour). In this case, you will only get a notification at point D. This is a better situation, because you are notified of a non-decreasing situation after 4 hours. But this is too long to wait for a problem to be noticed.

The key is to find a polling interval that quickly detects any non-decreasing situation without raising an alert for temporary increasing situations. A best practice might be to poll every 10 seconds and to set an IncreasingTime threshold value of 1800000 milliseconds (30 minutes). In this case, the flow represented in Figure 13 does not generate an alert.
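The reset rule described above — IncreasingTime grows by one polling interval per sample and falls back to zero only when the message count drops below the lowest level measured since the last reset — can be sketched in a few lines. This is an illustration only (the provider's actual calculation is internal to WMI); the function name and the list-of-samples representation are assumptions:

```ruby
# Illustration (assumed semantics, not the provider's code): derive an
# IncreasingTime value from periodic samples of a queue's message count.
def increasing_times(message_counts, interval_ms)
  lowest = nil       # lowest count measured since the last reset
  value  = 0         # current IncreasingTime, in milliseconds
  message_counts.map do |count|
    if lowest.nil?             # first sample always yields zero
      lowest = count
    elsif count < lowest       # queue drained below the prior low
      lowest = count
      value  = 0
    else                       # queue did not decrease: keep growing
      value += interval_ms
    end
    value
  end
end

# Sampling every minute: the queue grows, briefly drains, then grows again.
p increasing_times([10, 12, 4, 9, 15], 60_000)
# => [0, 60000, 0, 60000, 120000]
```

With a coarser sampling interval the brief drain between two samples is simply never observed, which is exactly why the slow-polling scenarios above accumulate T1+T2 instead of resetting.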
Only real non-decreasing situations longer than 30 minutes will be detected, with a precision of 10 seconds. For complete information about the ExchangeLink and ExchangeQueue WMI classes, including property descriptions, see the Exchange SDK at.

The Exchange 2000 WMI Cluster Provider

The Cluster WMI provider implements only one class directly based on the cluster service, the ExchangeClusterResource WMI class. The State property contains the state of the cluster group. For complete information about the ExchangeClusterResource WMI class, including property descriptions, see the Exchange SDK at.

The Exchange 2000 WMI Message Tracking Provider

The Message Tracking WMI provider implements only one class directly based on the Message Tracking service, the Exchange_MessageTrackingEntry class. For complete information about the Exchange_MessageTrackingEntry WMI class, including property descriptions, see the Exchange SDK at.

The Exchange 2000 WMI DS Access DC Provider

The DS Access WMI provider implements only one class directly based on the DS Access API, the Exchange_DSAccessDC WMI class. For complete information about the Exchange_DSAccessDC WMI class, including property descriptions, see the Exchange SDK at.

ADSI Helper Functions

Note This document does not focus on ADSI. See the MSDN Library or the Compaq Active Answers Web site at for more information about ADSI search operations.

Note Addresses of third-party Web sites referenced in this white paper are provided to help you find the information you need. This information is subject to change without notice. Microsoft in no way guarantees the accuracy of this third-party information.

Getting the Exchange Organization Name

The Exchange Organization name is located in the Active Directory Configuration Naming Context. Only one Exchange 2000 organization in each Windows 2000 forest is supported.
A simple Active Directory Service Interfaces (ADSI) script can retrieve the Exchange organization name by looking in the container dedicated to Exchange 2000. This is the purpose of Sample 50. The script makes a loop (lines 33 to 41) and returns the cn (line 38) of the first msExchOrganizationContainer object class found. If a forest could support many Exchange organizations, this method would not be valid. Under Windows 2000 and Exchange 2000, only one organization is supported. This function is used in Sample 6, in Sample 25, and in Sample 26.

Sample 50 Retrieving the Exchange Organization name

  1:' This VBScript script verifies that Exchange 2000 is installed, and
   ' returns the organization name.
  .:
  8:Option Explicit
  9:
 10:' ------------------------------------------------------------------------
 11:Function GetExchangeOrg ()
 ..:
 ..:
 18:  On Error Resume Next
 19:
 20:  Set ObjRoot = GetObject("LDAP://RootDSE")
 21:  strConfigNC = ObjRoot.Get("configurationNamingContext")
 22:  WScript.DisconnectObject ObjRoot
 23:  Set ObjRoot = Nothing
 24:
 25:  Set objExchangeContainer = GetObject _
      ("LDAP://CN=Microsoft Exchange,CN=Services," & strConfigNC)
 26:
 27:  If Err.Number Then
 28:     ' The Exchange container is not present. Exchange 2000 is
        ' not installed.
 29:     GetExchangeOrg = ""
 30:     Exit Function
 31:  End If
 32:
 33:  For Each objExchangeOrg In objExchangeContainer
 34:      If (objExchangeOrg.Class = "msExchOrganizationContainer") Then
 35:         Wscript.Echo "Found Exchange Organization called '" & _
 36:                      objExchangeOrg.Get ("cn") & "'."
 37:         Wscript.Echo
 38:         GetExchangeOrg = objExchangeOrg.Get ("cn")
 39:         Exit Function
 40:      End If
 41:  Next
 42:
 43:  GetExchangeOrg = ""
 44:
 45:End Function

Querying Active Directory with ADSI and ADO

Querying Active Directory with scripts means using ActiveX® Data Objects (ADO) (see Figure 2). ADO relies on ADSI and an Active Directory OLE DB provider. This OLE DB provider allows you to search Active Directory.
ADO establishes the connection to this provider and transmits a query command. If the data retrieved by the search must be modified after the query, you must use ADSI, not ADO: the Active Directory OLE DB provider is a read-only provider. The purpose of this document is not to focus on ADSI with ADO. The function is included here for reference purposes. It is used in Sample 10, in Sample 12, in Sample 25, and in Sample 26.

Sample 51 Searching in Active Directory

  1:' This VBScript script uses the ADSI OLE DB interface to search Active
   ' Directory.
 ..:
 13:Option Explicit
 14:
 15:' ------------------------------------------------------------------------
 16:Function ADSearch _
 17:   (strNamingContextSearch, strFilterSearch, _
       strAttribsToReturnSearch, strDepthSearch, boolEcho)
 ..:
 ..:
 34:  Set objDictionary = CreateObject ("Scripting.Dictionary")
 35:
 36:  Set objADOConnnection = CreateObject("ADODB.Connection")
 37:  objADOConnnection."
 60:
 61:  objCommand.CommandText = strADsPathSearch & ";" & _
 62:                           strFilterSearch & ";" & _
 63:                           strAttribsToReturnSearch & ";" & _
 64:                           strDepthSearch
 ..:
 ..:
 67:  If boolEcho Then WScript.Echo "Searching ..." & vbCRLF
 68:  Set objRecordSet = objCommand.Execute
 69:
 70:  objDictionary.Add "RecordCount", objRecordSet.RecordCount
 71:
 72:  While Not objRecordSet.EOF
 73:     For intIndice = 0 To objRecordSet.Fields.Count - 1
 ..:
 ..:
 91:        ' Determine whether the returned value is multivalued.
 92:        If IsArray (objRecordSet.Fields(intIndice).Value) Then
 93:           intElements = 0
 94:           For Each varElements In _
                  objRecordSet.Fields(intIndice).Value
 95:               objDictionary.Add _
                      objRecordSet.Fields(intIndice).Name & ":" & _
 96:                  objRecordSet.AbsolutePosition & "/" & _
 97:                  intIndice & ":" & intElements, _
 98:                  varElements
 99:               intElements = intElements + 1
100:           Next
101:        Else
102:           objDictionary.Add _
                  objRecordSet.Fields(intIndice).Name & ":" & _
103:              objRecordSet.AbsolutePosition & "/" & intIndice, _
104:              objRecordSet.Fields(intIndice).Value
105:        End If
106:     Next
...:
...:
110:     objRecordSet.MoveNext
111:  Wend
...:
...:
115:  Set ADSearch = objDictionary
116:
117:  Set objNamingContext = Nothing
...:
...:
128:End Function
https://technet.microsoft.com/en-us/library/cc750307(d=printer).aspx
What is ORM?

ORM connects business objects and database tables to create a domain model where logic and data are presented in one wrapping. In addition, the ORM classes wrap our database tables to provide a set of class-level methods that perform table-level operations. For example, we might need to find the Employee with a particular ID. This is implemented as a class method that returns the corresponding Employee object. In Ruby code, this will look like:

employee = Employee.find(1)

This code will return an employee object whose ID is 1.

Exploring Rhom

Rhom is a mini Object Relational Mapper (ORM) for Rhodes. It is similar to another ORM, Active Record in Rails, but with limited features. Interaction with the database is simplified, as we don't need to worry about which database is being used by the phone. iPhone uses SQLite, and Blackberry uses HSQL or SQLite depending on the device. Now we will create a new model and see how Rhom interacts with the database.

Time for action – Creating a company model

We will create a model company. In addition to a default attribute ID that is created by Rhodes, we will have one attribute name that will store the name of the company. Now, we will go to the application directory and run the following command:

$ rhogen model company name

which will generate the following:

[ADDED] app/Company/index.erb
[ADDED] app/Company/edit.erb
[ADDED] app/Company/new.erb
[ADDED] app/Company/show.erb
[ADDED] app/Company/index.bb.erb
[ADDED] app/Company/edit.bb.erb
[ADDED] app/Company/new.bb.erb
[ADDED] app/Company/show.bb.erb
[ADDED] app/Company/company_controller.rb
[ADDED] app/Company/company.rb
[ADDED] app/test/company_spec.rb

Notice the number of files generated by the rhogen command. Now, we will add a link on the index page so that we can browse it from our homepage. Add a link in the index.erb file for all the phones except Blackberry.
If the target phone is a Blackberry, add this link to the index.bb.erb file inside the app folder. We will have different views for Blackberry.

<li>
  <a href="<%= url_for :controller => :Company %>"><span class="title">Company</span><span class="disclosure_indicator"/></a>
</li>

We can see from the image that a Company link is created on the homepage of our application. Now, we can build our application to add some dummy data. You can see that we have added three companies: Google, Apple, and Microsoft.

What just happened?

We just created a model company with an attribute name, made a link to access it from our homepage, and added some dummy data to it. We added a few companies' names because it will help us in the next section.

Association

Associations are connections between two models, which make common operations simpler and easier for your code. So we will create an association between the Employee model and the Company model.

Time for action – Creating an association between employee and company

The relationship between an employee and a company can be defined as "An employee can be in only one company, but one company may have many employees". So now we will be adding an association between an employee and the company model. After we make entries for the company in the company model, we will be able to see the company select box populated in the employee form. The relationship between the two models is defined in the employee.rb file as:

belongs_to :company_id, 'Company'

Here, Company corresponds to the model name and company_id corresponds to the foreign key. Since at present we have the company field instead of company_id in the employee model, we will rename company to company_id.
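Conceptually, what belongs_to buys us is a foreign-key lookup: the employee record stores a company_id that matches one company's ID, which Rhom exposes through the object attribute. The following plain-Ruby sketch, with invented sample data, illustrates that resolution; it is not Rhom's implementation:

```ruby
# Plain-Ruby illustration (not Rhom itself) of what the belongs_to
# association models: an employee stores a foreign key (company_id)
# matching a company's ID, which Rhom exposes as `object`.
Company  = Struct.new(:object, :name)
Employee = Struct.new(:name, :company_id)

companies = [Company.new('c1', 'Google'), Company.new('c2', 'Apple')]
employee  = Employee.new('John', 'c2')

# Resolving the association is a lookup by foreign key:
employer = companies.find { |c| c.object == employee.company_id }
puts employer.name   # prints "Apple"
```

This same object-equals-foreign-key comparison is what the view code uses to mark the employee's company as selected.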
To retrieve all the companies, which are stored in the Company model, we need to add this line in the new action of the employee_controller:

@companies = Company.find(:all)

The find command is provided by Rhom and is used to form a query and retrieve results from the database. Company.find(:all) will return all the values stored in the Company model in the form of an array of objects. Now, we will edit the new.erb and edit.erb files present inside the Employee folder.

<h4 class="groupTitle">Company</h4>
<ul>
  <li>
    <select name="employee[company_id]">
      <% @companies.each do |company| %>
        <option value="<%= company.object %>" <%= "selected" if company.object == @employee.company_id %>><%= company.name %></option>
      <% end %>
    </select>
  </li>
</ul>

If you observe the code, we have created a select box for selecting a company. We have defined a variable @companies that is an array of objects, and each object has two fields: the company name and its ID. We have created a loop and shown all the companies that are in the @companies object. In the image above, the companies are populated in the select box, which we added before, and it is displayed in the employee form.

What just happened?

We just created an association between the employee and company models and used this association to populate the company select box present in the employee form. As of now, Rhom has fewer features than other ORMs like Active Record, and there is very little support for database associations.

Exploring methods available for Rhom

Now, we will learn the various methods available in Rhom for CRUD operations. Generally, we need to Create, Read, Update, and Delete an object of a model. Rhom provides various helper methods to carry out these operations:

- delete_all: deletes all the rows that satisfy the given conditions.

  Employee.delete_all(:conditions => {'gender' => 'Male'})

  The above command will delete all the male employees.
- destroy: destroys the Rhom object on which it is called.

  @employee = Employee.find(:all).first
  @employee.destroy

  This will delete the first employee object, which is stored in the @employee variable.

- find: returns Rhom object(s) based on arguments. We can pass the following arguments:
  - :all: returns all objects from the model
  - :first: returns the first object
  - :conditions: optional; a hash of attribute/values to match with (i.e. {'name' => 'John'})
  - :order: an optional attribute that is used to order the list
  - :orderdir: an optional attribute that is used to order the list in the desired direction ('ASC' - default, 'DESC')
  - :select: an optional array of strings naming the attributes to be returned with the object
  - :per_page: an optional value that specifies the maximum number of items that can be returned
  - :offset: an optional attribute that specifies the offset from the beginning of the list

  Examples:

  @employees = Employee.find(:all, :order => 'name', :orderdir => 'DESC')

  This will return an array of employee objects ordered by name in descending order.

  Employee.find(:all, :conditions => ["age > 40"], :select => ['name', 'company'])

  This will return the name and company of all the employees whose age is greater than 40.

- new: creates a new Rhom object based on the provided attributes, or initializes an empty Rhom object.

  @company = Company.new({'name' => 'ABC Inc.'})

  This only creates an object of the Company class; it is saved to the database only on explicitly saving it.

- save: saves the current Rhom object to the database.

  @company.save

  This will save the company object to the database and return true or false depending on the success of the operation.

- create: creates a new Rhom object and saves it to the database. This is the fastest way to insert an item into a database.

  @company = Company.create({'name' => 'Google'})

  This will insert a row with the name "Google" in the database.
- paginate: used to display a fixed number of records per page.

      paginate(:page => 1, :per_page => 20)

  This will return records numbered from 21 to 40 (the first page is page 0).

- update_attributes(attributes): updates the specified attributes of the current Rhom object and saves it to the database.

      @employee = Employee.find(:all).first
      @employee.update_attributes({'age' => 23})

  The age of the first employee stored in the database is updated to 23.

We have now covered the basic helper methods available in Rhom that let us perform all the basic operations on the database. Next we will create a page in our application and use the find method to show a filtered result.

Time for action – filtering records by company and gender

We will create a page that allows us to filter all the records based on company and gender, and then use the find command to show the filtered results on the next page. We will follow these steps to create the page:

- Create a link to the filter page on the home page, that is, index.erb in the app folder:

      <li>
        <a href="<%= url_for :controller => :Employee, :action => :filter_employee_form %>">
          <span class="title">Filter Employee</span>
          <span class="disclosure_indicator"/>
        </a>
      </li>

  We can see in the screenshot that a Filter Employee link is created on the home page.

- Create an action filter_employee_form in employee_controller.rb. We use the find helper provided by Rhom to retrieve all the companies and store them in @companies:

      def filter_employee_form
        @companies = Company.find(:all)
      end

- Create a page filter_employee_form.erb in the app/Employee folder and write the following code. The page is divided into three sections: toolbar, title, and content. In the content section we create radio buttons for Gender and a select box listing all the companies; we can select either Male or Female from the radio buttons and one company from the dynamically populated list.
      <div class="pageTitle">
        <h1>Filter Page</h1>
      </div>
      <div class="toolbar">
        <div class="leftItem backButton">
          <a class="cancel" href="<%= url_for :action => :index %>">Cancel</a>
        </div>
      </div>
      <div class="content">
        <form method="POST" action="<%= url_for :controller => :Employee, :action => :filter_employee_result %>">
          <h4 class="groupTitle">Gender</h4>
          <ul>
            <li>
              <label for="gender">Male</label>
              <input type="radio" name="gender" value="Male"/>
            </li>
            <li>
              <label for="gender">Female</label>
              <input type="radio" name="gender" value="Female"/>
            </li>
          </ul>
          <h4 class="groupTitle">Company</h4>
          <ul>
            <li>
              <select name="company_id">
                <% @companies.each do |company| %>
                  <option value="<%= company.object %>"><%= company.name %></option>
                <% end %>
              </select>
            </li>
          </ul>
          <input type="submit" class="standardButton" value="Filter" />
        </form>
      </div>

- Create an action filter_employee_result in employee_controller.rb. The :conditions symbol in the find statement specifies the condition for the database query, and @params is a hash that contains the selections made by the user in the filter form: @params['gender'] and @params['company_id'] hold the gender and company ID selected on the filter page.

      def filter_employee_result
        @employees = Employee.find(:all, :conditions => {'gender' => @params['gender'], 'company_id' => @params['company_id']})
      end

- Create a file called filter_employee_result.erb and place it in the app/Employee folder:
      <div class="pageTitle">
        <h1>Filter by Company and Gender</h1>
      </div>
      <div class="toolbar">
        <div class="leftItem regularButton">
          <a href="<%= Rho::RhoConfig.start_path %>">Home</a>
        </div>
        <div class="rightItem regularButton">
          <a class="button" href="<%= url_for :action => :new %>">New</a>
        </div>
      </div>
      <div class="content">
        <ul>
          <% @employees.each do |employee| %>
            <li>
              <a href="<%= url_for :action => :show, :id => employee.object %>">
                <span class="title"><%= employee.name %></span>
                <span class="disclosure_indicator"></span>
              </a>
            </li>
          <% end %>
        </ul>
      </div>

This result page is again divided into three sections: toolbar, title, and content. All the employees filtered by the selections made on the filter page are stored in @employees and displayed inside the content section of this page.

What just happened?

We created a filter page to filter all the employees by their gender and company. Then, using the find method of Rhom, we retrieved the employees matching the specified gender and company and displayed the results on a new page.

Have a go hero – find(*args) advanced

So far we have used the find helper to write only simple queries. To write advanced queries, Rhom provides an advanced form of find(*args). A normal query looks like this:

    @employees = Employee.find(:all, :conditions => {'gender' => @params['gender'], 'company_id' => @params['company_id']})

This can also be written as:

    @employees = Employee.find(:all,
      :conditions => {
        {:name => 'gender', :op => 'like'} => @params['gender'],
        {:name => 'company_id', :op => 'like'} => @params['company_id']},
      :op => 'AND')

The advantage of the latter form is that we can add advanced options to our query. Let's say we want to create a hash condition for the following SQL:

    find(:all,
      :conditions => ["LOWER(description) like ? or LOWER(title) like ?", query, query],
      :select => ['title', 'description'])

It can be written in this way:

    find(:all,
      :conditions => {
        {:func => 'LOWER', :name => 'description', :op => 'LIKE'} => query,
        {:func => 'LOWER', :name => 'title', :op => 'LIKE'} => query},
      :op => 'OR',
      :select => ['title', 'description'])

How Rhodes stores data

As we have already discussed, iPhone and Android use SQLite. BlackBerry uses SQLite on the devices that support it; otherwise it uses the HSQL database. But how does Rhom store data, and how can we handle migration? Rhodes provides two ways to store data on a phone:

- Property Bag
- Fixed Schema

Property Bag

Property Bag is the default option for our models. In Property Bag, the entire data is stored in a single table with a fixed number of columns. The table contains the following columns:

- source_id
- attribute
- object
- value
- update_type

When you use the Property Bag model, you don't have to track schema changes (adding or removing attributes). However, Rhodes uses an internal Property Bag schema to store app data in a SQL database. If this internal schema changes after an application is updated or reloaded, the database will be (re)created and all existing data will be erased. See rhodes\lib\rhodes.rb and rhodes\lib\framework\rhodes.rb for the internal database schema version:

    DBVERSION = '2.0.3'

On the first launch of the application after it is installed, updated, or reloaded, the database will be (re)created if app_db_version in rhoconfig.txt differs from its previous value. If the database version is changed and the database is recreated, all data in the database will be erased. Since Rhodes 2.2.4, the RhoSync session is kept in the database, so SyncEngine.logged_in will return true. At the start of the application we can check whether the database is empty and the user is still logged in, and then run sync without an interactive login.
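The Property Bag layout described above (one row per attribute, in a single shared table with source_id, attribute, object, and value columns) can be illustrated in plain Ruby. This is a simplified simulation of the storage idea, not Rhodes' actual code, and the sample record is made up:

```ruby
# Simplified illustration of the Property Bag idea: every attribute of a
# model object becomes its own row in one shared table. Not Rhodes code.
def to_property_bag_rows(source_id, object_id, attributes)
  attributes.map do |attr, value|
    { 'source_id' => source_id, 'attribute' => attr.to_s,
      'object' => object_id, 'value' => value.to_s }
  end
end

rows = to_property_bag_rows(1, 'emp-1',
                            'name' => 'John', 'gender' => 'Male', 'age' => 45)
rows.each { |r| puts r.inspect }
# One object with three attributes becomes three rows, which is why a
# Property Bag table grows much faster than a Fixed Schema table, where
# the same object would occupy a single row.
```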
The application database version in rhoconfig.txt:

    app_db_version = '1.0'

We can list a few advantages and disadvantages of the Property Bag:

Advantages

- It is simple to use; attributes are not required to be specified before use
- We don't need to migrate data if we add or remove attributes

Disadvantages

- Its size is three times bigger than the Fixed Schema
- It is slower while synchronizing

Fixed Schema model

While using the Fixed Schema model, the developer is entirely responsible for the structure of the SQL schema. So when you add or delete some properties, or just change the app logic, you may need to perform a data migration or a database reset. To track schema changes, use the schema_version parameter in the model:

    class Employee
      include Rhom::FixedSchema
      set :schema_version, '1.1'
    end

We can see that we have set the schema version to 1.1. Now, if we change the schema, we have to change the version in the model. We can list a few advantages and disadvantages of the Fixed Schema:

Advantages

- Smaller size; you can specify an index for only the required attributes
- Faster sync time than Property Bag

Disadvantage

- You have to support all schema changes yourself

The class above shows how a model looks in a Fixed Schema. To add a column to the table, we have to add a property and then reset the device. It is important to note that we have to reset the database for the changes to take effect. To create an index in a Fixed Schema model, we write the following line in the model:

    index :by_name_tag, [:name, :tag] # will create an index for the name and tag columns

To create a unique index (a primary key) in the Fixed Schema, we write the following line in the model:

    unique_index :by_phone, [:phone]

Choosing between Property Bag and Fixed Schema depends on your requirements: if your schema is not fixed and keeps changing, use Property Bag; if you have a large amount of data and a non-changing schema, use Fixed Schema.
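The schema_version bookkeeping described above can be sketched as follows. This is an illustrative plain-Ruby simulation of the version-tracking idea (comparing a stored version against the model's declared version and deciding to rebuild), not the actual Rhodes implementation:

```ruby
# Illustrative sketch: when the version recorded at table-creation time
# differs from the model's declared schema_version, the table must be
# rebuilt (losing local data unless you migrate it first). Not Rhodes code.
def schema_up_to_date?(stored_version, model_version)
  stored_version == model_version
end

stored = '1.0'   # version recorded when the table was created
model  = '1.1'   # set :schema_version, '1.1' in the model

unless schema_up_to_date?(stored, model)
  puts "Schema changed from #{stored} to #{model}: reset or migrate the table"
end
```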
Summary

We have covered the following topics in this article:

- What ORM is
- What Rhom is
- What an association is
- The various commands provided by Rhom
- The difference between Property Bag and Fixed Schema

Further resources on this subject:

- Rhomobile FAQs [Article]
- An Introduction to Rhomobile [Article]
- Getting Started with Internet Explorer Mobile [Article]
- jQuery Mobile: Organizing Information with List Views [Article]
- jQuery Mobile: Collapsible Blocks and Theming Content [Article]
- Fundamentals of XHTML MP in Mobile Web Development [Article]
Hello all! So I have an assignment to create a matrix multiplication program using classes (no friend functions). To be quite frank, I am completely lost and have no idea what I'm doing here. The directions state that the class's private section holds 3 matrices, and also that I need three member functions: one to initialize any program arrays, one that inputs the data, and one that calculates the matrix multiplication. I would also have to make the program input the matrices row by row, as well as ask for a repeat. Any help would be greatly appreciated, as I am thoroughly confused with C++.

    #include <iostream>
    #include <cstdlib>
    using namespace std;

    class Matrix
    {
        int a[3][3];
        int b[3][3];
        int c[3][3];
    public:
        void Mult();
        void InputMatrix();
        void OutputMatrix();
    };

    void Matrix::InputMatrix()
    {
        cout << "Enter the values for the first matrix:\n";
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                cin >> a[i][j];

        cout << "Enter the values for the second matrix:\n";
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                cin >> b[i][j];
    }

    void Matrix::Mult()
    {
        for (int i = 0; i < 3; i++)
        {
            for (int j = 0; j < 3; j++)
            {
                c[i][j] = 0;
                for (int k = 0; k < 3; k++)
                    c[i][j] += a[i][k] * b[k][j];
            }
        }
    }

    void Matrix::OutputMatrix()
    {
        cout << "The resultant matrix is:\n";
        for (int i = 0; i < 3; i++)
        {
            for (int j = 0; j < 3; j++)
                cout << c[i][j] << ' ';
            cout << endl;
        }
    }

    int main()
    {
        Matrix x;
        x.InputMatrix();
        x.Mult();
        x.OutputMatrix();
        system("pause"); // keeps the console window open on Windows
        return 0;
    }
This program prints the reverse of a number, i.e. if the input is 951 then the output will be 159.

Java programming source code

    import java.util.Scanner;

    class ReverseNumber
    {
        public static void main(String args[])
        {
            int n, reverse = 0;

            System.out.println("Enter the number to reverse");
            Scanner in = new Scanner(System.in);
            n = in.nextInt();

            while (n != 0)
            {
                reverse = reverse * 10;     // shift the digits collected so far left
                reverse = reverse + n % 10; // append the last digit of n
                n = n / 10;                 // drop the last digit of n
            }

            System.out.println("Reverse of entered number is " + reverse);
        }
    }

Output of program:

    Enter the number to reverse
    951
    Reverse of entered number is 159

You can also reverse or invert a number using recursion. You can use this code to check whether a number is a palindrome: if the reverse of an integer is equal to the integer, then it's a palindrome number; otherwise it is not.